NURBS trimmed surface

I am reading NURBS surfaces from a STEP file, as well as their boundary curves. Now I want to tessellate those surfaces.
Every algorithm I have read about talks about boundary curves in parametric space: a curve with a parameter t that maps onto a 2D coordinate (u, v), the parametric coordinates of the surface.
The problem is that in the STEP file the boundary curves are defined in world space. My question is: is there an efficient way to transform a curve that lies on a surface from world space to parametric space?
The only way I can think of is to generate lots of points from that curve and then fit a new curve in parametric space, but I suspect there is a more efficient way to do this, given that the curve lies on the surface.
Thanks

If the 3D boundary curves were exactly the 3D images of 2D boundary curves in the parametric domain (u, v), then there might be a direct way to recover those 2D curves from the given 3D ones. However, this is very often not the case: for a bi-cubic surface, the exact 3D image of a 2D boundary curve of degree 3 has degree 18, so it is unlikely that any CAD software represents these 3D boundary curves exactly. Most of the time they are just approximations that lie close enough to the surface within a certain tolerance. So, if you do not have the information for the 2D boundary curves, in general you do need to do curve fitting in the parametric domain. The procedure is to sample points from the 3D curve, project them onto the surface to find the corresponding (u, v) values, and then fit a curve through those (u, v) values. Of course, there are special cases where you can use simplified algorithms, for example when the 3D curve matches an isoparametric curve of the surface.
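A minimal sketch of that sampling-and-fitting procedure, assuming the curve and surface are available as Python callables (the names curve3d and surface, the generic SciPy minimiser used for the projection, and the tolerance values are illustrative assumptions, not part of any STEP toolkit):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import splprep

def pull_back_boundary(curve3d, surface, n_samples=200, degree=3):
    """Approximate a 3D boundary curve as a 2D curve in the surface's
    (u, v) parameter domain by sampling, projecting and refitting.

    curve3d : callable t -> 3D point, with t in [0, 1]
    surface : callable (u, v) -> 3D point, with (u, v) in [0, 1]^2
    """
    uv = []
    guess = np.array([0.5, 0.5])
    for t in np.linspace(0.0, 1.0, n_samples):
        p = curve3d(t)
        # Project the sample onto the surface: minimise |S(u, v) - p|^2.
        # Warm-start from the previous foot point, since the samples are
        # ordered along the curve. (A production implementation would use
        # Newton point inversion instead of a generic minimiser.)
        res = minimize(lambda x: np.sum((surface(x[0], x[1]) - p) ** 2),
                       guess, bounds=[(0.0, 1.0), (0.0, 1.0)])
        guess = res.x
        uv.append(res.x)
    uv = np.asarray(uv)
    # Least-squares fit of a B-spline through the (u, v) samples.
    tck, _ = splprep([uv[:, 0], uv[:, 1]], k=degree, s=1e-10)
    return tck  # (knot vector, 2D control points, degree)
```

The fitting tolerance s should be chosen consistently with the tolerance to which the 3D curves actually lie on the surface.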

Related

Barycentric coordinates. Harmonic warping of a point relative to a concave polygon in C

I'm trying to get an array of weights that represents the influence a polygon's vertices have on an arbitrary position inside it, so that I can interpolate the vertices of a deformed version of the polygon and get the corresponding deformed position.
Mean Value and Harmonic warping: [image]
It seems that Harmonic coordinates would do this? My mesh goal: [image]
I don't have an easy time reading math papers. I found this MeshLab article, but I'm still not grasping how to process each sampled position relative to the polygon's vertices.
Thanks!
You could try to create a Delaunay triangulation of the polygon and then use barycentric coordinates within each triangle. This mapping is well defined and continuous, but in most cases probably not smooth (i.e. the derivative is not continuous).
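A small sketch of that idea using SciPy (the function name polygon_weights is mine; note that Delaunay triangulates the convex hull, so for a concave polygon you would need to discard triangles that fall outside the shape):

```python
import numpy as np
from scipy.spatial import Delaunay

def polygon_weights(polygon, point):
    """Barycentric weights of `point` w.r.t. the polygon's vertices,
    via a Delaunay triangulation of the vertex set."""
    tri = Delaunay(polygon)
    simplex = tri.find_simplex(point)
    if simplex < 0:
        raise ValueError("point is outside the triangulation")
    # SciPy stores the inverse affine map of each triangle; applying it
    # gives the first two barycentric coordinates, the third is the rest.
    T = tri.transform[simplex]
    b = T[:2] @ (point - T[2])
    bary = np.append(b, 1.0 - b.sum())
    weights = np.zeros(len(polygon))
    weights[tri.simplices[simplex]] = bary
    return weights

# Deformed position = weights @ deformed_vertices
poly = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
w = polygon_weights(poly, np.array([0.5, 0.5]))
```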

Control points in nurbs surfaces

So I've read a lot about NURBS recently and completely understand NURBS curves (I even wrote a small library for them). But I'm having some problems with surfaces. I can see that I need two sets of control points. My question is: what is the difference between the points in these two sets?
Can anybody briefly explain it or give me some link that does?
I think my favorite way of understanding NURBS surfaces (if you already understand NURBS curves) is beads on a wire.
So, let's look at the much simpler example of a Bezier surface (I assume that if you understand NURBS curves, you understand Bezier curves).
A cubic Bezier curve has 4 control points. Imagine a Bezier curve which is just a smooth horizontal curve. You can compute any point on that curve given a parameter value (usually called t): just plug t into the parametric equation of the curve, and a point is produced.
Now imagine you have 4 horizontal Bezier curves, each one is above the other. If you plug the same parameter value into all 4 curves, you get 4 points, one for each curve. Those are the beads on the wires. Let's call the parameter value for the horizontal curves 's'.
Take those 4 "bead" points and treat them as the control points of a vertical curve. Evaluate that curve at another parameter value (this one we'll call 't', like usual) and it will give you a point. That point is on the surface. Specifically, that's the point P(s,t).
So, given a 4x4 grid of control points, you can use beads on a wire to compute points on the surface. As s changes, the beads slide along their wires and sweep out different vertical curves; the set of all those curves is the surface.
You can do the exact same thing with NURBS curves: you just need a knot vector for s, another knot vector for t, and a grid of control points.
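Here is the beads-on-a-wire idea as a small Python sketch for the Bezier case (the helper names are mine; a NURBS version would evaluate the row and column curves with knot vectors and weights instead of de Casteljau):

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t (de Casteljau's algorithm)."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_surface_point(grid, s, t):
    """'Beads on a wire': evaluate each row curve at s, then treat the
    resulting points as control points of a column curve evaluated at t."""
    beads = [de_casteljau(row, s) for row in grid]  # one bead per wire
    return de_casteljau(beads, t)

# 4x4 grid of 3D control points -> point on the bicubic patch
grid = np.array([[[x, y, (x - 1.5) * (y - 1.5)] for x in range(4)]
                 for y in range(4)], dtype=float)
p = bezier_surface_point(grid, 0.3, 0.7)
```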
For a NURBS surface you don't need two separate sets of control points; you need a two-dimensional grid or mesh of control points. This mesh will have n rows and m columns, and each point in the mesh will have x, y and z coordinates as well as a w value, the NURBS weight for that point.

Does glRotate in OpenGL rotate the camera, rotate the world axis, or rotate the model object?

I want to know whether glRotate rotates the camera, the world axis, or the object. Please explain how these differ, with examples.
the camera
There is no camera in OpenGL.
the world axis
There is no world in OpenGL.
or the object.
There are no objects in OpenGL.
Confused?
OpenGL is a drawing system that operates on points, lines and triangles. There is no concept of a scene or a world in OpenGL. All there is are vertices, each with a set of attributes, and the state of OpenGL, which determines how vertices are turned into pixels.
The very first stage of this process is getting the vertex positions into the viewport. In the fixed-function pipeline (i.e. without shaders), each vertex position is first multiplied by the so-called "modelview" matrix, the intermediate result is used for lighting calculations, and it is then multiplied by the "projection" matrix. After that, clipping and normalization into viewport coordinates are applied.
Those two matrices serve two purposes. The first one, the "modelview" matrix, applies a transformation to the incoming vertices so that they end up in the desired spot relative to the origin. There is no difference between first moving geometry to some place in the world and then moving the viewpoint within the world, or keeping the viewpoint at the origin and moving the whole world in the opposite direction: all of this can be described by the modelview matrix.
The second one, the "projection" matrix, works together with the normalization process to behave like a kind of "lens", so to speak. With this you set the field of view (and a few other parameters, like shift, which you need for certain applications; don't worry about those).
The interesting thing about matrices is that they are non-commutative, i.e. for two given matrices M, N:
M * N ≠ N * M (for most M, N)
This ultimately means that you can compose a series of transformations A, B, C, D, ... into one single compound transformation matrix T by multiplying the primitive transformations onto each other in the right order.
The OpenGL matrix manipulation functions (which are obsolete, BTW) do just that. You have a matrix selected for manipulation (the matrix mode), for example the modelview matrix M. Then glRotate effectively does this:
M *= R(angle,axis)
i.e. the active matrix gets multiplied by a rotation matrix constructed from the given parameters. It works similarly for scale and translate.
Whether this appears to behave like moving a camera or like placing an object depends entirely on how, and in which order, those manipulations are combined.
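To make the order-dependence concrete, here is the same pair of transformations composed both ways, using plain NumPy matrices in place of the (obsolete) OpenGL calls noted in the comments:

```python
import numpy as np

def rotation_z(deg):
    """4x4 rotation about the z axis (what glRotatef(deg, 0, 0, 1) appends)."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def translation(tx, ty, tz):
    """4x4 translation (what glTranslatef(tx, ty, tz) appends)."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

v = np.array([1.0, 0.0, 0.0, 1.0])          # a vertex in homogeneous coords

# glTranslatef(2,0,0); glRotatef(90,0,0,1)  -> "place an object": the vertex
# is rotated about its own origin, then the result is moved.
object_like = translation(2, 0, 0) @ rotation_z(90)

# glRotatef(90,0,0,1); glTranslatef(2,0,0)  -> "move a camera": the vertex is
# first moved, then everything swings around the global origin.
camera_like = rotation_z(90) @ translation(2, 0, 0)

print(object_like @ v)   # [2, 1, 0, 1]
print(camera_like @ v)   # [0, 3, 0, 1]
```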
But to OpenGL there are just numbers/vectors (vertex attributes), which somehow translate into 2-dimensional viewport coordinates that get drawn as points, or filled in between as lines or triangles.
glRotate works on the current matrix. So it depends on whether that matrix is the camera one or a world transformation one. To learn more about the current matrix, have a look at glMatrixMode().
Finding examples is just a matter of googling: I found this one, which should help you figure out what's happening.

How can I test if a point lies within a 3d shape with its surface defined by a point cloud?

I have a collection of points which describe the surface of a shape that should be roughly spherical, and I need a method to determine whether any other given point lies within this shape. I've previously been approximating the shape as an exact sphere, but this has proven too inaccurate and I need a more accurate method. Simplicity and speed are preferable to complete accuracy; a good approximation will suffice.
I've come across techniques for converting a point cloud to a 3d mesh, but most things I have found have been very complicated, and I am looking for something as simple as possible.
Any ideas?
What if you computed the centroid of the cloud, and converted the cloud's coordinates to a polar system whose origin is that centroid?
Then, convert the point you want to examine to the same coordinate system.
Assuming the surface is representable by a Delaunay triangulation, determine the three points with the smallest difference in angle from the point you're examining.
Project the point you're examining onto the triangle determined by those three points, and see if the distance of the projected point from the centroid is larger than the distance of the actual point.
Essentially, you're constructing a triangular mesh of the convex hull as needed, one triangle at a time. If execution speed really matters, you might cache the resulting triangles as you go.
Steven Sudit has also suggested a useful optimization that I'd recommend if you go down this path.
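A rough sketch of this approach in Python/NumPy; the star-shaped-cloud assumption and the ray/plane formulation are my reading of the answer, not code from it:

```python
import numpy as np

def is_inside(cloud, point, eps=1e-12):
    """Approximate containment test for a roughly spherical (star-shaped)
    cloud: find the three surface points whose direction from the centroid
    is closest to the query direction, and compare distances along the ray."""
    centroid = cloud.mean(axis=0)
    dirs = cloud - centroid
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    d = point - centroid
    dist = np.linalg.norm(d)
    if dist < eps:
        return True          # the centroid itself is inside
    d /= dist

    # Smallest angle = largest dot product with the query direction.
    idx = np.argsort(dirs @ d)[-3:]
    a, b, c = cloud[idx] - centroid

    # Where does the ray centroid -> point pierce the triangle's plane?
    n = np.cross(b - a, c - a)
    denom = n @ d
    if abs(denom) < eps:
        return False         # degenerate triangle / grazing ray
    t = (n @ a) / denom      # surface distance along the ray
    return 0 < dist <= t
```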
I think Bill Carey's method is on the right track, but I do want to suggest a possible optimization.
Since the shape is roughly spherical, you can pre-calculate the radius of the sphere bounded by it and of the sphere that bounds it. This way, if the point's distance is within the smaller sphere, it's a definite hit, and if it's outside the outer sphere, it's a definite miss.
This ought to let you resolve the easy cases very quickly. For the harder ones, Carey's method takes over.
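A sketch of that quick accept/reject test (function names are mine):

```python
import numpy as np

def sphere_bounds(cloud):
    """Precompute centroid and inner/outer radii from the cloud (done once)."""
    centroid = cloud.mean(axis=0)
    d = np.linalg.norm(cloud - centroid, axis=1)
    return centroid, d.min(), d.max()

def classify_fast(point, centroid, r_inner, r_outer):
    """True = definite hit, False = definite miss, None = run the full test."""
    d = np.linalg.norm(point - centroid)
    if d <= r_inner:
        return True
    if d >= r_outer:
        return False
    return None
```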
Use a kd-tree.
http://en.wikipedia.org/wiki/Kd-tree
The article provides a good explanation.
I can clear up any further misunderstandings.
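For example, with SciPy's k-d tree the nearest surface samples can be found in logarithmic time; those neighbours could then feed a local test such as the triangle check sketched above:

```python
import numpy as np
from scipy.spatial import cKDTree

cloud = np.random.randn(1000, 3)    # stand-in for the surface samples
tree = cKDTree(cloud)               # build once
dists, idx = tree.query([0.1, 0.2, 0.3], k=3)   # three nearest surface points
```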

Transform shape built of contour splines to simple polygons

I've dumped glyphs from a TrueType file so I can play with them. They have shape contours that consist of quadratic Bezier curves and lines. I want to output triangles for such shapes so I can visualize them for the user.
Traditionally I might use libfreetype or scan-rasterise these kinds of contours. But I want to produce extruded 3D meshes from the fonts and apply other distortions to them.
So, how do I polygonise shapes consisting of quadratic Bezier curves and lines? Many contours together form the shape. Some contours are additive and others are subtractive. The contours are never open; each forms a loop.
(Actually, I only get contour vertices from TTF glyphs; those vertices define whether they are on the curve or not. Even though it is easy to decompose these into Bezier curves and lines, knowing the data is represented this way may be helpful for polygonizing the contours into triangles.)
This is simple. You need to implement boolean operations on your curves, then proceed by joining pairs of curves until you are left with a single curve.
First, you need to evaluate the curves and convert them to polylines.
Then you need to make sure that there is a vertex at every place where two contours intersect (this part can actually be tricky due to numeric errors; you can use the Bentley-Ottmann algorithm).
Finally, all you need to do is to traverse the curves and connect them in the correct order to perform the boolean operation, producing weakly simple polygons.
Such polygons can be triangulated using e.g. the ear clipping algorithm (which is slow, but rather simple to implement).
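As a sketch of the first step, here is one way to flatten a quadratic Bezier segment into a polyline by recursive subdivision (the flatness tolerance and function name are my choices):

```python
import numpy as np

def flatten_quadratic(p0, p1, p2, tol=0.1):
    """Recursively subdivide a quadratic Bezier until it is flat enough,
    returning a polyline from p0 to p2."""
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    # Flatness test: distance of the control point from the chord p0 -> p2.
    chord = p2 - p0
    v = p1 - p0
    n = np.linalg.norm(chord)
    dev = abs(chord[0] * v[1] - chord[1] * v[0]) / n if n > 0 else np.linalg.norm(v)
    if dev <= tol:
        return [p0, p2]
    # de Casteljau split at t = 0.5
    q0, q1 = (p0 + p1) / 2, (p1 + p2) / 2
    mid = (q0 + q1) / 2
    left = flatten_quadratic(p0, q0, mid, tol)
    right = flatten_quadratic(mid, q1, p2, tol)
    return left[:-1] + right    # drop the duplicated midpoint
```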
Hope this helps ...
