So I've read a lot about NURBS recently and I completely understand NURBS curves (I even wrote a small library for them). But I'm having some problems with surfaces. I can see that I need two sets of control points. My question is: what is the difference between the points in these two sets?
Can anybody briefly explain it or give me some link that does?
I think my favorite way of understanding NURBS surfaces (if you already understand NURBS curves) is beads on a wire.
So, let's look at the much simpler example of a Bezier surface (I assume if you understand NURBS curves you understand Bezier curves).
A cubic Bezier curve has 4 control points. Imagine a Bezier curve which is just a smooth horizontal curve. You can compute any point on that curve given a parameter value (usually called t): just plug t into the parametric equation of the curve, and a point is produced.
Now imagine you have 4 horizontal Bezier curves, each one is above the other. If you plug the same parameter value into all 4 curves, you get 4 points, one for each curve. Those are the beads on the wires. Let's call the parameter value for the horizontal curves 's'.
Take those 4 "bead" points and treat them as the control points of a vertical curve. Evaluate that curve at another parameter value (this one we'll call 't', like usual) and it will give you a point. That point is on the surface. Specifically, that's the point P(s,t).
So, given a 4x4 grid of control points, you can use beads on a wire to compute points on the surface. As s changes, the beads sweep out different curves along the wires; the set of all those curves is the surface.
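For concreteness, here is a minimal Python sketch of that beads-on-a-wire evaluation for a bicubic Bezier patch (the 4x4 grid and the de Casteljau evaluator are illustrative choices of mine, not something from the answer above):

```python
def bezier_point(ctrl, t):
    """Evaluate a cubic Bezier curve at parameter t (de Casteljau)."""
    pts = [list(p) for p in ctrl]
    for level in range(1, len(pts)):
        for i in range(len(pts) - level):
            pts[i] = [(1 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return pts[0]

def bezier_surface_point(grid, s, t):
    """grid is 4 rows of 4 control points; each row is one horizontal 'wire'."""
    beads = [bezier_point(row, s) for row in grid]   # one bead per wire at parameter s
    return bezier_point(beads, t)                    # vertical curve through the beads

# a gently bumped 4x4 patch over a small grid (z = 1 at one interior control point)
grid = [[(x, y, 1.0 if (x, y) == (1, 1) else 0.0) for x in range(4)] for y in range(4)]
print(bezier_surface_point(grid, 0.5, 0.5))          # a point P(s, t) on the surface
```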
You can do the exact same thing with NURBS curves. You just need a knot vector for s, another knot vector for t, and a grid of control points.
For a NURBS surface you don't need two sets of control points; you need a two-dimensional grid or mesh of control points. This mesh will have n rows and m columns, and each point in the mesh will have x, y and z coordinates as well as a w value, the NURBS weight for that point.
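To make that concrete, here is a small Python sketch of the data such a surface carries (the class and field names are my own, just to show the structure; they are not a standard API):

```python
from dataclasses import dataclass

@dataclass
class NurbsSurface:
    degree_u: int
    degree_v: int
    knots_u: list    # knot vector for u, length n + degree_u + 1
    knots_v: list    # knot vector for v, length m + degree_v + 1
    points: list     # n rows x m columns of (x, y, z) control points
    weights: list    # n rows x m columns of w, one weight per control point

# e.g. a degree 3 x 3 surface with a 4x4 grid of unit-weight control points
surf = NurbsSurface(
    degree_u=3, degree_v=3,
    knots_u=[0, 0, 0, 0, 1, 1, 1, 1],
    knots_v=[0, 0, 0, 0, 1, 1, 1, 1],
    points=[[(x, y, 0.0) for x in range(4)] for y in range(4)],
    weights=[[1.0] * 4 for _ in range(4)],
)
```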
I have read through the "Dirty Little Secrets of NURBS" article by Pilot3d (http://www.pilot3d.com/NurbSecrets.htm) and was intrigued by the surface-located control points.
It does explain that each control point has a corresponding surface point, but it doesn't go as far as explaining how those surface points are found, or how moving a surface point translates back into moving the original control points. If I had to guess, you would find the surface point for a control point by looking for the point on the surface where that control point's contribution is at its maximum. I'm not sure about converting changes back to the original control points.
I've somewhat figured this out just by thinking about it.
If you consider the general NURBS equation:

C(u) = ( Σ_i N_i,n(u) * w_i * P_i ) / ( Σ_i N_i,n(u) * w_i )
Let's say C(u_pi) is the point on the surface associated with a control point P_i (how you choose this point is technically arbitrary, but it seems that the surface point closest to the control point produces the best results), and that you would like to move it by a vector M.
So now you need to find the new P_i that takes into account this translation. If we take the general equation and subtract the contributions from all the control points except P_i (the control point we are interested in) then we get the following equation (assuming all weights are 1):
N_i,n * P_i + M = N_i,n * (P_i+P_idelta)
Then we can quite easily see that:
M = N_i,n * P_idelta
And hence you can control the shape of a NURBS surface by moving points on the surface rather than control points. The disadvantage of this method is that nearby surface points will also move, although not at the same rate. You can quite easily control the spread of the effect by spreading the delta across several control points.
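Here is a minimal Python sketch of that last step, assuming you already have the basis function value N_i,n at the chosen surface point (the helper and variable names are hypothetical):

```python
def move_surface_point(P_i, M, basis_value):
    """New control point so the chosen surface point moves by M (all weights = 1)."""
    # From M = N_i,n * P_idelta  =>  P_idelta = M / N_i,n
    return tuple(p + m / basis_value for p, m in zip(P_i, M))

P_i  = (1.0, 2.0, 0.0)   # original control point
M    = (0.0, 0.0, 0.5)   # desired movement of the surface point C(u_pi)
N_in = 0.7               # N_i,n evaluated at u_pi (from your NURBS library)
print(move_surface_point(P_i, M, N_in))   # -> (1.0, 2.0, ~0.714)
```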
I am reading NURBS surfaces from a STEP file, as well as their boundary curves. Now I want to tessellate those surfaces.
Every algorithm I have read talks about boundary curves in parametric space: a curve with a parameter t that maps onto a 2D coordinate (u, v), the parametric coordinates of the surface.
The problem is that in the STEP file the boundary curves are defined in world space. My question is: is there an efficient way to transform a curve that lies on a surface from world space to parametric space?
The only way I can think of is to generate lots of points from that curve, and then fit a new curve in parametric space, but I guess that there is a more efficient way to do this, knowing that the curve lies on the surface.
Thanks
If the 3D boundary curves were exactly the 3D mapping of the 2D boundary curves in the parametric domain (u, v), then perhaps there would be a better way to compute these 2D boundary curves from the given 3D boundary curves. However, very often this is not the case. For a bi-cubic surface, the exact 3D boundary curve mapped from a 2D boundary curve of degree 3 is of degree 18, so it is unlikely that any CAD software represents these 3D boundary curves exactly. Most of the time they are just approximations that lie close to the surface only within a certain tolerance. So, if you do not have the information for the 2D boundary curves, in general you do need to do curve fitting in the parametric domain. The procedure is to sample points from the 3D curve, project them onto the surface to find the corresponding (u, v) values, and then fit a curve through these (u, v) values. Of course, there are special cases where you can use simplified algorithms, for example when the 3D curve matches an isoparametric curve of the surface.
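As a rough Python sketch of that sample-project-fit procedure: surface_point(u, v) and curve_point_3d(t) stand in for your own surface and curve evaluators and are assumptions here, not real library calls.

```python
import numpy as np
from scipy.optimize import minimize

def project_to_surface(p, surface_point, uv0=(0.5, 0.5)):
    """Find (u, v) minimizing the distance from surface_point(u, v) to the 3D point p."""
    res = minimize(lambda uv: np.sum((np.asarray(surface_point(*uv)) - p) ** 2),
                   uv0, bounds=[(0.0, 1.0), (0.0, 1.0)])
    return res.x

def boundary_to_parametric(curve_point_3d, surface_point, n_samples=50):
    """Sample the 3D boundary curve and project each sample into the (u, v) domain."""
    ts = np.linspace(0.0, 1.0, n_samples)
    uv = [project_to_surface(np.asarray(curve_point_3d(t)), surface_point) for t in ts]
    return np.array(uv)   # then fit a 2D curve through these (u, v) samples
```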
I have read lots of ray tracer algorithms on the web, but I have no clear understanding of shading and shadows. Is the pseudocode below, written according to my understanding, correct?
for each primitive
    check for intersection
    if there is one
        do color be half of the background color
        Ishadow = true
        break
for each ambient light in environment
    calculate light contribution to the color
if ( Ishadow == false )
    for each point light
        calculate diffuse shading
        calculate reflection direction
        calculate specular light
trace for reflection ray // (i)
add color returned from i after multiplied by some coefficient
trace for refraction ray // (ii)
add color returned from ii after multiplied by some coefficient
return color value calculated until this point
You should integrate your shadows with the normal ray-tracing path:
For every screen pixel you send a ray through the scene and determine the closest object intersection. At that intersection point you first read out the surface color (the object's texture at that point) and compute the reflection vector etc. (using the normal vector). In addition, you now cast a ray from that intersection point to each of the light sources in your scene: if such a ray intersects another object before it reaches the light source, then the intersection point is in shadow with respect to that light, and you can adapt the final color of that point accordingly.
The trouble with pseudocode is that it is easy to get "pseudo" enough that it becomes the same well of ambiguity that we are trying to avoid by getting away from natural languages. "Color be half of the background color?" The fact that this line appears before you iterate through your light sources is confusing. How can you be setting Ishadow before you iterate over light sources?
Maybe a better description would be:
given a ray in space
find nearest object with which ray intersects
for each point light
    if normal at surface of intersected object points toward light (use dot product for this)
        cast a ray into space from the surface toward the light
        if ray intersection is closer than light*
            light is shadowed at this point
*If you're seeing strange artifacts in your shadows, there is a mistake that is made by every single programmer when they write their first ray tracer. Floating point (or double-precision) math is imprecise and you will frequently (about half the time) re-intersect yourself when doing a shadow trace. The explanation is a bit hard to describe without diagrams, but let me see what I can do.
If you have an intersection point on the surface of a sphere, under most circumstances, that point's representation in a floating point register is not mathematically exact. It is either slightly inside or slightly outside the sphere. If it is inside the sphere and you try to run an intersection test to a light source, the nearest intersection will be the sphere itself. The intersection distance will be very small, so you can simply reject any shadow ray intersection that is closer than, say .000001 units. If your geometry is all convex and incapable of legitimately shadowing itself, then you can simply skip testing the sphere when doing shadow tests.
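A minimal Python sketch of that shadow test with the epsilon rejection; intersect(origin, direction) is an assumed stand-in for your own scene intersection routine (returning the distance to the nearest hit, or None):

```python
import math

SHADOW_EPSILON = 1e-6   # reject self-intersections closer than this

def in_shadow(hit_point, light_pos, intersect):
    to_light = [l - p for l, p in zip(light_pos, hit_point)]
    dist_to_light = math.sqrt(sum(c * c for c in to_light))
    direction = [c / dist_to_light for c in to_light]
    t = intersect(hit_point, direction)
    # ignore hits that are just the surface re-intersecting itself,
    # and hits that lie beyond the light source
    return t is not None and SHADOW_EPSILON < t < dist_to_light
```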
I want to know whether glRotate rotates the camera, the world axis, or the object. Explain how they are different with examples.
the camera
There is no camera in OpenGL.
the world axis
There is no world in OpenGL.
or the object.
There are no objects in OpenGL.
Confused?
OpenGL is a drawing system that operates with points, lines and triangles. There is no concept of a scene or a world in OpenGL. All there is are vertices, each of which has a set of attributes, and the state of OpenGL, which determines how vertices are turned into pixels.
The very first stage of this process is getting the vertex positions within the viewport. In the fixed-function pipeline (i.e. without shaders), each vertex position is first multiplied by the so-called "modelview" matrix, the intermediate result is used for lighting calculations, and it is then multiplied by the "projection" matrix. After that, clipping and then normalization into viewport coordinates are applied.
Those two matrices I mentioned serve two purposes. The first one, the "modelview" matrix, applies a transformation to the incoming vertices so that they end up in the desired spot relative to the origin. There is no difference between first moving geometry to some place in the world and then moving the viewpoint within the world, or keeping the viewpoint at the origin and moving the whole world in the opposite direction. All of this can be described by the modelview matrix.
The second one "projection" works together with the normalization process to behave like a kind of "lens", so to speak. With this you set the field of view (and a few other parameters, like shift, which you need for certain applications – don't worry about it).
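To make that pipeline concrete, here is a small numpy sketch of the fixed-function vertex path: modelview, then projection, then the perspective divide and viewport mapping. The matrices are placeholders of mine (an identity modelview and a gluPerspective-style projection), not something OpenGL hands you:

```python
import numpy as np

def perspective(fovy_deg, aspect, near, far):
    """Projection matrix equivalent to gluPerspective."""
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

modelview  = np.eye(4)                       # place geometry / "viewpoint" here
projection = perspective(60.0, 16 / 9, 0.1, 100.0)

vertex = np.array([0.0, 0.0, -5.0, 1.0])     # a vertex position (object coordinates)
clip   = projection @ (modelview @ vertex)   # modelview first, then projection
ndc    = clip[:3] / clip[3]                  # perspective divide (normalization)
width, height = 800, 450
print(((ndc[0] + 1) / 2 * width, (ndc[1] + 1) / 2 * height))   # viewport coordinates
```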
The interesting thing about matrices is that they're non-commutative, i.e. for two given matrices N, M
M * N =/= N * M ; for most M, N
This ultimately means that you can compose a series of transformations A, B, C, D, ... into one single compound transformation matrix T by multiplying the primitive transformations onto each other in the right order.
The OpenGL matrix manipulation functions (they're obsolete, BTW) do just that. You have a matrix selected for manipulation (the matrix mode), for example the modelview matrix M. Then glRotate effectively does this:
M *= R(angle,axis)
i.e. the active matrix gets multiplied by a rotation matrix constructed from the given parameters. The same goes for scale and translate.
Whether this appears to behave like moving a camera or like placing an object depends entirely on how, and in which order, those manipulations are combined.
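A quick numpy illustration of that point: the same two primitive transformations combined in different orders behave like "spinning the object in place" in one case and like orbiting around the origin (a camera-like effect) in the other:

```python
import numpy as np

def rotate_z(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def translate(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

v = np.array([1.0, 0.0, 0.0, 1.0])

# glTranslatef(5,0,0); glRotatef(90,0,0,1);  ->  M = T * R: the object spins
# about its own origin and is then placed at x = 5
print((translate(5, 0, 0) @ rotate_z(90)) @ v)   # ~ [5, 1, 0, 1]

# glRotatef(90,0,0,1); glTranslatef(5,0,0);  ->  M = R * T: the object is placed
# at x = 5 and then the whole thing rotates about the global origin
print((rotate_z(90) @ translate(5, 0, 0)) @ v)   # ~ [0, 6, 0, 1]
```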
But to OpenGL there are just numbers/vectors (vertex attributes), which somehow get transformed into 2-dimensional viewport coordinates, where they are drawn as points, or filled in between as lines or triangles.
glRotate works on the current matrix. So it depends on whether that matrix is the camera one or a world transformation one. To learn more about the current matrix, have a look at glMatrixMode().
Finding examples is just a matter of googling: I found this one, which in my opinion should help you figure out what's happening.
I'm writing a small application to draw diagrams and I need to find points to draw a Bezier curve between two elements.
Is there an efficient and simple way to calculate the bending points?
To better visualize my problem, please take a look at this picture:
As you can see, I have two rectangles which I want to connect with a Bezier curve. It is obvious that I have two anchor points, but how can I correctly calculate the bending points so that the line looks like the one in the picture?
On each end of the curve imagine a line perpendicular to the border through the anchor point.
The curve points should be on that line. The farther away from the border these points lie, the more vertical the center area of the curve is.
(I hope this is clear; it's at the limit of my English abilities.)
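For what it's worth, here is a minimal Python sketch of that idea, assuming the rectangles sit side by side, the anchors are the midpoints of their facing (vertical) borders, and the offset factor is an arbitrary choice:

```python
def connector_control_points(anchor_a, anchor_b, offset=0.5):
    """Cubic Bezier control points for a connector between two vertical borders.

    anchor_a lies on the right border of the left rectangle, anchor_b on the left
    border of the right rectangle, so the perpendiculars are horizontal lines.
    """
    ax, ay = anchor_a
    bx, by = anchor_b
    d = abs(bx - ax) * offset   # how far to push the bending points along the perpendiculars
    p1 = (ax + d, ay)           # on the perpendicular through anchor_a
    p2 = (bx - d, by)           # on the perpendicular through anchor_b
    return [anchor_a, p1, p2, anchor_b]

print(connector_control_points((120, 80), (300, 200)))
# -> [(120, 80), (210.0, 80), (210.0, 200), (300, 200)]
```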