I'm trying to make a node rotate along the X axis to look at another node.
I tried using SCNLookAtConstraint with gimbal lock turned off, but this still allows the node to rotate on both the X and Y axes. (Also, it makes the rear of the node face the target, not the front.)
How do I calculate how to rotate one node to face another from two vector 3 positions?
The docs talk about the orientation of a node, and what it means to 'look' at another node:
A node points in the direction of the negative z-axis of its local coordinate system. This axis defines the view direction for nodes containing cameras and the lighting direction for nodes containing spotlights or directional lights, as well as the orientation of the node’s geometry and child nodes. When Scene Kit evaluates a look-at constraint, it updates the constrained node’s transform property so that the node’s negative z-axis points toward the constraint’s target node.
You can modify that by specifying a different value for constraint.localFront, such as SCNVector3(0,0,1) to point with the positive z-axis instead of the negative.
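For example, a minimal sketch (the node names here are placeholders, not from the question):

let constraint = SCNLookAtConstraint(target: targetNode)
constraint.localFront = SCNVector3(0, 0, 1)   // track the target with the node's +Z axis instead of the default -Z
node.constraints = [constraint]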
If you need more control over which axis you want to engage, then you have a couple of options.
Create an invisible target node that remains positioned along the plane perpendicular to your rotation axis
Instead of using constraints, use SCNNode.look(at:) to update the node within your game loop, providing a translated target coordinate along the perpendicular plane.
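A rough sketch of that second option, assuming an SCNSceneRendererDelegate and placeholder node/targetNode names; it restricts the rotation to the X axis by projecting the target onto the plane through the node that is perpendicular to that axis:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    let target = targetNode.presentation.worldPosition
    // Keep the target's x fixed to the node's own x, so the look direction stays
    // in the node's Y-Z plane and the node only rotates about the X axis.
    let projected = SCNVector3(node.worldPosition.x, target.y, target.z)
    node.look(at: projected)
}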
If the target is the camera, then check out SCNBillboardConstraint, which allows you to specify which axes are free to rotate:
let billBoard = SCNBillboardConstraint()
billBoard.freeAxes = [.X]
node.constraints = [billBoard]
I'm trying to get an array of weights that represents the influence a polygon's vertices have on an arbitrary position inside it, so that I can interpolate the vertices of a deformed version of the polygon and get the corresponding deformed position.
Mean Value and Harmonic warping:
It seems that Harmonic coordinates would do this? My mesh goal:
I don't have an easy time reading math papers. I found this Meshlab article, but I'm still not grasping how to process each sampled position relative to the polygon's vertices.
Thanks!
You could try to create a Delaunay triangulation of the polygon and then use Barycentric coordinates within each triangle. This mapping is well defined and continuous, but in most cases probably not smooth (i.e. the derivative is not continuous).
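The per-triangle step is straightforward; here is a minimal sketch in Swift using simd (the triangulation itself and the point-in-triangle lookup are assumed to come from elsewhere):

import simd

// Barycentric weights of point p with respect to triangle (a, b, c).
// The three weights sum to 1 and reproduce p as w.x*a + w.y*b + w.z*c.
func barycentricWeights(of p: SIMD2<Float>, in a: SIMD2<Float>, _ b: SIMD2<Float>, _ c: SIMD2<Float>) -> SIMD3<Float> {
    let v0 = b - a, v1 = c - a, v2 = p - a
    let d00 = simd_dot(v0, v0), d01 = simd_dot(v0, v1), d11 = simd_dot(v1, v1)
    let d20 = simd_dot(v2, v0), d21 = simd_dot(v2, v1)
    let denom = d00 * d11 - d01 * d01
    let v = (d11 * d20 - d01 * d21) / denom
    let w = (d00 * d21 - d01 * d20) / denom
    return SIMD3<Float>(1 - v - w, v, w)
}

// Applying the same weights to the corresponding vertices of the deformed
// triangle gives the deformed position: w.x*a2 + w.y*b2 + w.z*c2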
In SceneKit, you can add an SCNLookAtConstraint to your scene view's pointOfView to make the camera look at a certain node.
Is there a standard way of doing the same but for a specific face of a geometry?
So that, if I touch a specific face of a cube, the camera would move so that the Z axis of the camera node gets in line with the normal of the touched face? So that the cube would look like a plane from the new perspective.
No.
That would require movement of the camera, in addition to re-aiming it.
Imagine I'm in front of my house. I have a great view of the front and can just barely see the side to my left. In my Scene I tap the side of the house. A LookAt constraint would merely change the angle of the camera. It would not be aligned with the normal of that barely visible side.
To align with the normal, I'd have to walk around the house until I can stare at the house and be perpendicular to the side I tapped. At what radius? What path? You have to figure that out yourself.
Depending on what effect you're trying for, you might want to rotate the model instead of moving the camera. Rotate the tapped node locally (or as a child of an invisible parent) so that its minus-Z axis points out the tapped face, and keep a lookAtConstraint on the node, not the camera. This approach will change the look of the object, though: you will see it rotating, and the shading changing appropriately.
So that, if I touch a specific face of a cube, camera would move so that the Z axis of the camera node gets in line with the normal of the touched face?
Supposing you are using hit-testing to determine what object got touched, a SCNHitTestResult will give you both localCoordinates and localNormal from which it should be fairly easy to derive a camera transform.
One easy way would be to have the camera as a child node of the box, compute a position that would look like localCoordinates + distance * localNormal and finally a transform using GLKMatrix4MakeLookAt and SCNMatrix4FromGLKMatrix4.
Note that you can also use worldCoordinates, worldNormal, as well as conversion utilities such as SCNNode.convertTransform(_:from:).
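A rough sketch of that idea, using SCNNode.look(at:) in place of the GLKit helpers; the camera node, the viewing distance, and the assumption that SCNVector3 components are Float (i.e. iOS) are all mine:

func focusCamera(on hit: SCNHitTestResult, camera cameraNode: SCNNode, distance: Float = 5) {
    let p = hit.localCoordinates
    let n = hit.localNormal
    // Step back along the tapped face's normal, in the tapped node's local space...
    let localEye = SCNVector3(p.x + n.x * distance, p.y + n.y * distance, p.z + n.z * distance)
    // ...then convert to world space and aim the camera back at the tapped point.
    cameraNode.position = hit.node.convertPosition(localEye, to: nil)
    cameraNode.look(at: hit.node.convertPosition(p, to: nil))
}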
Building on mnuages' answer: use a hit test or ray trace to find where the user tapped on the mesh, then add a node at that location and constrain the camera to look at that node.
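A short sketch of that, where sceneView, tapLocation, and cameraNode are placeholders for your own setup:

if let hit = sceneView.hitTest(tapLocation, options: nil).first {
    let focusNode = SCNNode()
    focusNode.position = hit.worldCoordinates            // invisible marker at the tapped point
    sceneView.scene?.rootNode.addChildNode(focusNode)
    cameraNode.constraints = [SCNLookAtConstraint(target: focusNode)]
}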
I'm making a game in OpenGL, using freeglut.
I have a car which I can move back and forward using keys, and the camera follows it. Now, when I turn the car (glRotate in the xz plane), I want to change the camera position (using gluLookAt) so it always points at the back of the car.
Any suggestions on how to do that?
For camera follow I use the object transform matrix
get object transform matrix
camera=object
use glGetMatrix or whatever for that
shift/rotate the matrix so the Z axis points where you want to look
I use objects aligned with forward along the Z axis, but not all mesh models are like this, so rotate by (+/-)90 deg around x, y or z to match this:
Z-axis is forward (or backward, depending on your projection matrix and depth function)
X-axis is Right
Y-axis is Up
with respect to your camera/screen coordinate system (projection matrix). Then translate the matrix to a position behind the object
apply POV rotation (optional)
if the camera view can be rotated slightly away from the forward direction (mouse look), apply that rotation at this step
camera*=rotation_POV
convert matrix to camera
a camera (view) matrix is usually the inverse of the coordinate-system matrix it represents, so:
camera=Inverse(camera)
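A compact sketch of those steps in Swift with simd (the +Z-forward convention and the particular offset values are assumptions):

import simd

// Hypothetical helper: a translation matrix (simd matrices are column-major).
func translation(_ t: SIMD3<Float>) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.3 = SIMD4<Float>(t.x, t.y, t.z, 1)
    return m
}

// objectMatrix: the car's world transform, already rotated so +Z is forward and +Y is up.
func followCameraMatrix(objectMatrix: simd_float4x4) -> simd_float4x4 {
    let behind = translation(SIMD3<Float>(0, 1.5, -6)) // a bit behind and above the car, in its local space
    let cameraFrame = objectMatrix * behind            // camera = object, then shifted to the "behind" position
    return cameraFrame.inverse                         // camera matrix = Inverse(camera coordinate system)
}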
For more info, look here: understanding transform matrices. The OpenGL inverse-matrix computation in C++ is included there.
I've been struggling for quite a while with this seemingly simple problem. I am given a set of points (which I have further simplified down to a convex hull) and my task is to find a rectangle (not necessarily axis-aligned) that encompasses all of them, has no extra space around it (so that it is tight-fitting around the points) and has the maximum possible perimeter. It was no trouble for me to find the minimal one, but this has proven to be a tougher nut to crack. When searching for the minimal bounding rectangle, I was able to use the assumption that one of the rectangle's sides was always aligned with one of the hull's sides, but I don't see any such property here. Am I missing something painfully obvious? The only approach I've come up with so far is to test whether antipodal pairs of points can project onto the sides of the rectangle and use some trig to maximize the resulting function, but I just lost myself in the calculations.
Thanks in advance!
First, compute the convex hull of your point set.
Now, think about spinning the polygon around and computing the smallest enclosing axis-aligned rectangle. Notice that the top point, the left point, the right point, and the bottom point will proceed clockwise around the convex hull from one vertex to the next.
You can't try every possible angle explicitly. You can, however, use a sweep trick: given an angle, you can compute the top, left, bottom, and right points after spinning the polygon by that angle, as well as which of those four points will be the first to change identity as you continue rotating the polygon. So you get a range of angles for which your current choices of top, left, bottom, and right are correct; further, you know what the next correct choice of top, left, bottom, and right is.
For each legitimate choice of top, left, bottom, and right, you wind up having to compute the maximum value of a*sin(theta) + b*cos(theta) for fixed a and b over some range of theta. Recall from trig that a*sin(theta) + b*cos(theta) = sqrt(a^2+b^2) * cos(theta - arctan(a/b)). You evaluate the function at the boundaries of your interval and where the derivative is zero (at arctan(a/b) plus any integer multiple of pi) and you're golden.
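Written out (assuming b > 0; shift the phase by \pi otherwise), the identity and its critical points are:

a\sin\theta + b\cos\theta
  = \sqrt{a^2+b^2}\left(\tfrac{a}{\sqrt{a^2+b^2}}\sin\theta + \tfrac{b}{\sqrt{a^2+b^2}}\cos\theta\right)
  = \sqrt{a^2+b^2}\,\cos\!\left(\theta - \arctan\tfrac{a}{b}\right)

so the derivative vanishes where \tan\theta = a/b, i.e. at \theta = \arctan(a/b) + k\pi, and the maximum value is \sqrt{a^2+b^2}.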
I want to know whether glRotate rotates the camera, the world axis, or the object. Explain how they are different with examples.
the camera
There is no camera in OpenGL.
the world axis
There is no world in OpenGL.
or the object.
There are no objects in OpenGL.
Confused?
OpenGL is a drawing system that operates with points, lines and triangles. There is no concept of a scene or a world in OpenGL. All there is are vertices, each of which has a set of attributes, and the OpenGL state, which determines how vertices are turned into pixels.
The very first stage of this process is getting the vertex positions within the viewport. In the fixed-function pipeline (i.e. without shaders), each vertex position is first multiplied with the so-called "modelview" matrix, the intermediate result is used for lighting calculations, and it is then multiplied with the "projection" matrix. After that, clipping and then normalization into viewport coordinates are applied.
Those two matrices I mentioned serve two purposes. The first one, the "modelview" matrix, is used to apply some transformation to the incoming vertices so that they end up in the desired spot relative to the origin. There is no difference between first moving geometry to some place in the world and then moving the viewpoint within the world, or keeping the viewpoint at the origin and moving the whole world in the opposite direction. All of this can be described by the modelview matrix.
The second one "projection" works together with the normalization process to behave like a kind of "lens", so to speak. With this you set the field of view (and a few other parameters, like shift, which you need for certain applications – don't worry about it).
The interesting thing about matrices is that they're non-commutative, i.e. for two given matrices M, N
M * N =/= N * M ; for most M, N
This ultimately means that you can compose a series of transformations A, B, C, D... into one single compound transformation matrix T by multiplying the primitive transformations onto each other in the right order.
The OpenGL matrix manipulation functions (they're deprecated, BTW) do just that. You have a matrix selected for manipulation (via the matrix mode), for example the modelview matrix M. Then glRotate effectively does this:
M *= R(angle,axis)
i.e. the active matrix gets multiplied by a rotation matrix constructed from the given parameters. The same goes for scale and translate.
Whether this appears to behave like moving a camera or like placing an object depends entirely on how, and in which order, those manipulations are combined.
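For instance, with the modelview matrix active, calling glTranslatef(0, 0, -5) followed by glRotatef(angle, 0, 1, 0) yields M = T * R, so each vertex is rotated about the model's own origin first and then pushed 5 units away from the viewpoint: the object spins in place, as if you were turning it in your hand. Issuing the same two calls in the opposite order yields M = R * T, which swings the already-translated geometry around the origin of eye space, and that looks exactly like a rotating camera.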
But for OpenGL there are just numbers/vectors (vertex attributes), which somehow translate into 2-dimensional viewport coordinates that get drawn as points, or filled in between as lines or triangles.
glRotate works on the current matrix, so it depends on whether that matrix is the camera one or a world-transformation one. To learn more about the current matrix, have a look at glMatrixMode().
Finding examples is just a matter of googling: I found this one, which should help you figure out what's happening.