In SceneKit, how do you make the camera look at a specific face of a node's geometry?

In SceneKit, you can add an SCNLookAtConstraint to your scene view's pointOfView to make the camera look at a certain node.
Is there a standard way of doing the same but for a specific face of a geometry?
So that, if I touch a specific face of a cube, the camera would move so that the Z axis of the camera node lines up with the normal of the touched face, and the cube would look like a plane from the new perspective?

No.
That would require movement of the camera, in addition to re-aiming it.
Imagine I'm in front of my house. I have a great view of the front and can just barely see the side to my left. In my Scene I tap the side of the house. A LookAt constraint would merely change the angle of the camera. It would not be aligned with the normal of that barely visible side.
To align with the normal, I'd have to walk around the house until I can stare at the house and be perpendicular to the side I tapped. At what radius? What path? You have to figure that out yourself.
Depending on what effect you're trying for, you might want to rotate the model instead of moving the camera. Rotate the tapped node locally (or as a child of an invisible parent) so that its minus-Z axis points out the tapped face, and keep a lookAtConstraint on the node, not the camera. This approach will change the look of the object, though: you will see it rotating, and the shading changing appropriately.
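A minimal sketch of that idea (not the answer author's code; `hit` comes from your own hit test and `cameraNode` is assumed to be the view's pointOfView):

```swift
import SceneKit
import simd

// Sketch only: rotate the tapped node so the tapped face swings around to face the camera.
func turnTappedFace(of hit: SCNHitTestResult, toward cameraNode: SCNNode) {
    let node = hit.node
    // Face normal and direction to the camera, both in world space.
    let normal = simd_normalize(SCNVector3ToFloat3(hit.worldNormal))
    let toCamera = simd_normalize(cameraNode.simdWorldPosition - node.simdWorldPosition)
    // Shortest rotation that swings the face normal onto the camera direction.
    let swing = simd_quatf(from: normal, to: toCamera)
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.5
    node.simdWorldOrientation = swing * node.simdWorldOrientation
    SCNTransaction.commit()
}
```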

So that, if I touch a specific face of a cube, camera would move so that the Z axis of the camera node gets in line with the normal of the touched face?
Supposing you are using hit-testing to determine which object got touched, an SCNHitTestResult will give you both localCoordinates and localNormal, from which it should be fairly easy to derive a camera transform.
One easy way would be to make the camera a child node of the box, compute a position such as localCoordinates + distance * localNormal, and finally build a transform using GLKMatrix4MakeLookAt and SCNMatrix4FromGLKMatrix4.
Note that you can also use worldCoordinates, worldNormal, as well as conversion utilities such as SCNNode.convertTransform(_:from:).
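A rough sketch of that recipe (untested; `boxNode`, `cameraNode`, `hit`, and the 5-unit distance are placeholder choices):

```swift
import SceneKit
import GLKit

func alignCamera(_ cameraNode: SCNNode, with hit: SCNHitTestResult, on boxNode: SCNNode) {
    let p = SCNVector3ToGLKVector3(hit.localCoordinates)
    let n = SCNVector3ToGLKVector3(hit.localNormal)
    let distance: Float = 5
    // Camera position: back off along the face normal, in the box's local space.
    let eye = GLKVector3Add(p, GLKVector3MultiplyScalar(n, distance))
    // Build a look-at matrix aimed back at the touch point.
    // (Pick a different up vector if the tapped face points straight up or down.)
    let lookAt = GLKMatrix4MakeLookAt(eye.x, eye.y, eye.z,   // eye
                                      p.x, p.y, p.z,         // center
                                      0, 1, 0)               // up
    // GLKMatrix4MakeLookAt returns a view matrix; the camera node's transform is its inverse.
    let cameraTransform = SCNMatrix4FromGLKMatrix4(GLKMatrix4Invert(lookAt, nil))
    boxNode.addChildNode(cameraNode)      // camera as a child of the box, as suggested above
    cameraNode.transform = cameraTransform
}
```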

Building on mnuages' answer: use a hit test or ray trace to find where the user tapped on the mesh, then add a node at that location and constrain the camera to look at that node.
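For example, something along these lines (`sceneView` and `hit` are assumed to come from your own tap handling):

```swift
import SceneKit

// Put an invisible target node at the tapped point and aim the camera at it.
let targetNode = SCNNode()
targetNode.position = hit.worldCoordinates
sceneView.scene?.rootNode.addChildNode(targetNode)

let constraint = SCNLookAtConstraint(target: targetNode)
constraint.isGimbalLockEnabled = true   // keep the camera upright while it turns
sceneView.pointOfView?.constraints = [constraint]
```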

Related

Is it possible to drag-snap a point of a shape being edited by the drawing manager to another shape's point location?

I'd like a user to be able to draw a polygon using the Azure Maps Drawing Manager and have the ability to move a point of the polygon to near one of another polygon's points and have the dragged point snap to the same location such that the resulting 2 points would be the same.
I know there is snapping capability with a grid, but I don't see a sample for this behaviour.
The ultimate goal is to prevent polygons from overlapping, assuming the shared edge of adjoining shapes is excluded when determining which polygon a point resides within.
I can allow a user to manually draw and get as close as possible of course, and provide some assertion to confirm no polygons overlap but would additionally like a nice snap-to-point experience if possible.
You can find hundreds of samples for Azure Maps here: https://samples.azuremaps.com/
As you noted, the snapping grid is likely the best place to start in your scenario. Here are some specific samples of this:
https://samples.azuremaps.com/?sample=use-a-snapping-grid
https://samples.azuremaps.com/?sample=snap-grid-options
The following sample is an example of a custom snapping scenario where the routing service is used to snap a drawn line to a route (the route part can be swapped out for custom logic): https://samples.azuremaps.com/?sample=snap-drawn-line-to-roads

How to find out if a surface detected by ARKit is no longer available?

I am working on an application with the ARKit and SceneKit frameworks. In my application I have enabled surface detection (I followed the placing-objects sample provided by Apple). How can I find out if a detected surface is no longer available? That is, I only allow the user to place a 3D object once a surface has been detected in the ARSession.
But if the user moves rapidly or points the camera elsewhere, the detected surface gets lost. In that case, if the user tries to place another object, I shouldn't allow it until the floor is scanned again and the surface is re-detected.
Is there any delegate available to let us know that a detected surface is no longer available?
There are delegate functions that you can use. The delegate is ARSCNViewDelegate.
It has a method, renderer(_:didRemove:for:), that fires when an ARAnchor has been removed. You can use it to perform some operation when a surface gets removed.
See Apple's ARSCNViewDelegate documentation for details.
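A minimal sketch of wiring that up (the `placementAllowed` flag is just an illustrative name):

```swift
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {
    var placementAllowed = false

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // A plane was detected, so allow object placement.
        if anchor is ARPlaneAnchor { placementAllowed = true }
    }

    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        // Fires when ARKit removes an anchor from the session.
        if anchor is ARPlaneAnchor { placementAllowed = false }
    }
}
```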
There are two ways to “lose” a surface, so there’s more than one approach to dealing with such a problem.
As noted in the other answer, there’s an ARSCNViewDelegate method that ARKit calls when an anchor is removed from the AR session. However, ARKit doesn’t remove plane anchors during a running session — once it’s detected a plane, it assumes the plane is always there. So that method gets called only if:
You remove the plane anchor directly by passing it to session.remove(anchor:), or
You reset the session by running it again with the .removeExistingAnchors option.
I’m not sure the former is a good idea, but the latter is important to handle, so you probably want your delegate to handle it well.
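For reference, triggering that removal looks roughly like this (the configuration details are just an example, and `sceneView` is assumed to be your ARSCNView):

```swift
// Re-running the session with .removeExistingAnchors discards existing plane
// anchors, which triggers renderer(_:didRemove:for:) for each of them.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
sceneView.session.run(configuration, options: [.removeExistingAnchors])
```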
You can also “lose” a surface by having it pass out of view — for example, ARKit detects a table, and then the user turns around so the camera isn’t pointed at or near the table anymore.
ARKit itself doesn’t offer you any help for dealing with this problem. It gives you all the info you need to do the math yourself, though. You get the plane anchor’s position, orientation, and size, so you can calculate its four corner points. And you get the camera’s projection matrix, so you can check for whether any point is in the viewing frustum.
Since you’re already using SceneKit, though, there are also ways to get SceneKit to do the math for you... Working backwards:
SceneKit gives you an isNode(_:insideFrustumOf:) test, so if you have a SCNNode whose bounding box matches the extent of your plane anchor, you can pass that along with the camera (view.pointOfView) to find out if the node is visible.
To get a node whose bounding box matches a plane anchor, implement the ARSCNViewDelegate didAdd and didUpdate callbacks to create/update an SCNPlane whose position and dimensions match the ARPlaneAnchor’s center and extent. (Don’t forget to flip the plane sideways, since SCNPlane is vertically oriented by default.)
If you don’t want that plane visible in the AR view, set its materials to be transparent.
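Putting those pieces together, a sketch might look like the following (the bookkeeping and names are illustrative, not a drop-in implementation):

```swift
import ARKit
import SceneKit

class PlaneTracker: NSObject, ARSCNViewDelegate {
    weak var sceneView: ARSCNView?
    private var planeNodes: [UUID: SCNNode] = [:]

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // SCNPlane matching the anchor's extent; keep it invisible but testable.
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.transparency = 0
        let planeNode = SCNNode(geometry: plane)
        planeNode.simdPosition = planeAnchor.center
        planeNode.eulerAngles.x = -.pi / 2   // SCNPlane is vertical by default, so lay it flat
        node.addChildNode(planeNode)
        planeNodes[planeAnchor.identifier] = planeNode
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let planeNode = planeNodes[planeAnchor.identifier],
              let plane = planeNode.geometry as? SCNPlane else { return }
        // Keep the plane geometry in sync as ARKit refines the anchor.
        plane.width = CGFloat(planeAnchor.extent.x)
        plane.height = CGFloat(planeAnchor.extent.z)
        planeNode.simdPosition = planeAnchor.center
    }

    // Call this whenever you need to know whether a detected plane is still in view.
    func planeIsVisible(_ anchor: ARPlaneAnchor) -> Bool {
        guard let view = sceneView,
              let pointOfView = view.pointOfView,
              let planeNode = planeNodes[anchor.identifier] else { return false }
        return view.isNode(planeNode, insideFrustumOf: pointOfView)
    }
}
```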

SceneKit, fixing a light's position

I would like to allow the user to rotate the scene by touch but have the lighting remain fixed. This works quite well using the default camera and default lighting. However, the default light is "straight on", i.e. along the screen's -z axis. I would rather it be directed at an angle more like a stage light, say from the front upper right.
But when I create my own light it appears that it needs to be attached to an existing node, the rootNode for example. When this is done, the light then rotates around with the model as the user manipulates the scene.
Is there a simple way to keep the lighting fixed while rotating with the default camera or do I need to get seriously involved creating a custom camera?
The lighting is already "fixed": that is, each light source keeps its position and direction within the scene unless you do something to change it. But it sounds like instead, you want to have a light that is fixed relative to a camera.
To achieve this, don't attach the light to the scene's root node. Instead, attach it to the same node that the camera is attached to. Or if you want to adjust the light's position relative to the camera, you could construct a small node tree, with one leaf containing the camera and the other leaf containing a directional light.
You'll almost always want to create your own camera or cameras in SceneKit. The default user-manipulable camera is useful for quickly getting up and running, and debugging, but not something that you want to expose to end users.
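A sketch of that node tree (positions and angles are arbitrary examples; `scene` and `sceneView` are assumed to exist):

```swift
import SceneKit

// The light rides with the camera, so it stays fixed relative to the view
// while the model or camera rotates.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 0, 10)

let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light?.type = .directional
// Tilt the light so it feels like a stage light from the front upper right.
lightNode.eulerAngles = SCNVector3(-Float.pi / 6, Float.pi / 6, 0)

cameraNode.addChildNode(lightNode)   // light is a child of the camera node
scene.rootNode.addChildNode(cameraNode)
sceneView.pointOfView = cameraNode   // use this camera instead of the default one
```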

Make an SCNParticle's orientation match surface/vertex release

Is it possible to make particles released from the surface of a geometry object (or from its vertices) emerge at an angle that reflects their direction of travel?
E.g. if the emitter object is a cube and particles are moving out from each of the cube's 6 faces, the particles are oriented to match the face they're coming off of.
I've only been able to get them to move out correctly from the faces/vertices, but all the particles are aligned to the camera, the screen, or "free"; in all cases they essentially face only one direction, not the six they could/should if each took on the angle of its origin and direction of travel from the faces/vertices of the cube.
What I want is something like this behaviour from the particles emitted from the object (a cube in this example, but the principle is the same for any kind of object).
EDIT: the above is just an example. Imagine it on a much grander scale; it gives only a rough idea of the goal.
You can use 6 emitters with their orientation mode set to SCNParticleOrientationModeFree and local set to YES, then control the orientation with the node that owns each emitter.
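In Swift that setup might look roughly like this (`faceNode` stands for one of the six per-face emitter nodes you would create and orient yourself; the numbers are arbitrary):

```swift
import SceneKit

let particles = SCNParticleSystem()
particles.birthRate = 50
particles.particleLifeSpan = 2
particles.emitterShape = SCNPlane(width: 1, height: 1)  // emit from the face's surface
particles.orientationMode = .free    // SCNParticleOrientationModeFree
particles.isLocal = true             // local = YES: particles follow the emitter node
particles.particleVelocity = 1       // travel out along the emitter's emitting direction

faceNode.addParticleSystem(particles)
// Orient faceNode to match the cube face; the particles inherit that orientation.
```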

OpenGL: How to drag image and move it to the line by using the mouse

I want to drag an image to a line using the mouse, and when the image is close to the line, have the image automatically move onto the line, like in some "floor planner" programs: you create a wall and drag a door toward it, and when the door is close to the wall, the door automatically snaps onto the wall.
Can OpenGL do this?
If it can, can anyone tell me how? If it can't, can anyone tell me how else I can do it?
An example would help.
OpenGL is a rendering API; its purpose is to generate rasterized images based on descriptions provided to it by an application.
It knows nothing about user input, and even less about the application's "domain objects" such as doors, walls, and so on. All it deals with is abstract coordinates and matrices that describe the transforms and projections to take those 3D coordinates into 2D for rasterization, as well as shading for surfaces and so on.
So, it's up to you to implement that, so that the coordinates you eventually pass to OpenGL end up being what you want them to be.
Snapping is typically a combination of measuring the distance to some guiding object and then quantizing the input coordinates to correspond to the guide.
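The idea is API-agnostic; a sketch of that distance-then-quantize step (here in Swift, with made-up names and an arbitrary threshold) might look like:

```swift
import simd

// Snap a dragged point onto a wall segment once it comes within `threshold` of it.
func snap(_ point: SIMD2<Float>, toSegmentFrom a: SIMD2<Float>, to b: SIMD2<Float>,
          threshold: Float = 0.25) -> SIMD2<Float> {
    let ab = b - a
    // Parameter of the closest point on the segment, clamped to [0, 1].
    let t = max(0, min(1, simd_dot(point - a, ab) / simd_dot(ab, ab)))
    let closest = a + t * ab
    // Measure the distance to the guide; if close enough, quantize onto it.
    return simd_distance(point, closest) <= threshold ? closest : point
}
```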
