How do I rotate an SCNNode to look at another node, whilst keeping its roll and pitch level with the camera?

I am using ARKit to direct people to a position in the physical world using an arrow attached to the camera node. Applying a lookAtConstraint to the arrow with a target node at the location I want almost does what I need. However, I need to stop the arrow from rolling left and right and pitching up and down. [Image: example of the UI]

Have you tried setting isGimbalLockEnabled to true to constrain the roll rotation? In the documentation Apple mentions: "For example, when constraining a camera to follow a moving object, setting this property to true ensures that the horizon remains level from the camera’s point of view."
If that still isn't what you're looking for, you may need to write a custom constraint using the class function SCNTransformConstraint.orientationConstraint. You could write the constraint as a secondary constraint that basically restricts rotations on the X and Z axes, or you could write your own look-at constraint with more restrictions (in which case I'd recommend looking at the simd.look(at:) function and then restricting the orientation axes from there).
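For example, a minimal sketch of both ideas might look like this (`arrowNode` and `targetNode` are placeholders for your arrow and destination nodes, and the heading math is just one way to strip pitch and roll):

```swift
import SceneKit
import simd

// Option 1: a look-at constraint with gimbal lock enabled, so the node doesn't roll.
let lookAt = SCNLookAtConstraint(target: targetNode)
lookAt.isGimbalLockEnabled = true

// Option 2: a secondary orientation constraint that keeps only the heading
// (rotation about the world Y axis), discarding any pitch and roll from the look-at.
let keepLevel = SCNTransformConstraint.orientationConstraint(inWorldSpace: true) { _, orientation in
    let q = simd_quatf(ix: orientation.x, iy: orientation.y, iz: orientation.z, r: orientation.w)
    let forward = q.act(SIMD3<Float>(0, 0, -1))   // where the node would be pointing
    let yaw = atan2(-forward.x, -forward.z)       // keep the heading only
    return SCNQuaternion(0, sin(yaw / 2), 0, cos(yaw / 2))
}

arrowNode.constraints = [lookAt, keepLevel]
```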
This should get you most of the way there, or I can add some code in later if it doesn't seem to be working. Good luck!

Related

Azure Maps Line Layer and Pop Ups

I am creating a web page that has an Azure Maps control on it. The purpose of it being a sort of snail trail of movement. I have the map rendering and am using a LineLayer with a SymbolLayer in order to draw a line from point to point and then put an arrow on the line to show movement.
Another requirement is that we are able to hover over the points on the map to see information about that specific point, but I don't seem to be finding much online about "Points" on a line.
Any idea how to access individual points in the Linestring and add attributes to them in order to show a pop up?
To accomplish what you are asking, you would need a second data source that contains the individual points with the attribute information, and then connect that data source to a bubble or symbol layer. By doing this, you can add an event to that layer.
Another, less elegant approach is to have a property in your LineString that holds an array of the attribute information for each point. Then, when the user hovers over the line, loop through all points in the line, calculate the distance from each point to the mouse pointer location, and use the index of the closest point to do a lookup in your array.

Accessing or omitting non-existing data

I'm performing some geographical computations in a grid with squares (i.e. regions). I'm using Delphi, but the logic could probably be applied to C++ too. Let me first explain what I want to do.
The following image is a portion of my grid, which is represented by a two-dimensional array Square that denotes the centre point in each square, and the "movement through the layers":
The green square has an X and Y coordinate of 2, so that is Square[2,2]. The actual coordinates are stored in Square[2,2].Latitude and Square[2,2].Longitude, as well as extra information in e.g. Square[2,2].Info that I use for computations.
Now comes the purpose: I need to do some computations on the surrounding areas. How many of the surrounding areas can be called "neighbours" depends on how many "layers" I have defined. In the image above, I used two of these "layers". That means that when starting from the green cell, I go around it once (blue arrows) and then again in the second layer (red arrows).
Now comes the problem: if I had started in Square[1,1] (green square) instead of Square[2,2] as in the image below, the second layer (in red) would try to access data on the left side and at the bottom that does not exist (i.e. in the "-1" column and row). See the image below. This problem occurs at all borders, of course.
I probably could make exceptions with IF statements for every scenario, but I was wondering if there are common programming "tricks" that can handle situations where you try to access data that does not exist.
For example, I imagine it would be very handy if I could follow the pattern of the arrows depicted in the first image to access all the neighbouring squares every single time, even if there are non-existing squares. So, looking at the first image, after Square[3,0] you'd go to something like Square[3,-1] etc. and then eventually come back into the "feasible" zone in Square[0,3].
To visit the neighbourhood, you can use some kind of BFS (breadth-first search).
But for a sparse structure (like the last picture shows), it is worth using a data structure that organises the cells well. Perhaps a kd-tree is suitable: you add all existing cells to the tree and run a range search around a given cell to get the other cells in its vicinity.
Also look at other spatial data structures (see the list at the bottom of the kd-tree page).
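The question uses Delphi, but the idea is language-agnostic. Here is a rough sketch of that layered BFS with a single bounds check, shown in Swift purely for illustration (`width` and `height` are the grid dimensions):

```swift
// Collect every cell within `layers` rings of (startX, startY); indices that fall
// outside the grid are simply skipped, so no per-border special cases are needed.
func neighbours(ofX startX: Int, y startY: Int, layers: Int,
                width: Int, height: Int) -> [(Int, Int)] {
    var visited: Set<Int> = [startY * width + startX]
    var frontier = [(startX, startY)]
    var result: [(Int, Int)] = []

    for _ in 0..<layers {
        var next: [(Int, Int)] = []
        for (x, y) in frontier {
            for dx in -1...1 {
                for dy in -1...1 where !(dx == 0 && dy == 0) {
                    let nx = x + dx
                    let ny = y + dy
                    // One bounds check instead of an IF statement per border.
                    guard (0..<width).contains(nx), (0..<height).contains(ny) else { continue }
                    let key = ny * width + nx
                    if visited.insert(key).inserted {
                        next.append((nx, ny))
                        result.append((nx, ny))
                    }
                }
            }
        }
        frontier = next
    }
    return result
}
```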

How do I find out if a surface detected by ARKit is no longer available?

I am working on an application with the ARKit and SceneKit frameworks. In my application I have enabled surface detection (I followed the placing-objects sample provided by Apple). How can I find out if a detected surface is no longer available? That is, I initially allow the user to place the 3D object only after a surface has been detected in the ARSession.
But if the user moves rapidly or points the camera elsewhere, the detected surface gets lost. In that case, if the user tries to place another object, I shouldn't allow it until they scan the floor again and the surface is detected once more.
Is there a delegate method available to let us know that a detected surface is no longer available?
There are delegate functions that you can use. The delegate is ARSCNViewDelegate.
It has a method, renderer(_:didRemove:for:), that fires when an ARAnchor has been removed. You can use this method to perform some operation when a surface is removed.
See Apple's documentation for ARSCNViewDelegate.
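A minimal sketch of that callback, assuming your view controller is already the scene view's delegate (`canPlaceObjects` is a hypothetical flag of your own):

```swift
func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }
    // A detected plane's anchor was removed from the session. The delegate runs on a
    // background queue, so hop to the main queue before touching UI-related state.
    DispatchQueue.main.async {
        self.canPlaceObjects = false   // hypothetical flag used to gate object placement
    }
}
```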
There are two ways to “lose” a surface, so there’s more than one approach to dealing with such a problem.
As noted in the other answer, there’s an ARSCNViewDelegate method that ARKit calls when an anchor is removed from the AR session. However, ARKit doesn’t remove plane anchors during a running session — once it’s detected a plane, it assumes the plane is always there. So that method gets called only if:
You remove the plane anchor directly by passing it to session.remove(anchor:), or
You reset the session by running it again with the .removeExistingAnchors option.
I’m not sure the former is a good idea, but the latter is important to handle, so you probably want your delegate to handle it well.
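For example, resetting the session that way might look like this (`configuration` being the ARWorldTrackingConfiguration you're already running):

```swift
// Discards all existing anchors; renderer(_:didRemove:for:) then fires once for each of them.
sceneView.session.run(configuration, options: [.removeExistingAnchors])
```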
You can also “lose” a surface by having it pass out of view — for example, ARKit detects a table, and then the user turns around so the camera isn’t pointed at or near the table anymore.
ARKit itself doesn’t offer you any help for dealing with this problem. It gives you all the info you need to do the math yourself, though. You get the plane anchor’s position, orientation, and size, so you can calculate its four corner points. And you get the camera’s projection matrix, so you can check for whether any point is in the viewing frustum.
Since you’re already using SceneKit, though, there are also ways to get SceneKit to do the math for you... Working backwards:
SceneKit gives you an isNode(_:insideFrustumOf:) test, so if you have a SCNNode whose bounding box matches the extent of your plane anchor, you can pass that along with the camera (view.pointOfView) to find out if the node is visible.
To get a node whose bounding box matches a plane anchor, implement the ARSCNViewDelegate didAdd and didUpdate callbacks to create/update an SCNPlane whose position and dimensions match the ARPlaneAnchor’s center and extent. (Don’t forget to flip the plane sideways, since SCNPlane is vertically oriented by default.)
If you don’t want that plane visible in the AR view, set its materials to be transparent.
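Sketched out, those pieces might look something like this (a rough sketch only: `sceneView` is your ARSCNView, and `planeNodes` is a dictionary you keep so you can look plane nodes up by anchor later):

```swift
var planeNodes: [UUID: SCNNode] = [:]   // keyed by anchor identifier

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.transparency = 0      // keep it invisible in the AR view
    let planeNode = SCNNode(geometry: plane)
    planeNode.simdPosition = planeAnchor.center
    planeNode.eulerAngles.x = -.pi / 2         // SCNPlane is vertical by default; lay it flat
    node.addChildNode(planeNode)
    planeNodes[anchor.identifier] = planeNode
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeNode = planeNodes[anchor.identifier],
          let plane = planeNode.geometry as? SCNPlane else { return }
    plane.width = CGFloat(planeAnchor.extent.x)
    plane.height = CGFloat(planeAnchor.extent.z)
    planeNode.simdPosition = planeAnchor.center
}

// Later, to check whether a detected surface is still in view:
func isSurfaceVisible(for anchor: ARAnchor) -> Bool {
    guard let planeNode = planeNodes[anchor.identifier],
          let pointOfView = sceneView.pointOfView else { return false }
    return sceneView.isNode(planeNode, insideFrustumOf: pointOfView)
}
```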

In SceneKit, how do you make camera look at a specific face of a node's geometry?

In SceneKit, you can add a lookAtConstraint to your scene view's point of view to make the camera look at a certain node.
Is there a standard way of doing the same but for a specific face of a geometry?
So that, if I touch a specific face of a cube, the camera would move so that the Z axis of the camera node gets in line with the normal of the touched face? So that the cube would look like a plane from the new perspective.
No.
That would require movement of the camera, in addition to re-aiming it.
Imagine I'm in front of my house. I have a great view of the front and can just barely see the side to my left. In my Scene I tap the side of the house. A LookAt constraint would merely change the angle of the camera. It would not be aligned with the normal of that barely visible side.
To align with the normal, I'd have to walk around the house until I can stare at the house and be perpendicular to the side I tapped. At what radius? What path? You have to figure that out yourself.
Depending on what effect you're trying for, you might want to rotate the model instead of moving the camera. Rotate the tapped node locally (or as a child of an invisible parent) so that its minus-Z axis points out the tapped face, and keep a lookAtConstraint on the node, not the camera. This approach will change the look of the object, though: you will see it rotating, and the shading changing appropriately.
So that, if I touch a specific face of a cube, camera would move so that the Z axis of the camera node gets in line with the normal of the touched face?
Supposing you are using hit-testing to determine which object got touched, an SCNHitTestResult will give you both localCoordinates and localNormal, from which it should be fairly easy to derive a camera transform.
One easy way would be to have the camera as a child node of the box, compute a position such as localCoordinates + distance * localNormal, and finally build a transform using GLKMatrix4MakeLookAt and SCNMatrix4FromGLKMatrix4.
Note that you can also use worldCoordinates, worldNormal, as well as conversion utilities such as SCNNode.convertTransform(_:from:).
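A rough sketch of that approach on iOS (`sceneView`, `cameraNode`, and `tapLocation` are assumed to come from your own setup, and the 3-unit distance is just a placeholder):

```swift
import GLKit
import SceneKit

func moveCamera(toFaceAt tapLocation: CGPoint, in sceneView: SCNView, cameraNode: SCNNode) {
    guard let hit = sceneView.hitTest(tapLocation, options: nil).first else { return }

    // Parent the camera to the tapped node so localCoordinates/localNormal line up.
    hit.node.addChildNode(cameraNode)

    // Eye point: the tapped point pushed out along the face normal.
    let distance: Float = 3
    let eye = SCNVector3(hit.localCoordinates.x + distance * hit.localNormal.x,
                         hit.localCoordinates.y + distance * hit.localNormal.y,
                         hit.localCoordinates.z + distance * hit.localNormal.z)

    // GLKMatrix4MakeLookAt returns a view matrix; invert it to get the camera node's transform.
    // (The Y-up vector degenerates if the tapped face points straight up or down.)
    let viewMatrix = GLKMatrix4MakeLookAt(eye.x, eye.y, eye.z,
                                          hit.localCoordinates.x, hit.localCoordinates.y, hit.localCoordinates.z,
                                          0, 1, 0)
    cameraNode.transform = SCNMatrix4Invert(SCNMatrix4FromGLKMatrix4(viewMatrix))
}
```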
Building on mnuages' answer: use a hit test or ray trace to find where the user tapped on the mesh, then add a node at that location and constrain the camera to look at that node.
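A short sketch of that variant, reusing the hit-test result from the previous snippet (`targetNode` here is a hypothetical marker node):

```swift
// Drop an (invisible) marker node at the tapped point in world space...
let targetNode = SCNNode()
targetNode.position = hit.worldCoordinates
sceneView.scene?.rootNode.addChildNode(targetNode)

// ...and constrain the camera to keep looking at it.
let lookAt = SCNLookAtConstraint(target: targetNode)
lookAt.isGimbalLockEnabled = true   // keep the horizon level while re-aiming
sceneView.pointOfView?.constraints = [lookAt]
```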

Show two polygons (wrap them) at low zoom, when showing more than one complete earth

How can I wrap shapes around the world, so that a shape is shown more than once at low zoom?
Example:
I draw a polygon over the USA.
I zoom out so that I can see two USAs.
I only see one polygon :( I want to see two!
The map data effectively has 2 USAs. That implies you should actually want 2 polygons, one of which will be hidden most of the time.
Might as well cater for the worst case and treat a single USA as the exception rather than the rule.
You can't.
As others have already pointed out, the fact that certain features get repeated on either side of the map at far zoom levels is an unwanted but inevitable side-effect of a projected surface that enables continuous scrolling. This has only been an issue in recent versions of the Bing Maps control - the earlier v6.x control prevented the map from panning across the 180th meridian.
I cannot think of any possible reason why you'd ever want to show two USAs, let alone target data to be positioned on each one. So the solution is to modify either the zoom level at which the map is displayed, or the size of the application window in which it is being displayed so that this situation doesn't occur.
