Place a 3D object using iOS 11 ARKit (SceneKit) only if a proper horizontal plane is detected

I developed a POC using iOS 11 ARKit (SceneKit: https://developer.apple.com/sample-code/wwdc/2017/PlacingObjects.zip). When I try to place a 3D object using the camera, it detects horizontal planes and places the object on them.
But I am also able to place the object anywhere (even in the air) when no proper horizontal plane has been detected, which is not an ideal scenario.
Is it possible to restrict the user so that the object can only be placed on valid horizontal planes such as the floor, and not anywhere else? That is, I want to detect the level of the floor and place objects only on valid planes, not in the air.

You can limit the ARHitTestResults for the user's touch event by specifying the hit-test option ARHitTestResultTypeExistingPlaneUsingExtent:
CGPoint tapPoint = [recognizer locationInView:self.sceneView];
NSArray<ARHitTestResult *> *result = [self.sceneView hitTest:tapPoint types:ARHitTestResultTypeExistingPlaneUsingExtent];
Then place the selected object at the position specified by ARHitTestResult.worldTransform.
This ensures the user can only place objects on planes that ARKit has detected.
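For completeness, a minimal Swift sketch of the same idea is below; makeObjectNode() is a placeholder for however you create your own 3D content.

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    let tapPoint = recognizer.location(in: sceneView)
    // Only accept hits on planes ARKit has already detected, within their estimated extent.
    let results = sceneView.hitTest(tapPoint, types: .existingPlaneUsingExtent)
    guard let hit = results.first else { return } // no detected plane under the finger, so place nothing
    let node = makeObjectNode() // placeholder for your own content
    node.simdTransform = hit.worldTransform // position the node on the detected plane
    sceneView.scene.rootNode.addChildNode(node)
}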

Is it possible to drag-snap a point of a shape being edited by the drawing manager to another shape's point location?

I'd like a user to be able to draw a polygon using the Azure Maps Drawing Manager and have the ability to move a point of the polygon to near one of another polygon's points and have the dragged point snap to the same location such that the resulting 2 points would be the same.
I know there is snapping capability with a grid, but I don't see a sample for this behaviour.
The ultimate goal is to prevent polygon overlaps, assuming the intersecting shared line of adjoining shapes is excluded from determination of which polygon a point resides within.
I can of course let the user draw manually and get as close as possible, and add an assertion to confirm that no polygons overlap, but I would additionally like a nice snap-to-point experience if possible.
You can find hundreds of samples for Azure Maps here: https://samples.azuremaps.com/
As you noted, the snapping grid is likely the best place to start in your scenario. Here are some specific samples of this:
https://samples.azuremaps.com/?sample=use-a-snapping-grid
https://samples.azuremaps.com/?sample=snap-grid-options
The following sample is an example of a custom snapping scenario where the routing service is used to snap a drawn line to a route (the route part can be swapped out for custom logic): https://samples.azuremaps.com/?sample=snap-drawn-line-to-roads

Problem with Blender's Array Modifier to orbit or place objects in circular order (Blender 3.0)

I was having the problem mentioned above and found that many of the tutorials demonstrate the procedure only at the centre of the canvas (the world origin).
The question is how to perform it successfully at any location.
If you are familiar with Blender, the short answer is: keep the origins in the same place for both the centre and orbital objects, and make sure to apply the transforms.
For others:
Shift + Right-click to move the 3D cursor to the desired location.
Create the object that is to be orbited or rotated.
Shape it as desired, then with it selected press Ctrl + A and choose All Transforms. That moves its origin to the world origin; to correct it, in Object Mode select the object, right-click, choose Set Origin and pick Origin to Center of Mass so it stays in a visible place. (You can always edit your object in Edit Mode without destroying the array, though minor adjustments may be required.) (Centring the origin isn't mandatory at this step; it is described only to explain the process.)
Place the 3D cursor at the point of the orbital object from which it should face the centre. (For a complex object, go to Edit Mode, select a vertex, edge or face, press Shift + S and choose "Cursor to Selected".)
With the cursor at the desired point, create a cube or anything else that helps visualise the rotation. The cube, and its origin, will be placed exactly on the 3D cursor. Scale it as required, apply all transforms as before, and reposition the origin to the 3D cursor.
Select the orbital object and set its origin to the 3D cursor as well.
If required, add some spacing between the orbital object and the centre object (the cube).
(It is easier to visualise this if you do it after creating the array, in Edit Mode on the orbital object.)
With the orbital object selected, add an Array modifier. Set the desired number of objects under Count and set all axes of Relative Offset to 0 so that all orbital objects stay in the same plane. (Relative Offset comes pre-selected.)
Tick Object Offset and, with the eyedropper, select the cube.
Now, if the array does strange things like in the image below,
it means that either the centre and orbital objects do not have their origins at the same point (here, the 3D cursor), or you forgot to apply the transforms for the objects.
Now you can rotate the cube to get the desired circular arrangement.
To add spacing from the centre object, select the orbital object and press Tab for Edit Mode; there, select the geometry and move it along the upward axis to create the spacing. Moving geometry in Edit Mode does not affect the origin, which stays at the 3D cursor.
Beyond spacing, you can create various formations by rotating or moving the orbital object in Edit Mode.
Play with Relative Offset to get various looks.
The videos skim over setting origins and checking and re-checking transforms and origin points. This is a really important step, and even following your detailed instructions it took a few tries to get it right.
This circular array modification is really tricky to get right, but once learned it is a great tool for speeding up workflow.
Thanks for this.

ARKit: project a feature point found in the ARPointCloud to image space and check to see if it's contained in a CGRect on screen?

So, I am using ARKit to display feature points in the session. I am able to get the current frame, then its rawFeaturePoints, and place geometries in world space so the user can see them on screen. That is working great.
In the app I then have a quadrant on screen. My objective is to show, in screen coordinates, the feature points whose projections fall inside that 2D quadrant. To do that, I tried this:
get the feature points as an array of vector_float3
for each of those points, build an SCNVector3, setting the Z component to 0 (the near plane)
I then call on the ARSCNView:
public func projectPoint(_ point: SCNVector3) -> SCNVector3
This approach does give me 2D points back but, depending on where the camera is, they seem to be way off.
So, since in ARKit the camera keeps moving around, do I need to take that into account to achieve what I explained?
EDIT:
About flipping the Y of the CGPoint retrieved from the projectPoint call on the camera:
/**
Project a 3D point in world coordinate system into 2D viewport space.
@param point 3D point in world coordinate system.
@param orientation Viewport orientation.
@param viewportSize Viewport (or image) size.
@return 2D point in viewport coordinate system with origin at top-left.
*/
open func projectPoint(_ point: vector_float3, orientation: UIInterfaceOrientation, viewportSize: CGSize) -> CGPoint
Remy San mentioned flipping the Y. I tried that and it does seem to work. One difference between what he's doing and what I am doing is that I am not using an SKScene; I am using an SCNScene. Looking at the docs, it says:
...The projection of the specified point into a 2D pixel coordinate space
whose origin is in the upper left corner...
So what throws me off is that if I don't flip the Y, it doesn't seem to work properly. (I'll try to post images to show what I mean.) But if flipping the Y makes things look better, that goes against the docs, no?
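For reference, a minimal sketch of that approach using the ARCamera method quoted above; quadrantRect stands in for the on-screen region, and since the method returns viewport points with a top-left origin they can be compared against a UIKit CGRect directly:

import ARKit

func featurePoints(in quadrantRect: CGRect, of sceneView: ARSCNView) -> [CGPoint] {
    guard let frame = sceneView.session.currentFrame,
          let points = frame.rawFeaturePoints?.points else { return [] }
    let viewportSize = sceneView.bounds.size
    // Project every raw feature point into viewport space (origin at the top-left, per the docs above).
    let projected = points.map {
        frame.camera.projectPoint($0, orientation: .portrait, viewportSize: viewportSize)
    }
    // Keep only the points whose projections fall inside the on-screen quadrant.
    return projected.filter { quadrantRect.contains($0) }
}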
I gather you are using the intrinsics matrix for your projection. ARKit may also give you some extra information: the cameraPoseARFrame, the projectionMatrix and the transformToWorldMap matrices. Are you taking them into consideration when transforming from world coordinates to pixel coordinates?
If anyone has a methodology for applying these matrices to the point cloud coordinates to convert them into screen coordinates, could you contribute to my answer please? I think they may provide more precision and accuracy in the final result.
Thank you!

How to find out if a surface detected by ARKit is no longer available?

I am working on an application with the ARKit and SceneKit frameworks. In my application I have enabled surface detection (I followed the Placing Objects sample provided by Apple). How do I find out whether a detected surface is no longer available? That is, I initially allow the user to place the 3D object only once a surface has been detected in the ARSession.
But if the user moves rapidly or points the camera elsewhere, the detected surface is lost. In that case, if the user tries to place another object, I shouldn't allow it until he scans the floor again and the surface is detected correctly.
Is there any delegate method available to let us know that a detected surface is no longer available?
There are delegate functions that you can use. The delegate is ARSCNViewDelegate.
It has a function, renderer(_:didRemove:for:), that fires when an ARAnchor has been removed. You can use this function to perform some operation when a surface is removed.
See the ARSCNViewDelegate documentation.
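A minimal sketch of that callback, assuming your view controller is the sceneView's delegate (isPlacementAllowed is a hypothetical flag in your own code):

func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }
    // A detected plane's anchor was removed from the session; block placement until a new plane appears.
    DispatchQueue.main.async {
        self.isPlacementAllowed = false // hypothetical flag
    }
}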
There are two ways to “lose” a surface, so there’s more than one approach to dealing with such a problem.
As noted in the other answer, there’s an ARSCNViewDelegate method that ARKit calls when an anchor is removed from the AR session. However, ARKit doesn’t remove plane anchors during a running session — once it’s detected a plane, it assumes the plane is always there. So that method gets called only if:
You remove the plane anchor directly by passing it to session.remove(anchor:), or
You reset the session by running it again with the .removeExistingAnchors option.
I’m not sure the former is a good idea, but the latter is important to handle, so you probably want your delegate to handle it well.
You can also “lose” a surface by having it pass out of view — for example, ARKit detects a table, and then the user turns around so the camera isn’t pointed at or near the table anymore.
ARKit itself doesn’t offer you any help for dealing with this problem. It gives you all the info you need to do the math yourself, though. You get the plane anchor’s position, orientation, and size, so you can calculate its four corner points. And you get the camera’s projection matrix, so you can check for whether any point is in the viewing frustum.
Since you’re already using SceneKit, though, there are also ways to get SceneKit to do the math for you... Working backwards:
SceneKit gives you an isNode(_:insideFrustumOf:) test, so if you have a SCNNode whose bounding box matches the extent of your plane anchor, you can pass that along with the camera (view.pointOfView) to find out if the node is visible.
To get a node whose bounding box matches a plane anchor, implement the ARSCNViewDelegate didAdd and didUpdate callbacks to create/update an SCNPlane whose position and dimensions match the ARPlaneAnchor’s center and extent. (Don’t forget to flip the plane sideways, since SCNPlane is vertically oriented by default.)
If you don’t want that plane visible in the AR view, set its materials to be transparent.
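A rough sketch of those pieces together (the planeNodes dictionary and the convenience check are my own naming, a starting point rather than a drop-in solution):

var planeNodes: [UUID: SCNNode] = [:] // anchor identifier -> invisible plane node

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.transparency = 0 // keep it invisible in the AR view
    let planeNode = SCNNode(geometry: plane)
    planeNode.simdPosition = planeAnchor.center
    planeNode.eulerAngles.x = -.pi / 2 // SCNPlane is vertical by default, so lay it flat
    node.addChildNode(planeNode)
    planeNodes[anchor.identifier] = planeNode
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeNode = planeNodes[anchor.identifier],
          let plane = planeNode.geometry as? SCNPlane else { return }
    plane.width = CGFloat(planeAnchor.extent.x)
    plane.height = CGFloat(planeAnchor.extent.z)
    planeNode.simdPosition = planeAnchor.center
}

// Call this before allowing the user to place an object.
func isAnyDetectedPlaneVisible(in view: ARSCNView) -> Bool {
    guard let pointOfView = view.pointOfView else { return false }
    return planeNodes.values.contains { view.isNode($0, insideFrustumOf: pointOfView) }
}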

Converting mouse position to world position OpenGL

Hey, I'm working on a map editor for my game, and I'm trying to convert the mouse position to a position in the game world. The view is set up using gluPerspective.
A good place to start would be the function gluUnProject, which takes mouse coordinates and calculates object space coordinates. Take a look at http://nehe.gamedev.net/data/articles/article.asp?article=13 for a basic tutorial.
UPDATE:
You must enable depth buffering for the code in that article to work. The Z value for mouse coordinates is determined based on the value in the depth buffer at that point.
In your initialization code, make sure you do the following:
glEnable(GL_DEPTH_TEST);
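For illustration only, here is a sketch of the math gluUnProject performs, written in Swift with simd rather than C; winZ is the value read from the depth buffer at the mouse position, and the mouse Y must first be flipped into OpenGL's bottom-left-origin window coordinates:

import simd

func unproject(winX: Float, winY: Float, winZ: Float,
               viewport: (x: Float, y: Float, width: Float, height: Float),
               projection: simd_float4x4, modelView: simd_float4x4) -> SIMD3<Float>? {
    // Window coordinates -> normalized device coordinates in [-1, 1].
    let ndc = SIMD4<Float>((winX - viewport.x) / viewport.width * 2 - 1,
                           (winY - viewport.y) / viewport.height * 2 - 1,
                           winZ * 2 - 1,
                           1)
    // Undo the projection and model-view transforms, then apply the perspective divide.
    let world = (projection * modelView).inverse * ndc
    guard world.w != 0 else { return nil }
    return SIMD3<Float>(world.x, world.y, world.z) / world.w
}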
A point on the screen represents an entire line (an infinite set of points) in 3D space.
Most people with questions similar to yours are really trying to select an object by clicking on it. If that's what you're after, OpenGL offers a selection mode that's generally more effective than trying to convert the screen coordinate into real-world coordinates.
Using selection mode is (usually) pretty simple: you start with gluPickMatrix, which you use to specify a small box around the click point. You then draw your scene in selection mode. When you're done, instead of actually drawing anything, it gives you back records of what would have been drawn in the box you specified. If memory serves, those are arranged in Z order, so the first one in the list is what would have been displayed front-most (i.e., the one you usually want).
