ARKit - ARReferenceImage tracking - iOS 11

I'm playing around with ARReferenceImages in ARKit and I'm trying to add an SCNNode when a reference image is recognised and then leave that node in place regardless of whether the same reference image is then recognised elsewhere.
I can add my SCNNode correctly, but if I move my marker, the image is picked up again and my placed node moves to the marker's new position.
My code to add is as follows:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    print("MAPNODE IS NIL = \(self.mapNode == nil)")

    updateQueue.async {
        if self.mapNode == nil {
            // Create a plane to visualize the initial position of the detected image.
            let plane = SCNPlane(width: 1.2912, height: 1.2912)
            let planeNode = SCNNode(geometry: plane)
            planeNode.opacity = 1

            /*
             `SCNPlane` is vertically oriented in its local coordinate space, but
             `ARImageAnchor` assumes the image is horizontal in its local space, so
             rotate the plane to match.
             */
            planeNode.eulerAngles.x = -.pi / 2

            self.mapNode = planeNode

            /*
             Image anchors are not tracked after initial detection, so create an
             animation that limits the duration for which the plane visualization appears.
             */

            // Add the plane visualization to the scene.
            node.addChildNode(planeNode)
        }
    }
}
Reading the docs here https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience#2958517, it states:
Apply Best Practices
This example app simply visualizes where ARKit detects each reference
image in the user’s environment, but your app can do much more. Follow
the tips below to design AR experiences that use image detection well.
Use detected images to set a frame of reference for the AR scene.
Instead of requiring the user to choose a place for virtual content,
or arbitrarily placing content in the user’s environment, use detected
images to anchor the virtual scene. You can even use multiple detected
images. For example, an app for a retail store could make a virtual
character appear to emerge from a store’s front door by recognizing
posters placed on either side of the door and then calculating a
position for the character directly between the posters.
Note
Use the ARSession setWorldOrigin(relativeTransform:) method to
redefine the world coordinate system so that you can place all anchors
and other content relative to the reference point you choose.
Design your AR experience to use detected images as a starting point
for virtual content. ARKit doesn’t track changes to the position or
orientation of each detected image. If you try to place virtual
content that stays attached to a detected image, that content may not
appear to stay in place correctly. Instead, use detected images as a
frame of reference for starting a dynamic scene. For example, your app
might recognize theater posters for a sci-fi film and then have
virtual spaceships appear to emerge from the posters and fly around
the environment.
So I tried setting my world transform to be equal to the transform of my image anchor
self.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
However, my mapNode follows the imageAnchor wherever it moves. I haven't implemented the renderer update method, so I'm not sure why it keeps moving.
I'm assuming that the setWorldOrigin method is constantly updating to the imageAnchor.transform rather than using it at just that moment in time, which is weird as that code is only called once. Any ideas?

If you want to add the mapNode at the position of the ARImageAnchor, you could set the position of your mapNode from the transform of the ARImageAnchor and add it to the scene, but not linked to the reference image, if that makes sense.
This could be done like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. An ImageAnchor Is Only Added Once For Each Identified Target
    print("Anchor ID = \(currentImageAnchor.identifier)")

    //3. Add An SCNNode At The Position Of The Identified ImageTarget
    let nodeHolder = SCNNode()
    let nodeGeometry = SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
    nodeHolder.position = SCNVector3(currentImageAnchor.transform.columns.3.x,
                                     currentImageAnchor.transform.columns.3.y,
                                     currentImageAnchor.transform.columns.3.z)

    augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
}
In another part of your question you seem to imply that you want to detect multiple occurrences of the same image. I could be wrong but I think the only way to do this is to remove the corresponding ARImageAnchor for the reference image, which can be done like so (by adding it at the end of the last code snippet):
augmentedRealitySession.remove(anchor: currentImageAnchor)
The issue here is that once the ARImageAnchor is removed, you have to decide, each time the image is detected again, whether content should be added. This is tricky, since the ARImageAnchor.identifier is always the same for a given referenceImage, regardless of whether the anchor is removed and re-added, which makes it difficult to use as a dictionary key. Depending on your needs, you would then need some way to determine whether content already exists at that location and whether to re-add it.
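For illustration, one possible (untested) way to handle that is to key the placed content on the reference image's optional name from the asset catalogue, rather than on the anchor's identifier, and remove the anchor once the content is placed; placedImageNames and the box geometry here are of course just placeholders:

// A rough sketch, not a definitive implementation: remember which reference
// images have already spawned content, keyed by the `name` assigned in the
// asset catalogue, so a removed-and-re-detected anchor doesn't duplicate it.
var placedImageNames = Set<String>()

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let imageName = imageAnchor.referenceImage.name else { return }

    // Only place content the first time this particular image is detected.
    if !placedImageNames.contains(imageName) {
        placedImageNames.insert(imageName)

        let nodeHolder = SCNNode()
        nodeHolder.geometry = SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0)
        nodeHolder.position = SCNVector3(imageAnchor.transform.columns.3.x,
                                         imageAnchor.transform.columns.3.y,
                                         imageAnchor.transform.columns.3.z)
        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
    }

    // Remove the anchor so the same image can be detected again elsewhere.
    augmentedRealitySession.remove(anchor: imageAnchor)
}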
The last part of your question about setWorldOrigin does seem a bit odd, like you said, but maybe you could add a Bool to prevent it from changing more than once, e.g.:
var hasSetWorldOrigin = false
Then based on this you could ensure that it is only set once e.g:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. If We Haven't Set The World Origin, Set It Based On The ImageAnchor Transform
    if !hasSetWorldOrigin {
        self.augmentedRealitySession.setWorldOrigin(relativeTransform: currentImageAnchor.transform)
        hasSetWorldOrigin = true

        //3. Create Two Nodes To Add To The Scene And Distribute Them
        let nodeHolderA = SCNNode()
        let nodeGeometryA = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryA.firstMaterial?.diffuse.contents = UIColor.green
        nodeHolderA.geometry = nodeGeometryA

        let nodeHolderB = SCNNode()
        let nodeGeometryB = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryB.firstMaterial?.diffuse.contents = UIColor.red
        nodeHolderB.geometry = nodeGeometryB

        if let cameraTransform = augmentedRealitySession.currentFrame?.camera.transform {
            nodeHolderA.simdPosition = float3(cameraTransform.columns.3.x,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
            nodeHolderB.simdPosition = float3(cameraTransform.columns.3.x + 0.2,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
        }

        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderA)
        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderB)
    }
}
Hopefully my answer provides a useful starting point, assuming of course I have interpreted your question correctly.

Related

How to get SceneKit node placed over detected image to stay put as camera moves?

I'm very new to ARKit, and have built a small app based on Apple's "Detecting Images in an AR Experience" sample app. The sample app places a plane over the detected image; my app places an SCNBox at the center of it instead.
If the detected image is displayed on my monitor, I can move the phone all around and the box will stay fixed at the center of the image, which is what I want. If, however, the image is displayed on another iPhone, the box will move around as I move the phone running my app. Possibly due to the smaller screen size?
Here is my implementation of the didAdd delegate method:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    updateQueue.async {
        let cube = self.createCubeNode() // returns an SCNNode with SCNBox geometry
        let position = SCNVector3(x: imageAnchor.transform.columns.3.x,
                                  y: imageAnchor.transform.columns.3.y,
                                  z: imageAnchor.transform.columns.3.z)
        cube.worldPosition = position
        self.sceneView.scene.rootNode.addChildNode(cube)
    }
}
I found the answer. Instead of positioning and adding my cube node in didAdd, returning the node from nodeFor and letting the system place it completely fixes the problem. So I have removed the above code and replaced it with:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // make sure this is an image anchor, otherwise bail out
    guard let _ = anchor as? ARImageAnchor else { return nil }
    return self.createCubeNode() // returns an SCNNode with SCNBox geometry
}
I got this idea from a Hacking With Swift article: https://www.hackingwithswift.com/example-code/arkit/how-to-detect-images-using-arimagetrackingconfiguration
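For what it's worth, that article's approach relies on ARImageTrackingConfiguration (iOS 12+), which tracks images continuously instead of detecting them once the way ARWorldTrackingConfiguration does. A minimal sketch, assuming a reference image group named "AR Resources" in the asset catalogue:

import ARKit

// Minimal sketch: run continuous image tracking so anchored content
// follows the image as it moves.
func runImageTracking(on sceneView: ARSCNView) {
    let configuration = ARImageTrackingConfiguration()
    if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                             bundle: nil) {
        configuration.trackingImages = trackingImages
        configuration.maximumNumberOfTrackedImages = 1
    }
    sceneView.session.run(configuration)
}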

ARSCNPlaneGeometry update and re-calculate texture coordinates, instead of stretching them

I'm having a problem with the texture coordinates of plane geometries being updated by ARKit: texture images get stretched, and I want to avoid that.
Right now I'm detecting horizontal and vertical walls and applying a texture to them. It's working like a charm...
But when the geometry gets updated because the detected wall/floor extends, the texture coordinates get stretched instead of re-mapped, causing the texture to look stretched like the image below.
You can also see an un-edited video of the problem happening: https://www.youtube.com/watch?v=wfwYPwzND74
This is the piece of code where the geometry gets updated:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
guard let planeAnchor = anchor as? ARPlaneAnchor else {
return
}
let planeGeometry = ARSCNPlaneGeometry(device: device)!
planeGeometry.update(from: planeAnchor.geometry)
//I suppose I need to do some texture re-mapping here.
planeGeometry.materials = node.geometry!.materials
node.geometry = planeGeometry
}
I have seen that you can define the texture coordinates by providing them as a geometry source, like this:
let textCords: [vector_float2] = [] //Array of coordinates
let uvData = Data(bytes: textCords, count: textCords.count * MemoryLayout<vector_float2>.size)
let textureSource = SCNGeometrySource(data: uvData,
                                      semantic: .texcoord,
                                      vectorCount: textCords.count,
                                      usesFloatComponents: true,
                                      componentsPerVector: 2,
                                      bytesPerComponent: MemoryLayout<Float>.size,
                                      dataOffset: 0,
                                      dataStride: MemoryLayout<vector_float2>.size)
But I have no idea how to fill the textCords array so that it maps correctly onto the updated planeGeometry.
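Sketching what I mean (my own untested idea, assuming a 1-UV-unit-per-metre mapping and repeating wrap modes on the material), maybe the array could be derived from the anchor's own mesh:

// Untested sketch: map each vertex's x/z position in anchor space (metres)
// straight to UV, so the texture tiles at a fixed world-space scale instead
// of stretching as the plane's extent grows.
let vertices = planeAnchor.geometry.vertices          // [vector_float3]
let textCords: [vector_float2] = vertices.map { vertex in
    vector_float2(vertex.x, vertex.z)                 // 1 UV unit per metre
}
// The material would then need repeating wrap modes:
// material.diffuse.wrapS = .repeat
// material.diffuse.wrapT = .repeat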
Edit:
Re-defining the approach:
Thinking more about the problem, I came to the idea that I need to modify the texture's transform to fix the stretching, which leaves me two options:
Either keep the texture big enough to fill the entire geometry, while keeping a 1:1 ratio to avoid stretching.
Or keep the texture at its original size, with a 1:1 aspect ratio, and repeat it multiple times to fill the entire geometry.
With either approach I'm still lost on how to do it. What would you suggest?
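For the second option, I imagine something like the following (untested, and assuming the node's geometry was set to an ARSCNPlaneGeometry when the anchor was added): leave ARKit's 0-1 texture coordinates alone and scale the material transform by the plane's extent instead, so the texture keeps a constant world-space size:

// Untested sketch of option 2: scale UVs by the plane's size in metres so
// the texture repeats at a constant world-space scale as the plane grows.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeGeometry = node.geometry as? ARSCNPlaneGeometry else { return }

    planeGeometry.update(from: planeAnchor.geometry)

    if let material = planeGeometry.firstMaterial {
        material.diffuse.wrapS = .repeat
        material.diffuse.wrapT = .repeat
        // One texture repeat per metre; adjust the scale to taste.
        material.diffuse.contentsTransform = SCNMatrix4MakeScale(planeAnchor.extent.x,
                                                                 planeAnchor.extent.z,
                                                                 1)
    }
}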

How to color an SCNPlane with 2 different materials?

I have an SCNPlane that I created in the SceneKit editor, and I want one side of the plane to have a certain image and the other side to have another image. How do I do that in the SceneKit editor?
So far I've tried adding 2 materials to the plane, and I tried adding 2 materials with double-sided unchecked, but that doesn't work.
Any help would be appreciated!
Per the SCNPlane docs:
The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.
That implies a plane has only one material — isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.
If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
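For the thin-box route, something like this could work (the image names are placeholders). SCNBox applies up to six materials in the order front, right, back, left, top, bottom, so the first and third slots are the two faces you care about:

// Sketch: a 1 mm thick box standing in for a two-sided plane.
let box = SCNBox(width: 1.0, height: 1.0, length: 0.001, chamferRadius: 0)

let front = SCNMaterial()
front.diffuse.contents = UIImage(named: "front.png")   // placeholder image
let back = SCNMaterial()
back.diffuse.contents = UIImage(named: "back.png")     // placeholder image
let side = SCNMaterial()
side.diffuse.contents = UIColor.black                  // barely-visible edges

// Material order for SCNBox: front, right, back, left, top, bottom.
box.materials = [front, side, back, side, side, side]

let planeNode = SCNNode(geometry: box)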
Very easy to do in 2022.
It's very easy and common to do this: you just add the rear as a child.
To be clear, the node (and the rear you add) should both be single-sided (isDoubleSided false).
Obviously, the rear you add points in the other direction!
Do note that they are indeed in "exactly the same place". Sometimes folks new to 3D meshes think the two meshes would need to be "a little apart"; not so.
public var rear = SCNNode()
private var theRearPlane = SCNPlane()

private func addRear() {
    addChildNode(rear)
    rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)

    theRearPlane. ... // set width, height, etc.
    theRearPlane.firstMaterial?.isDoubleSided = false

    rear.geometry = theRearPlane
    rear.geometry?.firstMaterial!.diffuse.contents = ... // your rear image, etc.
}
So ...
///Double-sided sprite
class SCNTwoSidedNode: SCNNode {
    public var rear = SCNNode()
    private var thePlane = SCNPlane()

    override init() {
        super.init()
        thePlane. ... // set size, etc.
        thePlane.firstMaterial?.isDoubleSided = false
        thePlane.firstMaterial?.transparencyMode = .aOne
        geometry = thePlane
        addRear()
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
Consuming code can just refer to .rear, for example:
playerNode. ...      // the drawing of the Druid
playerNode.rear. ... // Druid rules and abilities text
enemyNode. ...       // the drawing of the Mage
enemyNode.rear. ...  // Mage rules and abilities text
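Concretely, that might look like this (the image names are placeholders):

// Hypothetical usage of the SCNTwoSidedNode class above.
let playerNode = SCNTwoSidedNode()
playerNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "druid-front.png")
playerNode.rear.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "druid-rules.png")
scene.rootNode.addChildNode(playerNode) // assuming `scene` is your SCNScene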
If you want to do this in the visual editor - very easy
It's trivial: simply add the rear as a child and rotate the child 180 degrees on Y.
Make them both single-sided and put anything you want on the front and rear.
Simply move the main one (the front) normally and everything works.

image array creating unwanted image in top corner

I have this set up in my main.lua file
images = {
    display.newImageRect("images/player3.png", 45, 35),
    display.newImageRect("images/player4.png", 45, 35),
    display.newImageRect("images/player5.png", 45, 35)
}
and call it from my game.lua file with this code:
bird = images[math.random(1,3)]
bird.x, bird.y = -80, 140
bird.name = "bird"
physics.addBody( bird, "static")
bird.isFixedRotation = true
birdIntro = transition.to(bird,{time=1000, x=100, onComplete= birdReady})
A random image spawns where it should (in the middle), but a second image also spawns and sits in the top-left corner of the screen (slightly off screen). I can't seem to remove it and keep only the correct image. Any solutions?
Your help is greatly appreciated.
Thanks
I think the problem is your images array. I've found that calling display.newImageRect creates the object and plops it on the screen right away, even if you don't want to show it yet. Usually I see this when I should be putting it in a display group (not sure if you are doing that).
I think the solution is to have the array hold the image paths instead, and then set bird equal to a new image rect created from one of those paths at that point, thus creating only one image rectangle instead of three.
images = {
    "images/player3.png",
    "images/player4.png",
    "images/player5.png",
}
bird = display.newImageRect(images[math.random(1,3)],45,35)
Let me know if that works.

initialLayoutAttributesForAppearingItemAtIndexPath fired for all visible cells, not just inserted cells

Has anyone seen a decent answer to this problem?
initialLayoutAttributesForAppearingItemAtIndexPath seems to be called for all visible cells, not just the cell being inserted. According to Apple's own docs:
For moved items, the collection view uses the standard methods to retrieve the item’s updated layout attributes. For items being inserted or deleted, the collection view calls some different methods, which you should override to provide the appropriate layout information
That doesn't sound like what is happening... the other cells aren't being inserted, they are being moved, yet initialLayoutAttributesForAppearingItemAtIndexPath is called for the moved ones too.
I have seen workarounds that use prepareForCollectionViewUpdates: to track which indexPaths are being updated and only change those, but it seems odd that the behaviour goes against Apple's own docs. Has anyone found a better way around this?
I found this blog post by Mark Pospesel to be helpful.
The author also fixed the WWDC CircleLayout sample and posted it on GitHub.
Methods of interest:
- (void)prepareForCollectionViewUpdates:(NSArray *)updateItems
{
    // Keep track of insert and delete index paths
    [super prepareForCollectionViewUpdates:updateItems];

    self.deleteIndexPaths = [NSMutableArray array];
    self.insertIndexPaths = [NSMutableArray array];

    for (UICollectionViewUpdateItem *update in updateItems)
    {
        if (update.updateAction == UICollectionUpdateActionDelete)
        {
            [self.deleteIndexPaths addObject:update.indexPathBeforeUpdate];
        }
        else if (update.updateAction == UICollectionUpdateActionInsert)
        {
            [self.insertIndexPaths addObject:update.indexPathAfterUpdate];
        }
    }
}

- (void)finalizeCollectionViewUpdates
{
    [super finalizeCollectionViewUpdates];

    // release the insert and delete index paths
    self.deleteIndexPaths = nil;
    self.insertIndexPaths = nil;
}

// Note: name of method changed
// Also this gets called for all visible cells (not just the inserted ones) and
// even gets called when deleting cells!
- (UICollectionViewLayoutAttributes *)initialLayoutAttributesForAppearingItemAtIndexPath:(NSIndexPath *)itemIndexPath
{
    // Must call super
    UICollectionViewLayoutAttributes *attributes = [super initialLayoutAttributesForAppearingItemAtIndexPath:itemIndexPath];

    if ([self.insertIndexPaths containsObject:itemIndexPath])
    {
        // only change attributes on inserted cells
        if (!attributes)
            attributes = [self layoutAttributesForItemAtIndexPath:itemIndexPath];

        // Configure attributes ...
        attributes.alpha = 0.0;
        attributes.center = CGPointMake(_center.x, _center.y);
    }

    return attributes;
}

// Note: name of method changed
// Also this gets called for all visible cells (not just the deleted ones) and
// even gets called when inserting cells!
- (UICollectionViewLayoutAttributes *)finalLayoutAttributesForDisappearingItemAtIndexPath:(NSIndexPath *)itemIndexPath
{
    // So far, calling super hasn't been strictly necessary here, but leaving it in
    // for good measure
    UICollectionViewLayoutAttributes *attributes = [super finalLayoutAttributesForDisappearingItemAtIndexPath:itemIndexPath];

    if ([self.deleteIndexPaths containsObject:itemIndexPath])
    {
        // only change attributes on deleted cells
        if (!attributes)
            attributes = [self layoutAttributesForItemAtIndexPath:itemIndexPath];

        // Configure attributes ...
        attributes.alpha = 0.0;
        attributes.center = CGPointMake(_center.x, _center.y);
        attributes.transform3D = CATransform3DMakeScale(0.1, 0.1, 1.0);
    }

    return attributes;
}
You're not alone. The UICollectionViewLayout header file comments make things a little clearer.
For each element on screen before the invalidation,
finalLayoutAttributesForDisappearingXXX will be called and an
animation setup from what is on screen to those final attributes.
For each element on screen after the invalidation,
initialLayoutAttributesForAppearingXXX will be called and an animation
setup from those initial attributes to what ends up on screen.
Basically finalLayoutAttributesForDisappearingItemAtIndexPath is called for each item on screen before the animation block starts, and initialLayoutAttributesForAppearingItemAtIndexPath is called for each item on screen after the animation block ends. It's up to you to cache the array of UICollectionViewUpdateItem objects sent to prepareForCollectionViewUpdates so you know how to set up the initial and final attributes. In my case I cached the previous layout rectangles in prepareLayout so I knew the correct initial positions to use.
One thing that stumped me for a while is you should use super's implementation of initialLayoutAttributesForAppearingItemAtIndexPath and modify the attributes it returns. I was just calling layoutAttributesForItemAtIndexPath in my implementation, and animations weren't working because the layout positions were different.
If you've subclassed UICollectionViewFlowLayout, you can call the super implementation. Once you've got the default initial layout, check the alpha: if it's anything other than 0, the cell is being moved; if it's 0, the cell is being inserted.
Bit of a hack, I know, but it works 👍.
Swift 2.0 implementation follows:
override func initialLayoutAttributesForAppearingItemAtIndexPath(itemIndexPath: NSIndexPath) -> UICollectionViewLayoutAttributes? {
    guard let attributes = super.initialLayoutAttributesForAppearingItemAtIndexPath(itemIndexPath) where attributes.alpha == 0 else {
        return nil
    }
    // modify attributes for insertion here
    return attributes
}
Make sure you're using the new method signature in Swift 3; autocorrection doesn't work for this method:
func initialLayoutAttributesForAppearingItem(at itemIndexPath: IndexPath) -> UICollectionViewLayoutAttributes?
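Putting that together with the alpha check above, an (untested) Swift 3 version would be:

override func initialLayoutAttributesForAppearingItem(at itemIndexPath: IndexPath) -> UICollectionViewLayoutAttributes? {
    // `where` in guard conditions is replaced by a comma in Swift 3.
    guard let attributes = super.initialLayoutAttributesForAppearingItem(at: itemIndexPath),
          attributes.alpha == 0 else {
        return nil
    }
    // modify attributes for insertion here
    return attributes
}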
