How can I move an SCNNode in SceneKit, but affected by gravity? - scenekit

SCNAction.move(to:duration:) doesn't do what I want: it moves the shape, but the shape doesn't fall.
Is there some other action I should use?

If you haven't already, you'll need to give your SCNNode a physicsBody and configure it prior to adding the node to the scene.
let body = SCNPhysicsBody(type: .dynamic, shape: nil) // .dynamic responds to gravity; nil derives the shape from the node's geometry
myNode.physicsBody = body
Then make sure the body's isAffectedByGravity property is true (this is already the default for dynamic bodies).
myNode.physicsBody?.isAffectedByGravity = true
Now, when you add the node to the scene, it will fall.
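For reference, here's a minimal hedged sketch of the whole setup; the box, floor, and scene names are illustrative and not from the question:
// A dynamic box dropped onto a static floor
let boxNode = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
boxNode.position = SCNVector3(0, 5, 0)
boxNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
boxNode.physicsBody?.isAffectedByGravity = true // already the default for dynamic bodies
// A static floor for it to land on
let floorNode = SCNNode(geometry: SCNFloor())
floorNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
scene.rootNode.addChildNode(boxNode)
scene.rootNode.addChildNode(floorNode)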

Related

What is SceneKit doing between calls to didApplyConstraints and willRenderScene?

The SceneKit rendering loop is well documented here: https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here: https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However, neither of these documents explains what SceneKit does between calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal, the second uses the depth buffer from the first but draws just a subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, gaussian blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes but they both draw everything) the extra 5ms disappears and the overall frame rate is actually faster even though much more rendering is happening.
I'm writing Swift targeting macOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
// Create the Metal library and the SCNProgram that replaces SceneKit's default shading for the glow pass
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
// Attach the program to a material and bind the emission texture to the shader's "tex" argument
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
// clone() shares the geometry with the original, so rebuild the geometry before
// swapping in the glow material, otherwise the original node's material would change too
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>
struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};
struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};
struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};
vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}
constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);
fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes, you're invalidating parts of the scene graph, which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
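As a hedged sketch of that suggestion, assuming the two passes are driven from an SCNRenderer (the renderer, time, viewport and pass-descriptor names here are illustrative):
// The glow clones already carry categoryBitMask = 2 (see the setup code above).
// Instead of toggling opacity, restrict what the camera renders in each pass:
let normalMask = 1 // default categoryBitMask of ordinary nodes
let glowMask = 2 // the mask already assigned to the glow clones
// First pass: render everything
renderer.pointOfView?.camera?.categoryBitMask = normalMask | glowMask
renderer.render(atTime: time, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: mainPassDescriptor)
// Second pass: render only the glow nodes into the separate colour buffer
renderer.pointOfView?.camera?.categoryBitMask = glowMask
renderer.render(atTime: time, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: glowPassDescriptor)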

ARKit - ARReferenceImage tracking

I'm playing around with ARReferenceImages in ARKit and I'm trying to add an SCNNode when a reference image is recognised and then leave that node in place regardless of whether the same reference image is then recognised elsewhere.
I can add my SCNNode correctly, but if I move my marker it picks it up again and moves my placed node to the position of the marker.
My code to add is as follows:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    print("MAPNODE IS NIL = \(self.mapNode == nil)")
    updateQueue.async {
        if self.mapNode == nil {
            // Create a plane to visualize the initial position of the detected image.
            let plane = SCNPlane(width: 1.2912,
                                 height: 1.2912)
            let planeNode = SCNNode(geometry: plane)
            planeNode.opacity = 1
            /*
             `SCNPlane` is vertically oriented in its local coordinate space, but
             `ARImageAnchor` assumes the image is horizontal in its local space, so
             rotate the plane to match.
             */
            planeNode.eulerAngles.x = -.pi / 2
            self.mapNode = planeNode
            /*
             Image anchors are not tracked after initial detection, so create an
             animation that limits the duration for which the plane visualization appears.
             */
            // Add the plane visualization to the scene.
            node.addChildNode(planeNode)
        }
    }
}
Reading the docs here (https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience#2958517), it states:
Apply Best Practices
This example app simply visualizes where ARKit detects each reference
image in the user’s environment, but your app can do much more. Follow
the tips below to design AR experiences that use image detection well.
Use detected images to set a frame of reference for the AR scene.
Instead of requiring the user to choose a place for virtual content,
or arbitrarily placing content in the user’s environment, use detected
images to anchor the virtual scene. You can even use multiple detected
images. For example, an app for a retail store could make a virtual
character appear to emerge from a store’s front door by recognizing
posters placed on either side of the door and then calculating a
position for the character directly between the posters.
Note
Use the ARSession setWorldOrigin(relativeTransform:) method to
redefine the world coordinate system so that you can place all anchors
and other content relative to the reference point you choose.
Design your AR experience to use detected images as a starting point
for virtual content. ARKit doesn’t track changes to the position or
orientation of each detected image. If you try to place virtual
content that stays attached to a detected image, that content may not
appear to stay in place correctly. Instead, use detected images as a
frame of reference for starting a dynamic scene. For example, your app
might recognize theater posters for a sci-fi film and then have
virtual spaceships appear to emerge from the posters and fly around
the environment.
So I tried setting the world origin to the transform of my image anchor:
self.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
However, my mapNode follows the imageAnchor wherever it moves. I haven't implemented the renderer update method, so I'm not sure why this keeps moving.
I'm assuming that the setWorldOrigin method is constantly updating to the imageAnchor.transform and not just that moment in time, which is weird as that code is only called once. Any ideas?
If you want to add the mapNode at the position of the ARImageAnchor, you could set the node's position from the anchor's transform and add it directly to the scene's root node, so that it isn't linked to the reference image.
This could be done like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected, Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    //2. An ImageAnchor Is Only Added Once For Each Identified Target
    print("Anchor ID = \(currentImageAnchor.identifier)")
    //3. Add An SCNNode At The Position Of The Identified ImageTarget
    let nodeHolder = SCNNode()
    let nodeGeometry = SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
    nodeHolder.position = SCNVector3(currentImageAnchor.transform.columns.3.x,
                                     currentImageAnchor.transform.columns.3.y,
                                     currentImageAnchor.transform.columns.3.z)
    augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
}
In another part of your question you seem to imply that you want to detect multiple occurrences of the same image. I could be wrong but I think the only way to do this is to remove the corresponding ARImageAnchor for the reference image, which can be done like so (by adding it at the end of the last code snippet):
augmentedRealitySession.remove(anchor: currentImageAnchor)
The issue here is that once the ARImageAnchor is removed, you have to decide what to do each time the image is detected again. That's tricky because ARImageAnchor.identifier is always the same for a given referenceImage, regardless of whether the anchor has been removed and re-added, which makes it awkward to track in a dictionary. So, depending on your needs, you would need some way to determine whether content already exists at that location and whether to re-add it; one possible approach is sketched below.
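For example (a hedged sketch; placedPositions, minimumSeparation and handleDetection are illustrative names, and the distance threshold is arbitrary):
// Remember where content has been placed and skip detections that land too close to an existing placement
var placedPositions: [simd_float3] = []
let minimumSeparation: Float = 0.3 // metres; tune for your content
func handleDetection(of imageAnchor: ARImageAnchor) {
    let detectedPosition = simd_float3(imageAnchor.transform.columns.3.x,
                                       imageAnchor.transform.columns.3.y,
                                       imageAnchor.transform.columns.3.z)
    // Skip if we already placed something near this spot
    let alreadyPlaced = placedPositions.contains { simd_distance($0, detectedPosition) < minimumSeparation }
    guard !alreadyPlaced else { return }
    let nodeHolder = SCNNode(geometry: SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0))
    nodeHolder.simdPosition = detectedPosition
    augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
    placedPositions.append(detectedPosition)
}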
The last part of your question about setWorldOrigin seems a bit odd, as you said, but maybe you could add a Bool to prevent it from being set more than once, e.g.:
var hasSetWorldOrigin = false
Then, based on this, you could ensure that it is only set once, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected, Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    //2. If We Haven't Set The World Origin, Set It Based On The ImageAnchor Transform
    if !hasSetWorldOrigin {
        self.augmentedRealitySession.setWorldOrigin(relativeTransform: currentImageAnchor.transform)
        hasSetWorldOrigin = true
        //3. Create Two Nodes To Add To The Scene And Distribute Them
        let nodeHolderA = SCNNode()
        let nodeGeometryA = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryA.firstMaterial?.diffuse.contents = UIColor.green
        nodeHolderA.geometry = nodeGeometryA
        let nodeHolderB = SCNNode()
        let nodeGeometryB = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryB.firstMaterial?.diffuse.contents = UIColor.red
        nodeHolderB.geometry = nodeGeometryB
        if let cameraTransform = augmentedRealitySession.currentFrame?.camera.transform {
            nodeHolderA.simdPosition = float3(cameraTransform.columns.3.x,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
            nodeHolderB.simdPosition = float3(cameraTransform.columns.3.x + 0.2,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
        }
        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderA)
        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderB)
    }
}
Hopefully my answer provides a useful starting point, assuming of course that I have interpreted your question correctly.

How to color an SCNPlane with 2 different materials?

I have an SCNPlane that I created in the SceneKit editor, and I want one side of the plane to show one image and the other side to show a different image. How do I do that in the SceneKit editor?
So far I've tried adding 2 materials to the plane and unchecking double-sided, but that doesn't work.
Any help would be appreciated!
Per the SCNPlane docs:
The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.
That implies a plane has only one material — isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.
If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
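For instance, a minimal sketch of the two-planes approach (the image names and sizes here are placeholders):
// Front face
let frontPlane = SCNPlane(width: 1, height: 1)
frontPlane.firstMaterial?.diffuse.contents = UIImage(named: "front.png")
frontPlane.firstMaterial?.isDoubleSided = false
// Back face
let backPlane = SCNPlane(width: 1, height: 1)
backPlane.firstMaterial?.diffuse.contents = UIImage(named: "back.png")
backPlane.firstMaterial?.isDoubleSided = false
let frontNode = SCNNode(geometry: frontPlane)
let backNode = SCNNode(geometry: backPlane)
backNode.eulerAngles.y = .pi // face the opposite direction
// Move/rotate the container to move both faces together
let container = SCNNode()
container.addChildNode(frontNode)
container.addChildNode(backNode)
scene.rootNode.addChildNode(container)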
Very easy to do in 2022.
It's very common to do this: you just add the rear as a child node.
To be clear, the node (and the rear you add) should both use a single-sided material.
Obviously, the rear you add points in the other direction!
Do note that they are indeed in "exactly the same place". Folks new to 3D meshes sometimes think the two meshes need to be "a little apart"; not so.
public var rear = SCNNode()
private var theRearPlane = SCNPlane()
private func addRear() {
    addChildNode(rear)
    rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)
    theRearPlane. ... set width, height etc
    theRearPlane.firstMaterial?.isDoubleSided = false
    rear.geometry = theRearPlane
    rear.geometry?.firstMaterial!.diffuse.contents = .. your rear image/etc
}
So ...
///Double-sided sprite
class SCNTwoSidedNode: SCNNode {
    public var rear = SCNNode()
    private var thePlane = SCNPlane()
    override init() {
        super.init()
        thePlane. .. set size, etc
        thePlane.firstMaterial?.isDoubleSided = false
        thePlane.firstMaterial?.transparencyMode = .aOne
        geometry = thePlane
        addRear()
    }
Consuming code can just refer to .rear, for example:
playerNode. ... the drawing of the Druid
playerNode.rear. ... Druid rules and abilities text
enemyNode. ... the drawing of the Mage
enemyNode.rear. ... Mage rules and abilities text
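For reference, here's one fully fleshed-out form of that sketch (the sizes and image names are placeholders, and note that an SCNNode subclass that overrides init() must also implement init?(coder:)):
class SCNTwoSidedNode: SCNNode {
    public let rear = SCNNode()
    override init() {
        super.init()
        let front = SCNPlane(width: 1, height: 1)
        front.firstMaterial?.isDoubleSided = false
        front.firstMaterial?.transparencyMode = .aOne
        front.firstMaterial?.diffuse.contents = UIImage(named: "front.png")
        geometry = front
        let back = SCNPlane(width: 1, height: 1)
        back.firstMaterial?.isDoubleSided = false
        back.firstMaterial?.diffuse.contents = UIImage(named: "back.png")
        rear.geometry = back
        rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)
        addChildNode(rear)
    }
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}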
If you want to do this in the visual editor, it's just as easy: simply add the rear as a child and rotate the child 180 degrees on Y.
Make them both single-sided and put anything you want on the front and rear.
Simply move the main one (the front) normally and everything works.

SceneKit: repeat texture added through SCNShadable

I've added a uniform sampler2D uMySampler; through SCNShadable. I believe I'm not seeing the texture because it isn't set to repeat wrapping.
The sample code I've found does it this way programmatically:
myMat?.diffuse.wrapS = SCNWrapMode.repeat
myMat?.diffuse.wrapT = SCNWrapMode.repeat
But how do I set wrapS on uMySampler?
As a fallback I think I could get away with doing fract(myTexCoord), but that might mess up mipmapping?
let myTexture = SCNMaterialProperty(contents: UIImage(named: "art.scnassets/myTexture.png")!)
myTexture.wrapS = SCNWrapMode.repeat
This is the trick; I'm not sure I find it very intuitive.
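Putting it together: you then bind that SCNMaterialProperty to the sampler name declared in the shader modifier via key-value coding (myMat and the image path are from the question):
let myTexture = SCNMaterialProperty(contents: UIImage(named: "art.scnassets/myTexture.png")!)
myTexture.wrapS = SCNWrapMode.repeat
myTexture.wrapT = SCNWrapMode.repeat
// Bind the property to the uniform declared in the shader modifier
myMat?.setValue(myTexture, forKey: "uMySampler")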

Subclassing SCNNode

I am importing a simple dae file. I want some of the nodes to be a subclass of SCNNode - MySCNNode.
MySCNNode *node = [scnView.scene.rootNode childNodeWithName:@"Box1" recursively:YES];
//additional initialization goes here
Tried casting to (MySCNNode *) too.
But this is not working. "node" is still an SCNNode. Why?
I need to add a few properties and methods to SCNNode, so I subclassed it. I want the nodes from the scene (imported from a dae) to have those properties and behaviour, but the nodes from the scene are always plain SCNNode instances. I want them to be of class MySCNNode.
I understand needing a subclass. And I understand why it's atypical. In my case I'm making an RTS and am creating its "Mission editor", so I can take one scene filled with the various objects created in Blender and build custom scenes in the editor. So I need to know when tiles are buildable, passable (and on which level), etc.
A cast can't change the class of an object that was already created as a plain SCNNode, so you need to create a MySCNNode yourself and copy the relevant data across. This may not be perfect, but it should work:
+ (instancetype)mySCNNodeWithNode:(SCNNode *)node {
    SCNVector3 min, max;
    [node getBoundingBoxMin:&min max:&max];
    MySCNNode *newNode = [MySCNNode node];
    newNode.position = node.position;
    newNode.rotation = node.rotation;
    newNode.transform = node.transform;
    [newNode setBoundingBoxMin:&min max:&max]; // copy the bounding box onto the new node
    newNode.geometry = [node.geometry copy];
    SCNMaterial *material = [node.geometry.firstMaterial copy];
    newNode.geometry.firstMaterial = material;
    return newNode;
}
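For completeness, a rough Swift equivalent of the same idea (MySCNNode's extra property and the helper initializer are illustrative, not from the question):
class MySCNNode: SCNNode {
    var isBuildable = false // example of an extra property the subclass adds
    convenience init(from node: SCNNode) {
        self.init()
        transform = node.transform // carries position, rotation and scale
        geometry = node.geometry?.copy() as? SCNGeometry
        // Re-parent any children so the whole imported hierarchy moves over
        for child in node.childNodes {
            child.removeFromParentNode()
            addChildNode(child)
        }
    }
}
// Usage: swap the imported node for the subclass instance
if let box = scnView.scene?.rootNode.childNode(withName: "Box1", recursively: true) {
    let myNode = MySCNNode(from: box)
    box.parent?.addChildNode(myNode)
    box.removeFromParentNode()
}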
