How to color an SCNPlane with 2 different materials? - SceneKit

I have an SCNPlane that I created in the SceneKit editor, and I want one side of the plane to show a certain image and the other side to show another image. How do I do that in the SceneKit editor?
So far I've tried adding 2 materials to the plane and unchecking double-sided, but that doesn't work.
Any help would be appreciated!

Per the SCNPlane docs:
The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.
That implies a plane has only one material — isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.
If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
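To make the thin-box idea concrete, here is a minimal sketch (my own, not from the original answer; it assumes an iOS target and uses placeholder image names). SCNBox accepts up to six materials, applied in the order front, right, back, left, top, bottom, so the +z and -z faces can carry different images:
import SceneKit
import UIKit
// Sketch only: a very thin box whose front (+z) and back (-z) faces show different images.
let box = SCNBox(width: 1.0, height: 1.0, length: 0.001, chamferRadius: 0)
let frontMaterial = SCNMaterial()
frontMaterial.diffuse.contents = UIImage(named: "front.png")   // placeholder asset name
let backMaterial = SCNMaterial()
backMaterial.diffuse.contents = UIImage(named: "back.png")     // placeholder asset name
let sideMaterial = SCNMaterial()
sideMaterial.diffuse.contents = UIColor.black                  // the four thin edges
// SCNBox applies materials in this order: front, right, back, left, top, bottom.
box.materials = [frontMaterial, sideMaterial, backMaterial,
                 sideMaterial, sideMaterial, sideMaterial]
let twoSidedNode = SCNNode(geometry: box)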

Very easy to do in 2022.
It's very easy and common to do this: you just add the rear as a child node.
To be clear, the node (and the rear you add) should both use single-sided materials.
Obviously, the rear you add points in the other direction!
Do note that they sit in exactly the same place. Folks new to 3D meshes sometimes think the two meshes need to be "a little apart", but that's not so.
public var rear = SCNNode()
private var theRearPlane = SCNPlane()

private func addRear() {
    addChildNode(rear)
    rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)
    // theRearPlane ... set width, height etc. to match the front plane
    theRearPlane.firstMaterial?.isDoubleSided = false
    rear.geometry = theRearPlane
    rear.geometry?.firstMaterial?.diffuse.contents = .. // your rear image/etc.
}
So ...
/// Double-sided sprite
class SCNTwoSidedNode: SCNNode {
    public var rear = SCNNode()
    private var thePlane = SCNPlane()

    override init() {
        super.init()
        // thePlane ... set size, etc.
        thePlane.firstMaterial?.isDoubleSided = false
        thePlane.firstMaterial?.transparencyMode = .aOne
        geometry = thePlane
        addRear()
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // ... plus the addRear() members shown above ...
}
Consuming code can just refer to .rear, for example:
playerNode. ... the drawing of the Druid
playerNode.rear. ... Druid rules and abilities text
enemyNode. ... the drawing of the Mage
enemyNode.rear. ... Mage rules and abilities text
If you want to do this in the visual editor, it's just as easy: simply add the rear as a child and rotate the child 180 degrees on Y.
Make them both single-sided and put anything you want on the front and rear.
Simply move the main one (the front) normally and everything works.

Related

How to use extra_x_ranges, extra_y_ranges with add_tile in bokeh

I want to use long/lat (EPSG:4326) coordinates in a bokeh plot and have a map in the background.
I tried the tile provider maps as suggested in bokeh: Mapping geo data.
But the format is in Web Mercator coordinates (EPSG:3857) and I don't want to convert my coordinates.
The general question of how to do this is unanswered in "Is it possible to set figure axis_type in bokeh to geographical (long/lat)?"
My idea was to use extra axes:
from bokeh.plotting import figure, show
from bokeh.models import Range1d, LinearAxis
from bokeh.tile_providers import CARTODBPOSITRON, get_provider
tile_provider = get_provider(CARTODBPOSITRON)
p = figure(x_range=(-180, 180), y_range=(-90, 90)) # EPSG:4326
# add extra axis
p.extra_x_ranges = {"EPSG:3857x": Range1d(start=-20026376.39, end=20026376.39)}
p.extra_y_ranges = {"EPSG:3857y": Range1d(start=-20048966.10, end=20048966.10)}
# place extra axis
p.add_layout(LinearAxis(x_range_name="EPSG:3857x"), 'above')
p.add_layout(LinearAxis(y_range_name="EPSG:3857y"), 'right')
p.add_tile(tile_provider, x_range_name="EPSG:3857x", y_range_name="EPSG:3857y")
show(p)
But the map is not visible.
Is there a way to use extra axis for a tile_provider?
If you are just asking about displaying lat/lng visually on the axes, then all you have to do is set the axis type to "mercator"
p = figure(x_range=(-2000000, 6000000), y_range=(-1000000, 7000000),
           x_axis_type="mercator", y_axis_type="mercator")
This is demonstrated on the documentation page you linked.
If you are asking about using data that is in lat/lng coordinates to plot on a tile plot, then you will need to convert it to Web Mercator first. The underlying coordinate system for tiles is always Web Mercator.
If you are asking about something else, then your question is not clear (please update to clarify).

What is SceneKit doing between calls to didApplyConstraints and willRenderScene?

The SceneKit rendering loop is well documented here https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However neither of these documents explains what SceneKit does between calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal, the second uses the depth buffer from the first but draws just a subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, gaussian blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes but they both draw everything) the extra 5ms disappears and the overall frame rate is actually faster even though much more rendering is happening.
I'm writing Swift targeting MacOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes, you're invalidating parts of the scene graph, which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
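As a rough sketch of that suggestion (my own code, not from the answer; it assumes the glow clones keep categoryBitMask = 2 as in the question, and that you drive both passes from a renderer you own), you could leave opacity alone and select nodes per pass via the camera's category mask, since the camera only renders nodes whose mask overlaps its own:
import SceneKit
// Sketch only: choose which nodes each pass draws with the camera's
// categoryBitMask instead of toggling node opacity.
// `sceneRenderer` and `pass` are hypothetical names for illustration.
func configureCategoryMask(pass: Int, for sceneRenderer: SCNSceneRenderer) {
    let defaultCategory = 1   // SCNNode.categoryBitMask defaults to 1
    let glowCategory = 2      // the cloned glow nodes from the question

    // Switching the camera's mask selects the node subset for this pass
    // without modifying the scene graph.
    sceneRenderer.pointOfView?.camera?.categoryBitMask =
        (pass == 0) ? defaultCategory : glowCategory
}
You would call this before encoding each of the two render passes from the MTKView draw call and then compare the didApplyConstraints-to-willRenderScene gap.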

ARKit - ARReferenceImage tracking

I'm playing around with ARReferenceImages in ARKit and I'm trying to add an SCNNode when a reference image is recognised and then leave that node in place regardless of whether the same reference image is then recognised elsewhere.
I can add my SCNNode correctly, but if I move my marker it picks it up again and moves my placed node to the position of the marker.
My code to add is as follows:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    print("MAPNODE IS NIL = \(self.mapNode == nil)")

    updateQueue.async {
        if self.mapNode == nil {
            // Create a plane to visualize the initial position of the detected image.
            let plane = SCNPlane(width: 1.2912, height: 1.2912)
            let planeNode = SCNNode(geometry: plane)
            planeNode.opacity = 1

            /*
             `SCNPlane` is vertically oriented in its local coordinate space, but
             `ARImageAnchor` assumes the image is horizontal in its local space, so
             rotate the plane to match.
             */
            planeNode.eulerAngles.x = -.pi / 2

            self.mapNode = planeNode

            /*
             Image anchors are not tracked after initial detection, so create an
             animation that limits the duration for which the plane visualization appears.
             */
            // Add the plane visualization to the scene.
            node.addChildNode(planeNode)
        }
    }
}
Reading the docs here https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience#2958517, it states that:
Apply Best Practices
This example app simply visualizes where ARKit detects each reference
image in the user’s environment, but your app can do much more. Follow
the tips below to design AR experiences that use image detection well.
Use detected images to set a frame of reference for the AR scene.
Instead of requiring the user to choose a place for virtual content,
or arbitrarily placing content in the user’s environment, use detected
images to anchor the virtual scene. You can even use multiple detected
images. For example, an app for a retail store could make a virtual
character appear to emerge from a store’s front door by recognizing
posters placed on either side of the door and then calculating a
position for the character directly between the posters.
Note
Use the ARSession setWorldOrigin(relativeTransform:) method to
redefine the world coordinate system so that you can place all anchors
and other content relative to the reference point you choose.
Design your AR experience to use detected images as a starting point
for virtual content. ARKit doesn’t track changes to the position or
orientation of each detected image. If you try to place virtual
content that stays attached to a detected image, that content may not
appear to stay in place correctly. Instead, use detected images as a
frame of reference for starting a dynamic scene. For example, your app
might recognize theater posters for a sci-fi film and then have
virtual spaceships appear to emerge from the posters and fly around
the environment.
So I tried setting the world origin to be equal to the transform of my image anchor:
self.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
However, my mapNode follows the imageAnchor wherever it moves. I haven't implemented the renderer update method, so I'm not sure why this keeps moving.
I'm assuming that the setWorldOrigin method is constantly updating to the imageAnchor.transform and not just that moment in time, which is weird as that code is only called once. Any ideas?
If you want to add the mapNode at the position of the ARImageAnchor, you could set its position from the anchor's transform and add it to the scene's root node rather than to the node linked to the reference image, if that makes sense.
This could be done like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. An ImageAnchor Is Only Added Once For Each Identified Target
    print("Anchor ID = \(currentImageAnchor.identifier)")

    //3. Add An SCNNode At The Position Of The Identified ImageTarget
    let nodeHolder = SCNNode()
    let nodeGeometry = SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
    nodeHolder.position = SCNVector3(currentImageAnchor.transform.columns.3.x,
                                     currentImageAnchor.transform.columns.3.y,
                                     currentImageAnchor.transform.columns.3.z)

    augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
}
In another part of your question you seem to imply that you want to detect multiple occurrences of the same image. I could be wrong but I think the only way to do this is to remove the corresponding ARImageAnchor for the reference image, which can be done like so (by adding it at the end of the last code snippet):
augmentedRealitySession.remove(anchor: currentImageAnchor)
The issue here is that once the ARImageAnchor is removed, you have to decide, each time it is detected again, whether content should be added. That's tricky because the ARImageAnchor.identifier stays the same for the referenceImage regardless of whether the anchor is removed and re-added, which makes it awkward to key off in a dictionary etc. So, depending on your needs, you'd have to find another way to determine whether content already exists at that location and whether to re-add it.
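One hedged way around that (my own sketch, untested; it keys off ARReferenceImage.name, which assumes you named the images in the asset catalogue, and reuses augmentedRealitySession from the snippets above) is to remember which image names have already received content:
import ARKit
import SceneKit
// Sketch only, inside your ARSCNViewDelegate / view controller.
// `placedImageNames` is a hypothetical property used to track placed content
// per reference-image name, so a re-detected image isn't populated twice.
var placedImageNames = Set<String>()

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let imageName = imageAnchor.referenceImage.name else { return }

    // Only place content the first time this particular image is seen.
    guard !placedImageNames.contains(imageName) else { return }
    placedImageNames.insert(imageName)

    // ... add your content for this image here ...

    // Remove the anchor so the same physical image can be detected again later.
    augmentedRealitySession.remove(anchor: imageAnchor)
}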
The last part of your question about setWorldOrigin seems a bit odd, like you said, but maybe you could add a Bool to prevent it from being changed more than once, e.g.:
var hasSetWorldOrigin = false
Then, based on this, you could ensure that it is only set once, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. If We Haven't Set The World Origin, Set It Based On The ImageAnchor Transform
    if !hasSetWorldOrigin {
        self.augmentedRealitySession.setWorldOrigin(relativeTransform: currentImageAnchor.transform)
        hasSetWorldOrigin = true

        //3. Create Two Nodes To Add To The Scene And Distribute Them
        let nodeHolderA = SCNNode()
        let nodeGeometryA = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryA.firstMaterial?.diffuse.contents = UIColor.green
        nodeHolderA.geometry = nodeGeometryA

        let nodeHolderB = SCNNode()
        let nodeGeometryB = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryB.firstMaterial?.diffuse.contents = UIColor.red
        nodeHolderB.geometry = nodeGeometryB

        if let cameraTransform = augmentedRealitySession.currentFrame?.camera.transform {
            nodeHolderA.simdPosition = float3(cameraTransform.columns.3.x,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
            nodeHolderB.simdPosition = float3(cameraTransform.columns.3.x + 0.2,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
        }

        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderA)
        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderB)
    }
}
Hopefully my answer provides a useful starting point, assuming of course I have interpreted your question correctly.

How can I perform a Hit Test on one specific 3D object in WPF?

I have a SphereMesh (inherits from MeshGeneratorBase as part of the Petzold.Media3D.dll) in my WPF 3D Scene. I also have thousands of ScreenSpaceLines3D objects on that sphere. I want to ignore everything in my scene except the SphereMesh and find out the X-Y-Z coordinate of where my mouse ray intersects with the sphere only. Even if there is another object X between the sphere and the mouse, I still want to know where the mouse would hit the sphere, as if object X didn't exist.
I've tried the below code using HitTest, but as I add thousands/millions of other objects in my scene/world, it becomes extremely slow. And the object obstruction issue is another problem I can't resolve.
What do you recommend?
Current code:
Point mousePos = new Point(x, y);
PointHitTestParameters hitParams = new PointHitTestParameters(mousePos);
VisualTreeHelper.HitTest(
    viewPort, null,
    delegate(HitTestResult hr)
    {
        RayMeshGeometry3DHitTestResult rayHit = hr as RayMeshGeometry3DHitTestResult;
        if (rayHit != null)
        {
            // Mouse hits something
            Console.WriteLine("Point: " + rayHit.PointHit);
        }
        return HitTestResultBehavior.Continue;
    }, hitParams);
Any help?
Thanks.

WPF 3d ImageBrush material help

I'm trying to apply a material to my GeometryModel3D at runtime like so:
var model3D = ShardModelVisual.Content as GeometryModel3D;
var materialGroup = model3D.Material as MaterialGroup;
BitmapImage image;
ResourceLoader.TryLoadImage("pack://application:,,,/AnzSurface;component/path file/img.png", out image, ".png");
var iceBrush = new ImageBrush(image);
var grp = new TransformGroup();
grp.Children.Add(new ScaleTransform(0.25, 0.65, 0.5, 0.5));
grp.Children.Add(new TranslateTransform(0.0, 0.0));
iceBrush.Transform = grp;
var iceMat = new DiffuseMaterial(iceBrush);
materialGroup.Children.Add(iceMat);
That all works fine, and the material gets added.
What I don't understand is how I can map the user's click on the screen to the offsets that need to be applied to the TranslateTransform.
I.e. at the moment, x: -0.25 moves the material backwards along the X axis, but I have NO IDEA how to get that type of coordinate from the user's mouse click...
when I do:
e.MouseDevice.GetPosition(ShardsViewPort3D);
that gives me normal X/Y coords of the mouse click...
Thanks for any help you can give!
It sounds like you want to slide the material around on your geometry when you click on it. Here's how:
Use hit testing to translate your X/Y coordinates from the mouse click into a RayMeshGeometry3DHitTestResult as described in my earlier answer. This will give you the MeshGeometry3D that was hit, the vertices of the triangle that was hit, and the relative position on that triangle.
Look up each vertex index (VertexIndex1, VertexIndex2, VertexIndex3) in the MeshGeometry3D.TextureCoordinates to get the texture coordinates. This will give you three (u,v) pairs as Point objects. Multiply each of the (u,v) pairs by the corresponding weight from the hit test result (VertexWeight1, VertexWeight2, VertexWeight3) and add the pairs together, i.e.:
uMouse = u1 * VertexWeight1 + u2 * VertexWeight2 + u3 * VertexWeight3
vMouse = v1 * VertexWeight1 + v2 * VertexWeight2 + v3 * VertexWeight3
Now you have a point (uMouse, vMouse) that indicates where on your material your mouse was clicked.
If you want a particular point on your texture to move to exactly where the mouse was clicked, just subtract the (uMouse, vMouse) where the mouse was clicked from the (u,v) coordinate of the location in the material you want to appear under the mouse, and set this as your TranslateTransform. If you want to handle dragging, store the computed (uMouse,vMouse) where the drag started and the transform as of the drag start, then as dragging progresses compute the new transform as:
translate = (uMouse,vMouse) - (uMouseDragStart, vMouseDragStart) + origTranslate
In code you'll write this as Point additions. I spelled it out as (u,v) in this explanation because I thought it was easier to understand if I did so. In actuality the code to compute (uMouse, vMouse) will look more like this:
var uv1 = hit.MeshHit.TextureCoordinates[hit.VertexIndex1];
var uv2 = hit.MeshHit.TextureCoordinates[hit.VertexIndex2];
var uv3 = hit.MeshHit.TextureCoordinates[hit.VertexIndex3];
var uvMouse = new Vector(
    uv1.X * hit.VertexWeight1 + uv2.X * hit.VertexWeight2 + uv3.X * hit.VertexWeight3,
    uv1.Y * hit.VertexWeight1 + uv2.Y * hit.VertexWeight2 + uv3.Y * hit.VertexWeight3);
and the code to update the transform during a drag will look something like this:
...
var translate = translateAtDragStart + uvMouse - uvMouseAtDragStart;
... = new TranslateTransform(translate.X, translate.Y);
You'll have to adapt this to the exact situation.
Note that your HitTest callback may be called multiple times, starting at the closest mesh and moving back. It may even be called with 2D hits, for example if a 2D object is in front of your Viewport3D. So you'll want to check each hit to see if it is really what you want; for example, during dragging you want to keep checking the position on the mesh being dragged even if it is no longer foremost. Return HitTestResultBehavior.Stop from your callback once you have acted on the mesh being dragged.
