ARSCNPlaneGeometry update and re-calculate texture coordinates, instead of stretching them - SceneKit

I'm having a problem with the texture coordinates of plane geometries being updated by ARKit: texture images get stretched, and I want to avoid that.
Right now I'm detecting horizontal and vertical walls and applying a texture to them. It's working like a charm...
But when the geometry gets updated because the detected wall/floor extends, the texture coordinates get stretched instead of re-mapped, causing the texture to look stretched like the image below.
You can also see an unedited video of the problem happening: https://www.youtube.com/watch?v=wfwYPwzND74
This is the piece of code where the geometry gets updated:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else {
        return
    }
    let planeGeometry = ARSCNPlaneGeometry(device: device)!
    planeGeometry.update(from: planeAnchor.geometry)
    // I suppose I need to do some texture re-mapping here.
    planeGeometry.materials = node.geometry!.materials
    node.geometry = planeGeometry
}
I have seen that you can define the texture coordinates by providing them as a geometry source, like this:
let textCords: [vector_float2] = [] // array of texture coordinates
let uvData = Data(bytes: textCords, count: textCords.count * MemoryLayout<vector_float2>.size)
let textureSource = SCNGeometrySource(data: uvData,
                                      semantic: .texcoord,
                                      vectorCount: textCords.count,
                                      usesFloatComponents: true,
                                      componentsPerVector: 2,
                                      bytesPerComponent: MemoryLayout<Float>.size,
                                      dataOffset: 0,
                                      dataStride: MemoryLayout<vector_float2>.size)
But I have no idea how to fill the textCords array so that it maps correctly onto the updated planeGeometry.
Edit:
Re-defining the approach:
Thinking more deeply about the problem, I came to the idea that I need to modify the texture's transform to fix the stretching, but then I have two options:
Either keep the texture big enough to fill the entire geometry, while keeping a 1:1 ratio to avoid stretching,
or keep the texture at its original size with a 1:1 aspect ratio and repeat it as many times as needed to fill the entire geometry.
With either approach I'm still lost on how to do it. What would you suggest?

Related

Turn Plotly Surface Plot into a 2D plot on the y plane

I have a dataset that consists of (x, y) pairs and a value v at each (x, y). The data needs to produce a figure that looks like this:
This was created by using a surface plot, changing the eye and up values, and then turning the aspectratio on the z-axis to 0.01:
layout = {{
  ...
  aspectmode: "manual",
  aspectratio: {x: "3", y: "1", z: ".01"},
  scene: {
    ...
    zaxis: {
      visible: false
    }
  }
}}
Notice that the x/y axes are still raised and awkwardly placed. I have two parts to my question:
Is there a better type of plot to show this data this way using Plotly? The end product needs to be the same, but the way I get there can change.
If this is the best way, how do I "lower" the x/y axes to make it look like a 2D plot?
The original reason I went the route of using a surface plot is because of Matlab. When building a surface plot and rotating it to one of the planes (x/y/z), it will essentially turn into a 2D figure.
After a good walk and a look at the documentation, using:
layout = {{
  ...
  scene: {
    ...
    xaxis: {
      ...
      tickangle: 0
    }
  }
}}
removed the '3D' effects. I also changed the z aspectratio to .001.

What is SceneKit doing between calls to didApplyConstraints and willRenderScene?

The SceneKit rendering loop is well documented here https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However, neither of these documents explains what SceneKit does between the calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal, the second uses the depth buffer from the first but draws just a subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, gaussian blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes but they both draw everything) the extra 5ms disappears and the overall frame rate is actually faster even though much more rendering is happening.
I'm writing Swift targeting macOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes you're invalidating parts of the scene graph, which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).

ARKit - ARReferenceImage tracking

I'm playing around with ARReferenceImages in ARKit and I'm trying to add an SCNNode when a reference image is recognised, and then leave that node in place regardless of whether the same reference image is recognised elsewhere.
I can add my SCNNode correctly, but if I move my marker the image is picked up again and my placed node is moved to the marker's new position.
My code to add is as follows:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    print("MAPNODE IS NIL = \(self.mapNode == nil)")
    updateQueue.async {
        if self.mapNode == nil {
            // Create a plane to visualize the initial position of the detected image.
            let plane = SCNPlane(width: 1.2912, height: 1.2912)
            let planeNode = SCNNode(geometry: plane)
            planeNode.opacity = 1
            /*
             `SCNPlane` is vertically oriented in its local coordinate space, but
             `ARImageAnchor` assumes the image is horizontal in its local space, so
             rotate the plane to match.
             */
            planeNode.eulerAngles.x = -.pi / 2
            self.mapNode = planeNode
            /*
             Image anchors are not tracked after initial detection, so create an
             animation that limits the duration for which the plane visualization appears.
             */
            // Add the plane visualization to the scene.
            node.addChildNode(planeNode)
        }
    }
}
Reading the docs here https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience#2958517, they state:
Apply Best Practices
This example app simply visualizes where ARKit detects each reference image in the user's environment, but your app can do much more. Follow the tips below to design AR experiences that use image detection well.
Use detected images to set a frame of reference for the AR scene. Instead of requiring the user to choose a place for virtual content, or arbitrarily placing content in the user's environment, use detected images to anchor the virtual scene. You can even use multiple detected images. For example, an app for a retail store could make a virtual character appear to emerge from a store's front door by recognizing posters placed on either side of the door and then calculating a position for the character directly between the posters.
Note
Use the ARSession setWorldOrigin(relativeTransform:) method to redefine the world coordinate system so that you can place all anchors and other content relative to the reference point you choose.
Design your AR experience to use detected images as a starting point for virtual content. ARKit doesn't track changes to the position or orientation of each detected image. If you try to place virtual content that stays attached to a detected image, that content may not appear to stay in place correctly. Instead, use detected images as a frame of reference for starting a dynamic scene. For example, your app might recognize theater posters for a sci-fi film and then have virtual spaceships appear to emerge from the posters and fly around the environment.
So I tried setting my world origin to be equal to the transform of my image anchor:
self.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
However, my mapNode follows the imageAnchor wherever it moves. I haven't implemented the renderer update method, so I'm not sure why it keeps moving.
I'm assuming that the setWorldOrigin method is constantly updating to the imageAnchor.transform rather than using just that moment in time, which is weird, as that code is only called once. Any ideas?
If you want to add the mapNode at the position of the ARImageAnchor, you could set the position of your mapNode to the transform of the ARImageAnchor and add it to the scene without linking it to the reference image, if that makes sense.
This could be done like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. An ImageAnchor Is Only Added Once For Each Identified Target
    print("Anchor ID = \(currentImageAnchor.identifier)")

    //3. Add An SCNNode At The Position Of The Identified ImageTarget
    let nodeHolder = SCNNode()
    let nodeGeometry = SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
    nodeHolder.position = SCNVector3(currentImageAnchor.transform.columns.3.x,
                                     currentImageAnchor.transform.columns.3.y,
                                     currentImageAnchor.transform.columns.3.z)
    augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
}
In another part of your question you seem to imply that you want to detect multiple occurrences of the same image. I could be wrong, but I think the only way to do this is to remove the corresponding ARImageAnchor for the reference image, which can be done like so (by adding it at the end of the last code snippet):
augmentedRealitySession.remove(anchor: currentImageAnchor)
The issue here is that once the ARImageAnchor is removed, each time the image is detected again you have to decide whether content should be added. This is tricky because ARImageAnchor.identifier is always the same for a given referenceImage, regardless of whether the anchor was removed and re-added, making it difficult to use as a dictionary key etc. Depending on your needs, you would then have to find a way to determine whether content already exists at that location and whether to re-add it, e.g. as sketched below.
The last part of your question about setWorldOrigin seems a bit odd, like you said, but maybe you could add a Bool to prevent it from potentially changing, e.g.:
var hasSetWorldOrigin = false
Then, based on this, you could ensure that it is only set once, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. If We Haven't Set The World Origin, Set It Based On The ImageAnchor Transform
    if !hasSetWorldOrigin {
        self.augmentedRealitySession.setWorldOrigin(relativeTransform: currentImageAnchor.transform)
        hasSetWorldOrigin = true

        //3. Create Two Nodes To Add To The Scene And Distribute Them
        let nodeHolderA = SCNNode()
        let nodeGeometryA = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryA.firstMaterial?.diffuse.contents = UIColor.green
        nodeHolderA.geometry = nodeGeometryA

        let nodeHolderB = SCNNode()
        let nodeGeometryB = SCNBox(width: 0.04, height: 0.04, length: 0.04, chamferRadius: 0)
        nodeGeometryB.firstMaterial?.diffuse.contents = UIColor.red
        nodeHolderB.geometry = nodeGeometryB

        if let cameraTransform = augmentedRealitySession.currentFrame?.camera.transform {
            nodeHolderA.simdPosition = float3(cameraTransform.columns.3.x,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
            nodeHolderB.simdPosition = float3(cameraTransform.columns.3.x + 0.2,
                                              cameraTransform.columns.3.y,
                                              cameraTransform.columns.3.z)
        }

        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderA)
        augmentedRealityView?.scene.rootNode.addChildNode(nodeHolderB)
    }
}
Hopefully my answer provides a useful starting point, assuming of course that I have interpreted your question correctly.

How to color an SCNPlane with 2 different materials?

I have an SCNPlane that I created in the SceneKit editor and I want one side of the plane to have a certain image and the other side of the plane to have another image. How do I do that in the SceneKit editor?
So far I've tried adding 2 materials to the plane, and I tried adding 2 materials while unchecking double-sided, but that doesn't work.
Any help would be appreciated!
Per the SCNPlane docs:
The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.
That implies a plane has only one material — isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.
If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
Very easy to do in 2022.
It's very easy and common to do this, you just add the rear as a child.
To be clear the node (and the rear you add) should both use the single-sided shader.
Obviously, the rear you add points in the other direction!
Do note that they are indeed in "exactly the same place". Sometimes folks new to 3D meshes think the two meshes would need to be "a little apart", but that's not so.
public var rear = SCNNode()
private var theRearPlane = SCNPlane()

private func addRear() {
    addChildNode(rear)
    rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)
    theRearPlane.width = 1   // placeholder size; match your front plane
    theRearPlane.height = 1
    theRearPlane.firstMaterial?.isDoubleSided = false
    rear.geometry = theRearPlane
    rear.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "rearImage") // your rear image/etc
}
So ...
/// Double-sided sprite
class SCNTwoSidedNode: SCNNode {
    public var rear = SCNNode()
    private var thePlane = SCNPlane()

    override init() {
        super.init()
        thePlane.width = 1   // placeholder size, set as needed
        thePlane.height = 1
        thePlane.firstMaterial?.isDoubleSided = false
        thePlane.firstMaterial?.transparencyMode = .aOne
        geometry = thePlane
        addRear() // as defined above
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
Consuming code can just refer to .rear, for example:
playerNode. ... the drawing of the Druid
playerNode.rear. ... Druid rules and abilities text
enemyNode. ... the drawing of the Mage
enemyNode.rear. ... Mage rules and abilities text
If you want to do this in the visual editor - very easy
It's trivial. Simply add the rear as a child. Rotate the child 180 degrees on Y.
It's that easy.
Make them both single-sided and put anything you want on the front and rear.
Simply move the main one (the front) normally and everything works.

OpenLayers 3 Circle radius in meters

How do I get a Circle's radius in meters?
This may be an existing question, but I am not getting the proper result. I am trying to create a Polygon in PostGIS with the same radius and center that I get from an OpenLayers circle.
To get radius in meters I followed this.
Running example link.
var radiusInMeters = circleRadius * ol.proj.METERS_PER_UNIT['m'];
After getting the center and radius (in meters), I am trying to generate a Polygon (WKT) with PostGIS (a server-side job) and draw that feature on the map like this:
select st_astext(st_buffer('POINT(79.25887485937808 17.036647682474722 0)'::geography, 365.70644956827164));
But the two do not cover the same area. Can anybody please let me know where I am going wrong?
Basically, my input/output to/from the Circle will be in meters only.
ol.geom.Circle might not represent a circle
OpenLayers Circle geometries are defined on the projected plane. This means that they are always circular on the map, but the area covered might not represent an actual circle on earth. The actual shape and size of the area covered by the circle will depend on the projection used.
This could be visualized by Tissot's indicatrix, which shows how circular areas on the globe are transformed when projected onto a plane. Using the projection EPSG:3857, this would look like:
The image is from OpenLayers 3's Tissot example and displays areas that all have a radius of 800 000 meters. If these circles were drawn as ol.geom.Circle with a radius of 800000 (using EPSG:3857), they would all be the same size on the map, but the ones closer to the poles would represent a much smaller area of the globe.
This is true for most things with OpenLayers geometries. The radius, length or area of a geometry are all reported in the projected plane.
So if you have an ol.geom.Circle, getting the actual surface radius depends on the projection and the feature's location. For some projections (such as EPSG:4326), there would not be an accurate answer, since the geometry might not even represent a circular area.
However, assuming you are using EPSG:3857 and not drawing extremely big circles or very close to the poles, the Circle will be a good representation of a circular area.
ol.proj.METERS_PER_UNIT
ol.proj.METERS_PER_UNIT is just a conversion table between meters and some other units. ol.proj.METERS_PER_UNIT['m'] will always return 1, since the unit 'm' is meters. EPSG:3857 uses meters as units, but as noted they are distorted towards the poles.
Solution (use after reading and understanding the above)
To get the actual on-the-ground radius of an ol.geom.Circle, you must find the distance between the center of the circle and a point on its edge. This could be done using ol.Sphere:
var center = geometry.getCenter();
var radius = geometry.getRadius();
var edgeCoordinate = [center[0] + radius, center[1]];
var wgs84Sphere = new ol.Sphere(6378137);
var groundRadius = wgs84Sphere.haversineDistance(
    ol.proj.transform(center, 'EPSG:3857', 'EPSG:4326'),
    ol.proj.transform(edgeCoordinate, 'EPSG:3857', 'EPSG:4326')
);
More options
If you wish to add a geometry representing a circular area on the globe, you should consider using the method used in the Tissot example above. That is, defining a regular polygon with enough points to appear smooth. That would make it transferable between projections, and appears to be what you are doing server side. OpenLayers 3 enables this by ol.geom.Polygon.circular:
var circularPolygon = ol.geom.Polygon.circular(wgs84Sphere, center, radius, 64);
There is also ol.geom.Polygon.fromCircle, which takes an ol.geom.Circle and transforms it into a Polygon representing the same area.
My answer is a complement to the great answer by Alvin.
Imagine you want to draw a circle of a given radius (in meters) around a point feature. In my particular case, a 200m circle around a moving vehicle.
If the circle has a small diameter (less than a few kilometers), you can ignore the earth's roundness. Then you can use the "Circle" marker in the style function of your point feature.
Here is my style function:
private pointStyle(feature: Feature, resolution: number): Array<Style> {
    const viewProjection = map.getView().getProjection();
    const coordsInViewProjection = (<Point>(feature.getGeometry())).getCoordinates();
    const longLat = toLonLat(coordsInViewProjection, viewProjection);
    const latitude_rad = longLat[1] * Math.PI / 180.;
    const circle = new Style({
        image: new CircleStyle({
            stroke: new Stroke({color: '#7c8692'}),
            radius: this._circleRadius_m / (resolution / viewProjection.getMetersPerUnit() * Math.cos(latitude_rad)),
        }),
    });
    return [circle];
}
The trick is to scale the radius by the cosine of the latitude. This "locally" disables the distortion effect that can be observed in the Tissot example.
