I'm trying to rotate and move an SCNNode. It acts as if I'm still rotating in local space rather than world space: if I rotate the item and then apply force, it moves in the same direction as before, just pointing in a new direction, rather than 'forward' being redefined by the rotation. What am I doing wrong? Thanks.
```
func applyForce(to node: SCNNode) {
    // `force` is a property of the enclosing type (not shown here)
    let newTransform = SCNMatrix4Translate(node.worldTransform, force.x, force.y, force.z)
    node.setWorldTransform(newTransform)
}

func applyRotation(to node: SCNNode, radians: Float) {
    let newTransform = SCNMatrix4Rotate(node.worldTransform, radians, 0, 1, 0)
    node.setWorldTransform(newTransform)
}
```
I've found an answer: perform the rotation in local space and translate the node in world space. Passing nil as the destination space to simdConvertVector(_:to:) converts to world coordinates. This accomplishes my goal.
```
func applyForce(_ force: SCNVector3, to node: SCNNode) {
    let force = SIMD3<Float>(force)
    // `to: nil` converts from the node's local space to world space
    let worldTranslation = node.simdConvertVector(force, to: nil)
    node.simdPosition += worldTranslation
}

func applyRotation(_ radians: Float, to node: SCNNode) {
    // Rotate about the node's local Y axis
    let newTransform = SCNMatrix4Rotate(node.transform, radians, 0, 1, 0)
    node.transform = newTransform
}
```
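As an aside, SceneKit also exposes a node's forward direction directly. A minimal sketch of the same idea using simdWorldFront and simdWorldPosition (not the code I used above, just an alternative):
```
// simdWorldFront is the node's local -Z ("forward") axis expressed in world space,
// so it already reflects any rotation applied to the node.
func applyForwardForce(_ magnitude: Float, to node: SCNNode) {
    node.simdWorldPosition += node.simdWorldFront * magnitude
}
```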
I am attempting to find a simple way to calculate the depth of a pixel in SceneKit, combining SceneKit with the LiDAR data from
sceneView.session.currentFrame?.smoothedSceneDepth?.depthMap
Ideally I don't want to use Metal shaders. I would prefer to find points in my currentFrame and their corresponding depth-map values, to get the depth of points in SceneKit (ideally in world coordinates, not just local to the frustum at that point in time).
Fast performance isn't necessary, as the calculation won't happen at capture time.
I am aware of the Apple project at link; however, it is far too complex for my needs.
As a starting point, my code works like this:
```
guard let depthData = frame.sceneDepth else { return }

let camera = frame.camera
let depthPixelBuffer = depthData.depthMap
let depthHeight = CVPixelBufferGetHeight(depthPixelBuffer)
let depthWidth = CVPixelBufferGetWidth(depthPixelBuffer)

// toCGImage(scale:), pixelData() and depthValues() are helper extensions
// taken from the Apple sample project linked above.
let resizeScale = CGFloat(depthWidth) / CGFloat(CVPixelBufferGetWidth(frame.capturedImage))
let resizedColorImage = frame.capturedImage.toCGImage(scale: resizeScale)
guard let colorData = resizedColorImage.pixelData() else {
    fatalError()
}

// Scale the intrinsics from the capture resolution down to the depth-map resolution.
var intrinsics = camera.intrinsics
let referenceDimensions = camera.imageResolution
let ratio = Float(referenceDimensions.width) / Float(depthWidth)
intrinsics.columns.0[0] /= ratio
intrinsics.columns.1[1] /= ratio
intrinsics.columns.2[0] /= ratio
intrinsics.columns.2[1] /= ratio

// Map each depth sample to a point in normalized device coordinates.
var points: [SCNVector3] = []
let depthValues = depthPixelBuffer.depthValues()
for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        let z = -depthValues[uu + vv * depthWidth]
        let x = Float32(uu) / Float32(depthWidth) * 2.0 - 1.0
        let y = 1.0 - Float32(vv) / Float32(depthHeight) * 2.0
        points.append(SCNVector3(x, y, z))
    }
}
```
The resulting point cloud looks OK, but it is severely bent on the Z-axis. I realize this code is also not adjusting for screen orientation.
Cupertino kindly got back to me with this response on the forums at developer.apple.com:
The unprojection calculation itself is going to be identical, regardless of whether it is done CPU side or GPU side.
CPU side, the calculation would look something like this:
```
/// Returns a world-space position given a point in the camera image, the eye-space depth
/// (sampled/read from the corresponding point in the depth image), the inverse camera
/// intrinsics, and the inverse view matrix.
func worldPoint(cameraPoint: SIMD2<Float>, eyeDepth: Float, cameraIntrinsicsInversed: simd_float3x3, viewMatrixInversed: simd_float4x4) -> SIMD3<Float> {
    let localPoint = cameraIntrinsicsInversed * simd_float3(cameraPoint, 1) * -eyeDepth
    let worldPoint = viewMatrixInversed * simd_float4(localPoint, 1)
    return (worldPoint / worldPoint.w)[SIMD3(0, 1, 2)]
}
```
Implemented, this looks like:
```
// The view matrix is the same for every sample, so invert it once outside the loop.
let viewMatInverted = (sceneView.session.currentFrame?.camera.viewMatrix(for: UIApplication.shared.statusBarOrientation))!.inverse
let intrinsicsInverted = intrinsics.inverse

for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        let z = -depthValues[uu + vv * depthWidth]
        let world = worldPoint(cameraPoint: SIMD2(Float(uu), Float(vv)),
                               eyeDepth: z,
                               cameraIntrinsicsInversed: intrinsicsInverted,
                               viewMatrixInversed: viewMatInverted * rotateToARCamera)
        points.append(SCNVector3(world))
    }
}
```
The point cloud is pretty messy, the confidence map still needs to be taken into account, and there are vertical gaps where Int rounding has occurred, but it's a solid start. The missing functions come from the Apple demo project linked in the question above.
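For the confidence step, a minimal sketch of what I have in mind (untested, assuming the same frame, depthWidth and depthHeight as above) reads ARDepthData's confidenceMap alongside the depth map and skips low-confidence samples:
```
// The confidence map is a CVPixelBuffer of UInt8 values matching ARConfidenceLevel,
// with the same dimensions as the depth map.
guard let confidenceBuffer = frame.sceneDepth?.confidenceMap else { return }
CVPixelBufferLockBaseAddress(confidenceBuffer, .readOnly)
defer { CVPixelBufferUnlockBaseAddress(confidenceBuffer, .readOnly) }

let base = CVPixelBufferGetBaseAddress(confidenceBuffer)!
let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceBuffer)
let confidence = base.assumingMemoryBound(to: UInt8.self)

for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        // Skip anything below medium confidence.
        guard confidence[vv * bytesPerRow + uu] >= UInt8(ARConfidenceLevel.medium.rawValue) else { continue }
        // ... unproject as above ...
    }
}
```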
I have the following code. It contains a getPointAndPos function that needs to be as fast as possible:
```
struct Point {
    let x: Int
    let y: Int
}

struct PointAndPosition {
    let pnt: Point
    let pos: Int
}

class Elements {
    var points: [Point]

    init(points: [Point]) {
        self.points = points
    }

    func addPoint(x: Int, y: Int) {
        points.append(Point(x: x, y: y))
    }

    func getPointAndPos(pos: Int) -> PointAndPosition? {
        guard pos >= 0 && points.count > pos else {
            return nil
        }
        return PointAndPosition(pnt: points[pos], pos: pos)
    }
}
```
However, due to Swift memory management, it is not fast at all. I used to use a dictionary, but that was even worse. This function is heavily used in the application, so it is now the main bottleneck. Here are the profiling results for the getPointAndPos function:
As you can see, it takes ~4.5 seconds to get an item from the array, which is crazy. I tried to follow all the performance optimization techniques I could find, namely:

- using Array instead of Dictionary
- using simple types as array elements (structs in my case)

It helped, but it is not enough. Is there a way to optimize it even further, considering that I do not change the elements of the array after they are added?
UPDATE #1:
As suggested, I replaced the [Point] array with a [PointAndPosition] one and removed the optionals, which made the code 6 times faster. Also, as requested, here is the code that uses the getPointAndPos function:
```
private func findPoint(el: Elements, point: PointAndPosition, curPos: Int, limit: Int, halfLevel: Int, incrementFunc: (Int) -> Int) -> PointAndPosition? {
    guard curPos >= 0 && curPos < el.points.count else {
        return nil
    }
    // Walk the array from curPos, checking each point until the limit is reached.
    var next = curPos
    while true {
        let pnt = el.getPointAndPos(pos: next)
        if checkPoint(pp: point, pnt: pnt, halfLevel: halfLevel) {
            return pnt
        } else {
            next = incrementFunc(next)
            if next != limit {
                continue
            }
            break
        }
    }
    return nil
}
```
The current implementation is much faster, but ideally I need it to be 30 times faster than it is now. Not sure if that is even possible. Here is the latest profiling result:
I suspect you're creating a PointAndPosition and then immediately throwing it away. That's the thing that's going to create a lot of memory churn. Or you're creating a lot of duplicate PointAndPosition values.
First make sure that this is being built in Release mode with optimizations. ARC can often remove a lot of unnecessary retains and releases when optimized.
If getPointAndPos has to be as fast as possible, then the data should be stored in the form it wants, which is an array of PointAndPosition:
```
class Elements {
    var points: [PointAndPosition]

    init(points: [Point]) {
        self.points = points.enumerated().map { PointAndPosition(pnt: $0.element, pos: $0.offset) }
    }

    func addPoint(x: Int, y: Int) {
        points.append(PointAndPosition(pnt: Point(x: x, y: y), pos: points.endIndex))
    }

    func getPointAndPos(pos: Int) -> PointAndPosition? {
        guard pos >= 0 && points.count > pos else {
            return nil
        }
        return points[pos]
    }
}
```
I'd take this a step further and reduce getPointAndPos to this:
```
func getPointAndPos(pos: Int) -> PointAndPosition {
    points[pos]
}
```
If this is performance critical, then bounds checks should already have been done, and you shouldn't need an Optional here.
I'd also be very interested in the code that calls this. That may be more the issue than this code. It's possible you're calling getPointAndPos more often than you need to. (Though getting rid of the struct creation will make that less important.)
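If the scan loop in findPoint is still the hot spot, one further technique worth trying (a sketch of my own, not something I've profiled against your data) is doing the whole scan inside withUnsafeBufferPointer, so the array's bounds and exclusivity checks aren't repeated on every element access. Here `matches` stands in for your checkPoint call:
```
func findPointFast(el: Elements, curPos: Int, limit: Int,
                   incrementFunc: (Int) -> Int,
                   matches: (PointAndPosition) -> Bool) -> PointAndPosition? {
    return el.points.withUnsafeBufferPointer { (buf) -> PointAndPosition? in
        // One buffer access for the whole scan instead of one check per subscript.
        var next = curPos
        while next >= 0 && next < buf.count {
            let pnt = buf[next]
            if matches(pnt) {
                return pnt
            }
            next = incrementFunc(next)
            if next == limit {
                break
            }
        }
        return nil
    }
}
```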
I have a bunch of markers placed in my scene as child nodes at fixed positions in the 3D world. When I move the phone around, I need to determine which marker node is closest to the 2D screen center, so I can get the text description corresponding to that node and display it.
Right now, in the render loop, I determine each node's distance from the screen center in a forEach loop and check whether that distance is < 150; if so, I get the title and copy of that node. However, this doesn't solve my problem, because multiple nodes can satisfy that condition. I need to compare the distances from the center across all the nodes and get the one node that is closest.
```
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    scene.rootNode.childNodes.filter { $0.name != nil }.forEach { node in
        guard let pointOfView = sceneView.pointOfView else { return }
        let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView)
        if isVisible {
            let nodePos = sceneView.projectPoint(node.position)
            let nodeScreenPos = CGPoint(x: CGFloat(nodePos.x), y: CGFloat(nodePos.y))
            let distance = CGPointDistance(from: nodeScreenPos, to: view.center)
            if distance < 150.0 {
                print("display description of: \(node.name!)")
                guard let title = ar360Experience?.threeSixtyHotspot?[Int(node.name!)!].title else { return }
                guard let copy = ar360Experience?.threeSixtyHotspot?[Int(node.name!)!].copy else { return }
                titleLabel.text = title
                copyLabel.text = copy
                cardView.isHidden = false
            } else {
                cardView.isHidden = true
            }
        }
    }
}
```
There are various ways to do this, such as iterating over all the nodes to find their distances, but as you said, that becomes inefficient.
What you can do instead is store your node data in a different structure, such as a GKQuadtree: https://developer.apple.com/documentation/gameplaykit/gkquadtree
This is a GameplayKit class that lets you search the data set much more quickly, so you can find the node closest to the centre.
It works by breaking the area down into four (hence "quad") sections and storing each node in one of those sections, along with the rect of the section. It then breaks each of those four sections into four more, and so on.
So when you ask for the nodes closest to a given point, it can quickly eliminate the majority of the nodes.
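A minimal sketch of how that could look, assuming the sceneView from your question and a hypothetical markerNodes array holding the named child nodes:
```
import GameplayKit
import SceneKit

// Index the projected screen positions in a GKQuadtree, then query only a small
// region around the screen centre instead of measuring every node exactly.
func closestMarker(to center: CGPoint, in markerNodes: [SCNNode],
                   sceneView: SCNView, searchRadius: Float = 150) -> SCNNode? {
    let bounds = sceneView.bounds
    let quad = GKQuad(quadMin: vector_float2(0, 0),
                      quadMax: vector_float2(Float(bounds.width), Float(bounds.height)))
    let tree = GKQuadtree<SCNNode>(boundingQuad: quad, minimumCellSize: 64)

    for node in markerNodes {
        let p = sceneView.projectPoint(node.position)
        tree.add(node, at: vector_float2(Float(p.x), Float(p.y)))
    }

    // Only the few candidates the tree returns are compared exactly.
    let c = vector_float2(Float(center.x), Float(center.y))
    let region = GKQuad(quadMin: c - vector_float2(searchRadius, searchRadius),
                        quadMax: c + vector_float2(searchRadius, searchRadius))
    return tree.elements(in: region).min { a, b in
        let pa = sceneView.projectPoint(a.position)
        let pb = sceneView.projectPoint(b.position)
        return hypot(CGFloat(pa.x) - center.x, CGFloat(pa.y) - center.y)
             < hypot(CGFloat(pb.x) - center.x, CGFloat(pb.y) - center.y)
    }
}
```
With only a handful of markers, rebuilding the tree each frame won't beat a plain min(by:) over the projected distances; the quadtree pays off as the node count grows.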
I have a main node with a sequence of child nodes. The child nodes are fire points: as a target comes into view, the main node rotates to the target, and the fire point is the offset I need to shoot from. It works fine if I target via some vector classes I built, but it is smoother if I use an SCNConstraint on the main node. The main node (and the fire points) rotate to the target, but the fire points' vector values never change when convertPosition is called, even though I can see the fire nodes rotating along with the base node properly. Thanks
```
func shoot() {
    isShooting = true
    // Convert position so that the projectile fires from the fire point
    let fireNode = gNodes.getNode(vName: attr.name + "FirePoint" + "\(attr.firePointsSequence)", vRequired: true, vError: "FP0-Sproj")
    let fireNodeStart = gNodes.gameNodes.convertPosition(fireNode.presentation.position, from: attr.node)
    print("FireNodePosition: \(fireNodeStart)")
}
```
```
func setTarget() {
    attr.node.constraints = []
    let vConstraint = SCNLookAtConstraint(target: targetNode)
    vConstraint.isGimbalLockEnabled = true
    attr.node.constraints = [vConstraint]
}
```
I had this problem. What solved it for me was using the node's presentation node, which has up-to-date positions.
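Applied to the shoot() function above, that would mean converting between the presentation counterparts on both sides, since constraints like SCNLookAtConstraint only affect the presentation tree. A sketch, assuming the same gNodes/attr names from the question:
```
// Convert using the presentation nodes, because the constraint-driven rotation
// only exists in the presentation tree, not in the model tree.
let fireNodeStart = gNodes.gameNodes.presentation.convertPosition(
    fireNode.presentation.position,
    from: attr.node.presentation)
```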
I'd like to use a Metal compute shader to calculate some positions that are then fed into a Metal vertex shader. Sounds straightforward, but I'm having trouble getting my MTLBuffer data into the Metal-based SCNProgram.
The compute kernel is as follows; in this contrived example it takes in three 3D vectors (in both buffers).
```
kernel void doSimple(const device float3 *inVector [[ buffer(0) ]],
                     device float3 *outVector [[ buffer(1) ]],
                     uint id [[ thread_position_in_grid ]]) {
    float yDisplacement = 0;
    // ...
    outVector[id] = float3(
        inVector[id].x,
        inVector[id].y + yDisplacement,
        inVector[id].z);
}
```
This kernel function is run each frame in the renderer(_:willRenderScene:atTime:) method of my SCNSceneRendererDelegate. There are two buffers, and they get switched after each frame.
Buffers are created as follows:
```
func setupBuffers() {
    positions = [vector_float3(0,0,0), vector_float3(1,0,0), vector_float3(2,0,0)]
    let bufferSize = sizeof(vector_float3) * positions.count
    // Copy the same data into two different buffers for initialisation
    buffer1 = device.newBufferWithBytes(&positions, length: bufferSize, options: .OptionCPUCacheModeDefault)
    buffer2 = device.newBufferWithBytes(&positions, length: bufferSize, options: .OptionCPUCacheModeDefault)
}
```
And the compute shader is run using the following (in the willRenderScene func):
```
let computeCommandBuffer = commandQueue.commandBuffer()
let computeCommandEncoder = computeCommandBuffer.computeCommandEncoder()
computeCommandEncoder.setComputePipelineState(pipelineState)
computeCommandEncoder.setBuffer(buffer1, offset: 0, atIndex: 0)
computeCommandEncoder.setBuffer(buffer2, offset: 0, atIndex: 1)
computeCommandEncoder.dispatchThreadgroups(numThreadgroups, threadsPerThreadgroup: threadsPerGroup)
computeCommandEncoder.endEncoding()
computeCommandBuffer.commit()
computeCommandBuffer.waitUntilCompleted()

// Read the results back to the CPU (this is the round trip I'd like to avoid)
let bufferSize = positions.count * sizeof(vector_float3)
let data = NSData(bytesNoCopy: buffer2.contents(), length: bufferSize, freeWhenDone: false)
var resultArray = [vector_float3](count: positions.count, repeatedValue: vector_float3(0,0,0))
data.getBytes(&resultArray, length: bufferSize)
for outPos in resultArray {
    print(outPos.x, ", ", outPos.y, ", ", outPos.z)
}
```
This works, and I can see my compute shader is updating the y coordinate for each vector in the array.
The scene consists of three evenly spaced spheres. The vertex shader simply takes the position calculated in the compute shader and adds it to each vertex position (well, the y component anyway). I give each sphere an index, and the vertex shader uses this index to pull the appropriate position out of my computed array.
The Metal vertex function is shown below; it's referenced by an SCNProgram and set on the material of each sphere.
```
vertex SimpleVertex simpleVertex(SimpleVertexInput in [[ stage_in ]],
                                 constant SCNSceneBuffer& scn_frame [[ buffer(0) ]],
                                 constant MyNodeBuffer& scn_node [[ buffer(1) ]],
                                 constant MyPositions &myPos [[ buffer(2) ]],
                                 constant uint &index [[ buffer(3) ]])
{
    SimpleVertex vert;
    float3 posOffset = myPos.positions[index];
    float3 pos = float3(in.position.x,
                        in.position.y + posOffset.y,
                        in.position.z);

    vert.position = scn_node.modelViewProjectionTransform * float4(pos, 1.0);
    return vert;
}
```
MyPositions is a simple struct containing an array of float3s.
```
struct MyPositions
{
    float3 positions[3];
};
```
I have no problem passing data to the vertex shader using the setValue method of each sphere's material as shown below (also done in the willRenderScene method). Everything works as expected (the three spheres move upwards).
```
var i0: UInt32 = 0
let index0 = NSData(bytes: &i0, length: sizeof(UInt32))
sphere1Mat.setValue(index0, forKey: "index")
sphere1Mat.setValue(data, forKey: "myPos")
```
BUT this requires the data to be copied from the GPU to the CPU and back to the GPU, which is really something I'd rather avoid. So my question is... how do I pass an MTLBuffer to an SCNProgram?
I have tried the following in willRenderScene, but I get nothing but EXC_BAD...:
```
let renderCommandEncoder = renderer.currentRenderCommandEncoder!
renderCommandEncoder.setVertexBuffer(buffer2, offset: 0, atIndex: 2)
// Note: SceneKit owns this encoder; calling endEncoding() on it is probably
// what triggers the crash.
renderCommandEncoder.endEncoding()
```
Complete example is over on GitHub.
Thanks for reading, I've been struggling with this one. My workaround is to use an MTLTexture in place of an MTLBuffer, as I've been able to pass those into an SCNProgram via the diffuse material property.
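For reference, that texture workaround can look something like the sketch below, where positionTexture is an assumed MTLTexture that the compute kernel writes the positions into:
```
// Wrap the compute-written texture in an SCNMaterialProperty and bind it by name;
// SCNProgram exposes it to the shaders as a texture argument with the same name,
// e.g. texture2d<float> myPosTexture [[ texture(0) ]] in the vertex function.
let positionProperty = SCNMaterialProperty(contents: positionTexture)
sphere1Mat.setValue(positionProperty, forKey: "myPosTexture")
```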
Just switch the bindings of the buffers from step to step.
Step 1:
```
computeCommandEncoder.setBuffer(buffer1, offset: 0, atIndex: 0)
computeCommandEncoder.setBuffer(buffer2, offset: 0, atIndex: 1)
```
Step 2:
```
computeCommandEncoder.setBuffer(buffer1, offset: 0, atIndex: 1)
computeCommandEncoder.setBuffer(buffer2, offset: 0, atIndex: 0)
```
Step 3:
```
computeCommandEncoder.setBuffer(buffer1, offset: 0, atIndex: 0)
computeCommandEncoder.setBuffer(buffer2, offset: 0, atIndex: 1)
```
And so on. The out buffer becomes the new in buffer, and vice versa.
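In code, the ping-pong can be as simple as swapping two references each frame. A sketch using the buffer1/buffer2 names from the question:
```
// Swap which buffer is bound as input and which as output on every pass, so the
// previous frame's output becomes the next frame's input with no CPU copy.
var inBuffer = buffer1
var outBuffer = buffer2

func encodeComputePass(computeCommandEncoder: MTLComputeCommandEncoder) {
    computeCommandEncoder.setBuffer(inBuffer, offset: 0, atIndex: 0)
    computeCommandEncoder.setBuffer(outBuffer, offset: 0, atIndex: 1)
    // ... dispatch threadgroups and end encoding as in the question ...
    swap(&inBuffer, &outBuffer)   // roles reverse for the next frame
}
```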