What is SceneKit doing between calls to didApplyConstraints and willRenderScene?

The SceneKit rendering loop is well documented here https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However, neither of these documents explains what SceneKit does between the calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call, and I can see that around 5 ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work that has to be done there. Any insight into what SceneKit is doing would be very helpful.
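For reference, here is a minimal sketch of the kind of delegate timing used to take that measurement (the class name and log formatting are illustrative, not the code from the question):

import SceneKit
import QuartzCore

class TimingDelegate: NSObject, SCNSceneRendererDelegate {
    private var lastMark = CACurrentMediaTime()

    private func mark(_ label: String) {
        let now = CACurrentMediaTime()
        print(String(format: "%@: %.2f ms since previous callback", label, (now - lastMark) * 1000))
        lastMark = now
    }

    func renderer(_ renderer: SCNSceneRenderer, didApplyConstraintsAtTime time: TimeInterval) {
        mark("didApplyConstraints")
    }

    func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
        mark("willRenderScene")
    }
}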
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal; the second uses the depth buffer from the first but draws just the subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, Gaussian blurred, scaled back up and blended over the top of the first scene (all with custom Metal shaders).
The 5 ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each pass I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code that switches opacity but keep everything else (so there are two rendering passes that both draw everything), the extra 5 ms disappears and the overall frame rate is actually faster, even though much more rendering is happening.
I'm writing Swift targeting macOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5 ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>
struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    // Nudge clip-space z slightly toward the camera so the glow copy passes
    // the depth test against the first pass's geometry.
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}

By changing the opacity of nodes you're invalidating parts of the scene graph, which can result in additional work for the renderer.
It would be interesting to see whether setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
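Here is a minimal sketch of that suggestion for a two-pass setup driven by an SCNRenderer; the node, renderer and pass-descriptor names are placeholders, not code from the question:

let glowCategory = 2

// One-time setup: tag the glow nodes instead of toggling their opacity.
glowNode.categoryBitMask = glowCategory

// Pass 1: render everything except the glow nodes.
cameraNode.camera?.categoryBitMask = ~glowCategory
renderer.render(atTime: time, viewport: viewport,
                commandBuffer: commandBuffer, passDescriptor: mainPassDescriptor)

// Pass 2: render only the glow nodes.
cameraNode.camera?.categoryBitMask = glowCategory
renderer.render(atTime: time, viewport: viewport,
                commandBuffer: commandBuffer, passDescriptor: glowPassDescriptor)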

Related

Modify Metal fragment shading based on vertex world position Y coordinate

I am trying to use a Metal fragment shader with SCNTechnique to modify the fragment color based on the vertex Y world position.
My understanding so far
SCNTechnique can be configured with a sequence of render passes. A render pass allows for injection of a vertex and a fragment shader. These shaders are written in Metal. The Metal Shading Language Specification describes what inputs/outputs are supported for these two.
The vertex shader is called for every vertex that's being rendered. We can pass additional information from the vertex shader to the fragment shader (like position in 3D space, see MSLS section 5.2).
The fragment shader runs closest to the pixel level, and might be called multiple times for a single "pixel" if multiple triangles "qualify" for that pixel. (Usually) after fragment shading, a fragment may be discarded if it fails the depth or stencil test.
What I attempted
This is what I attempted. (I hope it makes clear where my understanding is lacking).
struct VertexOut {
    float4 position [[position]];
};

vertex VertexOut innerVertexShader(VertexIn in [[stage_in]]) {
    VertexOut out;
    out.position = in.position;
    return out;
};

fragment half4 innerFragmentShader(VertexOut in [[stage_in]],
                                   half4 color [[color(0)]]) {
    half4 output;
    output = color; // test to see if getting rendered color works
    output.g = in.position.y; // test to see if getting y works
    return output;
}
These shaders are referenced inside an SCNTechnique dictionary.
[
    "passes": [
        "innerPass": [
            "draw": "DRAW_NODE",
            "node": "inner",
            "metalVertexShader": "innerVertexShader",
            "metalFragmentShader": "innerFragmentShader"
        ]
    ],
    "sequence": ["innerPass"],
    "symbols": [:],
    "targets": [:],
]
// ...
let technique = SCNTechnique(dictionary: techniqueDictionary)
This does the following: the technique is instantiated correctly and attached to the scene (I can tell because it affects the rendering). But it appears not to apply the camera transform or node position transform to the vertices, and instead renders each node as being viewed from (0,0,1) at position (0,0,0). The colors are wrong too. If I remove the shaders from the SCNTechnique, everything renders as I would expect.
How can I leverage regular SceneKit behavior (camera transform etc.) and only modify the color output based on the fragment's world y position? I'd expect that needs to happen at the fragment level, using the world position somehow obtained in the vertex shader. I have searched for things like "Metal basic vertex shader" and have come up with naught.
I have seen shaders like this but I'm convinced I should be able to rely on SceneKit rendering for stuff like lighting, PBR materials, camera transforms, etc. At this point I feel like whenever I search for some Metal topic, I end up on the same websites which haven't succeeded yet in taking my understanding to the next level. So, any new/additional resources are appreciated as well.
Background
For the past two months I have been working on my own game project, which uses SceneKit as the main graphics framework. I have turned to SCNTechnique and Metal shaders for custom effects. These last two in particular have given me solid headaches, owing to the lack of sample code, documentation, and runtime feedback.
I have considered moving to Unity/Unreal or even cancelling this project altogether because of this. But because I'm stubborn and also because I really don't want to port my Swift code to C#/C++, I haven't given up on SceneKit yet.
Having spent the past couple of days investigating this topic, my understanding of vertex and fragment shading and how SceneKit tackles these things has developed significantly.
As @mnuages pointed out in a comment, for this use case shader modifiers are the way to go. They leverage default SceneKit shading (as asked by OP) and allow for shader code injection.
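For example, here is a minimal sketch of a fragment-entry shader modifier that tints by world-space y; the material variable, threshold and colour are illustrative, and the world position is recovered from the view-space _surface.position via scn_frame.inverseViewTransform:

let tintAboveY = """
#pragma body
// _surface.position is in view space; bring it back to world space.
float3 worldPos = (scn_frame.inverseViewTransform * float4(_surface.position, 1.0)).xyz;
// Tint green above y = 1.0 (illustrative threshold).
_output.color.rgb = mix(_output.color.rgb, float3(0.0, 1.0, 0.0), step(1.0, worldPos.y));
"""
material.shaderModifiers = [.fragment: tintAboveY]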
Additional information
To compensate for some of the limitations of SceneKit documentation, I’ll elaborate a bit for other people looking into the subject.
For more information on how the shader modifiers tie into SceneKit default vertex/fragment shaders, see my answer to a related question or SceneKit's default shaders. The second link demonstrates the extent of SceneKit’s rendering logic that you get for free when leveraging shader modifiers instead of writing your own shader.
This page helped me build an understanding of the different stages of vector transforms from vertex to fragment (model space ➡️ world space ➡️ camera space ➡️ projection space).
Alternate approach (custom shader)
If you want a single pass with a fully customized shader, here is a simple example. It passes the world-space y position from the vertex shader to the fragment shader.
// Shaders.metal file in your Xcode project
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>
typedef struct {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
} commonprofile_node;

struct VertexIn {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
};

struct VertexOut {
    float4 fragmentPosition [[position]];
    float height;
};

vertex VertexOut myVertex(VertexIn in [[stage_in]],
                          constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                          constant commonprofile_node& scn_node [[buffer(1)]]) {
    VertexOut out;
    float4 position = float4(in.position, 1.f);
    out.fragmentPosition = scn_frame.viewProjectionTransform * scn_node.modelTransform * position;
    // store world position for fragment shading
    out.height = (scn_node.modelTransform * position).y;
    return out;
}

fragment half4 myFragment(VertexOut in [[stage_in]]) {
    return half4(in.height);
}
let dictionary: [String: Any] = [
    "passes": [
        "y": [
            "draw": "DRAW_SCENE",
            "inputs": [:],
            "outputs": [
                "color": "COLOR"
            ],
            "metalVertexShader": "myVertex",
            "metalFragmentShader": "myFragment",
        ]
    ],
    "sequence": ["y"],
    "symbols": [:]
]
let technique = SCNTechnique(dictionary: dictionary)
scnView.technique = technique
You could combine this render pass with other passes (see SCNTechnique).

How do I use SDL_LockTexture() to update dirty rectangles?

I'm migrating an application from SDL 1.2 to 2.0, and it keeps an array of dirty rectangles to determine which parts of its SDL_Surface to draw to the screen. I'm trying to find the best way to integrate this with SDL 2's SDL_Texture.
Here's how the SDL 1.2 driver is working: https://gist.github.com/nikolas/1bb8c675209d2296a23cc1a395a32a0d
And here's how I'm getting changes from the surface to the texture in SDL 2:
void *pixels;
int pitch;

SDL_LockTexture(_sdl_texture, NULL, &pixels, &pitch);
memcpy(pixels, _sdl_surface->pixels, pitch * _sdl_surface->h);
SDL_UnlockTexture(_sdl_texture);

for (int i = 0; i < num_dirty_rects; i++) {
    SDL_RenderCopy(_sdl_renderer, _sdl_texture, &_dirty_rects[i], &_dirty_rects[i]);
}
SDL_RenderPresent(_sdl_renderer);
I'm just updating the entire texture, but then taking advantage of the dirty rectangles in the RenderCopy(). Is there a better way to do things here, only updating the dirty rectangles? Will I run into problems calling SDL_LockTexture and SDL_UnlockTexture up to a hundred times every frame, or is that how they're meant to be used?
SDL_LockTexture accepts an SDL_Rect param which I could use here, but then it's unclear to me how to get the appropriate rect from _sdl_surface->pixels. How would I copy out just a small rect from this pixel data of the entire screen?
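One way to do it, sketched here rather than taken from the original thread: lock only the dirty region of the texture, then copy the matching rows out of the surface, stepping by each side's pitch. The function below assumes the texture and surface share the same pixel format.

#include <string.h>
#include "SDL.h"

static void update_dirty_rect(SDL_Texture *texture, SDL_Surface *surface, const SDL_Rect *rect)
{
    void *pixels;
    int pitch;
    int bpp = surface->format->BytesPerPixel;

    /* Lock only the dirty region; `pixels` points at its top-left corner. */
    if (SDL_LockTexture(texture, rect, &pixels, &pitch) != 0)
        return;

    const Uint8 *src = (const Uint8 *)surface->pixels
                     + rect->y * surface->pitch
                     + rect->x * bpp;
    Uint8 *dst = (Uint8 *)pixels;

    /* Copy row by row: the texture pitch and the surface pitch may differ. */
    for (int row = 0; row < rect->h; ++row) {
        memcpy(dst, src, (size_t)rect->w * bpp);
        src += surface->pitch;
        dst += pitch;
    }

    SDL_UnlockTexture(texture);
}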

How to color a scnplane with 2 different materials?

I have an SCNPlane that I created in the SceneKit editor and I want one side of the plane to have a certain image and the other side of the plane to have another image. How do I do that in the SceneKit editor?
So far I've tried adding 2 materials to the plane and unchecking double-sided, but that doesn't work.
Any help would be appreciated!
Per the SCNPlane docs:
The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.
That implies a plane has only one material — isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.
If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
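A minimal sketch of the two-planes approach; the sizes and image variables are placeholders:

let front = SCNNode(geometry: SCNPlane(width: 1, height: 1))
let back = SCNNode(geometry: SCNPlane(width: 1, height: 1))
back.eulerAngles.y = .pi // face the other way
front.geometry?.firstMaterial?.diffuse.contents = frontImage
back.geometry?.firstMaterial?.diffuse.contents = backImage

let card = SCNNode() // container: move and rotate this node
card.addChildNode(front)
card.addChildNode(back)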
Very easy to do in 2022.
It's very easy and common to do this: you just add the rear as a child.
To be clear, the node (and the rear you add) should both use a single-sided material.
Obviously, the rear you add points in the other direction!
Do note that they are indeed in "exactly the same place". Sometimes folks new to 3D meshes think the two meshes would need to be "a little apart"; not so.
public var rear = SCNNode()
private var theRearPlane = SCNPlane()

private func addRear() {
    addChildNode(rear)
    rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)
    theRearPlane. ... set width, height etc
    theRearPlane.firstMaterial?.isDoubleSided = false
    rear.geometry = theRearPlane
    rear.geometry?.firstMaterial!.diffuse.contents = .. your rear image/etc
}
So ...
///Double-sided sprite
class SCNTwoSidedNode: SCNNode {
    public var rear = SCNNode()
    private var thePlane = SCNPlane()

    override init() {
        super.init()
        thePlane. .. set size, etc
        thePlane.firstMaterial?.isDoubleSided = false
        thePlane.firstMaterial?.transparencyMode = .aOne
        geometry = thePlane
        addRear()
    }

    // Required by NSCoding when subclassing SCNNode with a custom init.
    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
Consuming code can just refer to .rear, for example:
playerNode. ... the drawing of the Druid
playerNode.rear. ... Druid rules and abilities text
enemyNode. ... the drawing of the Mage
enemyNode.rear. ... Mage rules and abilities text
If you want to do this in the visual editor - very easy
It's trivial. Simply add the rear as a child. Rotate the child 180 degrees on Y.
It's that easy.
Make them both single-sided and put anything you want on the front and rear.
Simply move the main one (the front) normally and everything works.

DirectX9 Use Geometry Instancing for a Mesh with multiple materials

I am trying to write flexible geometry instancing code able to handle meshes with multiple materials. For a mesh with one material everything is fine: I manage to render as many instances as I want with a single draw call.
Things get a bit more complicated with multiple materials. My mesh comes from an .x file. It has one vertex buffer and one index buffer, but several materials. The indices to render for each subset (material) are stored in an attribute array.
Here is the code I use:
d3ddev->SetVertexDeclaration( m_vertexDeclaration );
d3ddev->SetIndices( m_indexBuffer );
d3ddev->SetStreamSourceFreq(0, (D3DSTREAMSOURCE_INDEXEDDATA | m_numInstancesToDraw ));
d3ddev->SetStreamSource(0, m_vertexBuffer, 0, D3DXGetDeclVertexSize( m_geometryElements, 0 ) );
d3ddev->SetStreamSourceFreq(1, (D3DSTREAMSOURCE_INSTANCEDATA | 1ul));
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );

m_effect->Begin(NULL, NULL); // begin using the effect
m_effect->BeginPass(0);      // begin the pass

for( DWORD i = 0; i < m_numMaterials; ++i ) // loop through each subset
{
    d3ddev->SetMaterial(&m_materials[i]); // set the material for the subset
    if(m_textures[i] != NULL)
    {
        d3ddev->SetTexture( 0, m_textures[i] );
    }

    d3ddev->DrawIndexedPrimitive(
        D3DPT_TRIANGLELIST,            // Type
        0,                             // BaseVertexIndex
        m_attributes[i].VertexStart,   // MinIndex
        m_attributes[i].VertexCount,   // NumVertices
        m_attributes[i].FaceStart * 3, // StartIndex
        m_attributes[i].FaceCount      // PrimitiveCount
    );
}

m_effect->EndPass();
m_effect->End();

d3ddev->SetStreamSourceFreq(0,1);
d3ddev->SetStreamSourceFreq(1,1);
This code works for the first material only. By "the first" I mean the one at index 0, because if I start my loop with the second material, it is not rendered. However, by debugging the vertex buffer in PIX, I can see all my materials being processed properly. So something happens after the vertex shader.
Another weird issue: all my materials will be rendered if I set the stream source containing the instance data to a vertex size of zero.
So instead of this:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );
I replace it by:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, 0 );
But of course, with this code, all my instances are rendered at the same position since I reuse the same instance data over and over again.
And one last point: everything works fine if I create my device with D3DCREATE_SOFTWARE_VERTEXPROCESSING. Only hardware vertex processing has the issue, and unfortunately DirectX does not report any problem in debug mode.
See the Shader Model 3 docs:
If you are implementing shaders in hardware, you may not use vs_3_0 or ps_3_0 with any other shader versions, and you may not use either shader type with the fixed function pipeline. These changes make it possible to simplify drivers and the runtime. The only exception is that software-only vs_3_0 shaders may be used with any pixel shader version.
I had the same problem, and in my case it was the memory pool of the instancing mesh. I originally had this mesh in D3DPOOL_SYSTEMMEM, but the instanced mesh in D3DPOOL_DEFAULT. When I changed the instancing mesh to sit in default-pool memory, everything worked as desired.
Hope it helps.
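For illustration, creating the per-instance stream in the default pool might look like this; the buffer variable, instance count and InstanceData struct are assumptions, not code from the answer:

// Keep the per-instance data in D3DPOOL_DEFAULT for hardware vertex processing.
IDirect3DVertexBuffer9* instanceBuffer = NULL;
HRESULT hr = d3ddev->CreateVertexBuffer(
    numInstances * sizeof(InstanceData), // size of the whole instance stream
    D3DUSAGE_WRITEONLY,                  // usage
    0,                                   // 0: layout comes from the vertex declaration, not an FVF
    D3DPOOL_DEFAULT,                     // default pool rather than D3DPOOL_SYSTEMMEM
    &instanceBuffer,
    NULL);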

How do I convert a WPF size to physical pixels?

What's the best way to convert a WPF (resolution-independent) width and height to physical screen pixels?
I'm showing WPF content in a WinForms Form (via ElementHost) and trying to work out some sizing logic. I've got it working fine when the OS is running at the default 96 dpi. But it won't work when the OS is set to 120 dpi or some other resolution, because then a WPF element that reports its Width as 96 will actually be 120 pixels wide as far as WinForms is concerned.
I couldn't find any "pixels per inch" settings on System.Windows.SystemParameters. I'm sure I could use the WinForms equivalent (System.Windows.Forms.SystemInformation), but is there a better way to do this (read: a way using WPF APIs, rather than using WinForms APIs and manually doing the math)? What's the "best way" to convert WPF "pixels" to real screen pixels?
EDIT: I'm also looking to do this before the WPF control is shown on the screen. It looks like Visual.PointToScreen could be made to give me the right answer, but I can't use it, because the control isn't parented yet and I get InvalidOperationException "This Visual is not connected to a PresentationSource".
Transforming a known size to device pixels
If your visual element is already attached to a PresentationSource (for example, it is part of a window that is visible on screen), the transform is found this way:
var source = PresentationSource.FromVisual(element);
Matrix transformToDevice = source.CompositionTarget.TransformToDevice;
If not, use HwndSource to create a temporary hWnd:
Matrix transformToDevice;
using(var source = new HwndSource(new HwndSourceParameters()))
transformToDevice = source.CompositionTarget.TransformToDevice;
Note that this is less efficient than constructing using a hWnd of IntPtr.Zero but I consider it more reliable because the hWnd created by HwndSource will be attached to the same display device as an actual newly-created Window would. That way, if different display devices have different DPIs you are sure to get the right DPI value.
Once you have the transform, you can convert any size from a WPF size to a pixel size:
var pixelSize = (Size)transformToDevice.Transform((Vector)wpfSize);
Converting the pixel size to integers
If you want to convert the pixel size to integers, you can simply do:
int pixelWidth = (int)pixelSize.Width;
int pixelHeight = (int)pixelSize.Height;
but a more robust solution would be the one used by ElementHost:
int pixelWidth = (int)Math.Max(int.MinValue, Math.Min(int.MaxValue, pixelSize.Width));
int pixelHeight = (int)Math.Max(int.MinValue, Math.Min(int.MaxValue, pixelSize.Height));
Getting the desired size of a UIElement
To get the desired size of a UIElement you need to make sure it is measured. In some circumstances it will already be measured, either because:
You measured it already
You measured one of its ancestors, or
It is part of a PresentationSource (eg it is in a visible Window) and you are executing below DispatcherPriority.Render so you know measurement has already happened automatically.
If your visual element has not been measured yet, you should call Measure on the control or one of its ancestors as appropriate, passing in the available size (or new Size(double.PositiveInfinity, double.PositiveInfinity) if you want to size to content):
element.Measure(availableSize);
Once the measuring is done, all that is necessary is to use the matrix to transform the DesiredSize:
var pixelSize = (Size)transformToDevice.Transform((Vector)element.DesiredSize);
Putting it all together
Here is a simple method that shows how to get the pixel size of an element:
public Size GetElementPixelSize(UIElement element)
{
    Matrix transformToDevice;
    var source = PresentationSource.FromVisual(element);
    if (source != null)
    {
        transformToDevice = source.CompositionTarget.TransformToDevice;
    }
    else
    {
        // Renamed from `source` to avoid shadowing the outer local.
        using (var hwndSource = new HwndSource(new HwndSourceParameters()))
            transformToDevice = hwndSource.CompositionTarget.TransformToDevice;
    }

    if (element.DesiredSize == new Size())
        element.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));

    return (Size)transformToDevice.Transform((Vector)element.DesiredSize);
}
Note that in this code I call Measure only if no DesiredSize is present. This provides a convenient method to do everything but has several deficiencies:
It may be that the element's parent would have passed in a smaller availableSize
It is inefficient if the actual DesiredSize is zero (it is remeasured repeatedly)
It may mask bugs in a way that causes the application to fail due to unexpected timing (eg. the code being called at or above DispatcherPriority.Render)
Because of these reasons, I would be inclined to omit the Measure call in GetElementPixelSize and just let the client do it.
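A usage sketch along those lines, with the caller doing the measuring; the element and elementHost names are placeholders:

// Measure explicitly with the available size, then convert to device pixels.
element.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
Size pixelSize = GetElementPixelSize(element);

// For example, size a hosting WinForms ElementHost to match.
elementHost.Size = new System.Drawing.Size((int)pixelSize.Width, (int)pixelSize.Height);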
Simple proportion between Screen.WorkingArea and SystemParameters.WorkArea:
private double PointsToPixels(double wpfPoints, LengthDirection direction)
{
    if (direction == LengthDirection.Horizontal)
    {
        return wpfPoints * Screen.PrimaryScreen.WorkingArea.Width / SystemParameters.WorkArea.Width;
    }
    else
    {
        return wpfPoints * Screen.PrimaryScreen.WorkingArea.Height / SystemParameters.WorkArea.Height;
    }
}

private double PixelsToPoints(int pixels, LengthDirection direction)
{
    if (direction == LengthDirection.Horizontal)
    {
        return pixels * SystemParameters.WorkArea.Width / Screen.PrimaryScreen.WorkingArea.Width;
    }
    else
    {
        return pixels * SystemParameters.WorkArea.Height / Screen.PrimaryScreen.WorkingArea.Height;
    }
}

public enum LengthDirection
{
    Vertical,   // |
    Horizontal  // ——
}
This works fine with multiple monitors as well.
I found a way to do it, but I don't like it much:
using (var graphics = Graphics.FromHwnd(IntPtr.Zero))
{
    var pixelWidth = (int)(element.DesiredSize.Width * graphics.DpiX / 96.0);
    var pixelHeight = (int)(element.DesiredSize.Height * graphics.DpiY / 96.0);
    // ...
}
I don't like it because (a) it requires a reference to System.Drawing, rather than using WPF APIs; and (b) I have to do the math myself, which means I'm duplicating WPF's implementation details. In .NET 3.5, I have to truncate the result of the calculation to match what ElementHost does with AutoSize=true, but I don't know whether this will still be accurate in future versions of .NET.
This does seem to work, so I'm posting it in case it helps others. But if anyone has a better answer, please, post away.
Just did a quick lookup in the Object Browser and found something quite interesting: System.Windows.Forms.AutoScaleMode has a value called Dpi. Here are the docs, it might be what you are looking for:

public const System.Windows.Forms.AutoScaleMode Dpi = 2
Member of System.Windows.Forms.AutoScaleMode

Summary: Controls scale relative to the display resolution. Common resolutions are 96 and 120 DPI.

Apply that to your form, it should do the trick.
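In code form that is a one-liner; this sketch assumes you set it from inside the hosting Form:

// In the WinForms Form that hosts the WPF content:
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi;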
{enjoy}
