Modify Metal fragment shading based on vertex world position Y coordinate - scenekit

I am trying to use a Metal fragment shader with SCNTechnique to modify the fragment color based on the vertex Y world position.
My understanding so far
SCNTechnique can be configured with a sequence of render passes. A render pass allows for injection of a vertex and a fragment shader. These shaders are written in Metal. The Metal Shading Language Specification describes what inputs/outputs are supported for these two.
The vertex shader is called for every vertex that's being rendered. We can pass additional information from the vertex shader to the fragment shader (like position in 3D space, see MSLS section 5.2).
The fragment shader runs closest to the pixel level, and it might be called multiple times for a single "pixel" if multiple triangles "qualify" for that pixel. (Usually) after fragment shading, a fragment might be discarded if it fails the depth or stencil test.
What I attempted
This is what I attempted. (I hope it makes clear where my understanding is lacking).
struct VertexOut {
    float4 position [[position]];
};

vertex VertexOut innerVertexShader(VertexIn in [[stage_in]]) {
    VertexOut out;
    out.position = in.position;
    return out;
};

fragment half4 innerFragmentShader(VertexOut in [[stage_in]],
                                   half4 color [[color(0)]]) {
    half4 output;
    output = color;           // test to see if getting rendered color works
    output.g = in.position.y; // test to see if getting y works
    return output;
}
These shaders are referenced inside an SCNTechnique dictionary.
[
    "passes": [
        "innerPass": [
            "draw": "DRAW_NODE",
            "node": "inner",
            "metalVertexShader": "innerVertexShader",
            "metalFragmentShader": "innerFragmentShader"
        ]
    ],
    "sequence": ["innerPass"],
    "symbols": [:],
    "targets": [:],
]
// ...
let technique = SCNTechnique(dictionary: techniqueDictionary)
This does the following: the technique is instantiated correctly and attached to the scene (I can tell because it affects the rendering). But it appears not to apply the camera transform or the node position transforms to the vertices; instead it renders each node as if viewed from (0,0,1) at position (0,0,0). The colors are wrong as well. If I remove the shaders from the SCNTechnique, everything renders as I would expect.
How can I leverage regular SceneKit behavior (camera transform etc.), and only modify the color output based on the fragments' y world position? I'd expect that needs to happen on a fragment level, using the world position somehow obtained in the vertex shader. I have searched for things like "Metal basic vertex shader" and have come up with naught.
I have seen shaders like this but I'm convinced I should be able to rely on SceneKit rendering for stuff like lighting, PBR materials, camera transforms, etc. At this point I feel like whenever I search for some Metal topic, I end up on the same websites which haven't succeeded yet in taking my understanding to the next level. So, any new/additional resources are appreciated as well.
Background
For the past two months I have been working on my own game project, which uses SceneKit as the main graphics framework. I have turned to SCNTechnique and Metal shaders for custom effects. These two in particular have given me solid headaches, largely because of the lack of sample code, documentation, and runtime feedback.
I have considered moving to Unity/Unreal or even cancelling this project altogether because of this. But because I'm stubborn and also because I really don't want to port my Swift code to C#/C++, I haven't given up on SceneKit yet.

After spending the past couple of days investigating this topic, my understanding of vertex and fragment shading, and of how SceneKit tackles these things, has developed significantly.
As @mnuages pointed out in a comment, for this use case shader modifiers are the way to go. They leverage the default SceneKit shading (as asked by the OP) and allow for shader code injection.
Additional information
To compensate for some of the limitations of SceneKit documentation, I’ll elaborate a bit for other people looking into the subject.
For more information on how the shader modifiers tie into SceneKit default vertex/fragment shaders, see my answer to a related question or SceneKit's default shaders. The second link demonstrates the extent of SceneKit’s rendering logic that you get for free when leveraging shader modifiers instead of writing your own shader.
This page helped me build an understanding of the different stages of vector transforms from vertex to fragment (model space ➡️ world space ➡️ camera space ➡️ projection space).
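To make that concrete, here is a minimal sketch of the shader-modifier route. It assumes `material` is the SCNMaterial already applied to your node, and it relies on the view-space `_surface.position` and `scn_frame.inverseViewTransform` that SCNShadable exposes to shader modifiers (check the docs for your SceneKit version); the 0...1 mapping of the Y value is an arbitrary choice you would adapt to your scene.

// Sketch: darken the surface color based on the fragment's world-space Y,
// while SceneKit keeps handling camera transforms, lighting and PBR.
let surfaceModifier = """
#pragma body
// _surface.position is in view (camera) space; bring it back into world space.
float4 worldPosition = scn_frame.inverseViewTransform * float4(_surface.position, 1.0);
float factor = clamp(worldPosition.y, 0.0, 1.0); // scene-specific mapping of world Y into 0...1
_surface.diffuse.rgb *= float3(factor);          // darker near y = 0, untouched at y >= 1
"""

material.shaderModifiers = [.surface: surfaceModifier]  // `material` is your existing SCNMaterial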
Alternate approach (custom shader)
If you want to have a single pass with a fully customized shader, this is a simple example. It passes the world-space y position from the vertex shader to the fragment shader.
// Shaders.metal file in your Xcode project
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

typedef struct {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
} commonprofile_node;

struct VertexIn {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
};

struct VertexOut {
    float4 fragmentPosition [[position]];
    float height;
};

vertex VertexOut myVertex(
    VertexIn in [[stage_in]],
    constant SCNSceneBuffer& scn_frame [[buffer(0)]],
    constant commonprofile_node& scn_node [[buffer(1)]]
) {
    VertexOut out;
    float4 position = float4(in.position, 1.f);
    out.fragmentPosition = scn_frame.viewProjectionTransform * scn_node.modelTransform * position;
    // store world position for fragment shading
    out.height = (scn_node.modelTransform * position).y;
    return out;
}

fragment half4 myFragment(VertexOut in [[stage_in]]) {
    return half4(in.height);
}
let dictionary: [String: Any] = [
    "passes" : [
        "y" : [
            "draw" : "DRAW_SCENE",
            "inputs" : [:],
            "outputs" : [
                "color" : "COLOR"
            ],
            "metalVertexShader": "myVertex",
            "metalFragmentShader": "myFragment",
        ]
    ],
    "sequence" : ["y"],
    "symbols" : [:]
]
let technique = SCNTechnique(dictionary: dictionary)
scnView.technique = technique
You could combine this render pass with other passes (see SCNTechnique).
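For example, here is a hedged sketch of a two-pass dictionary (I haven't verified this exact setup): the first pass renders the scene with the shaders above into an intermediate color target, and a second DRAW_QUAD pass composites that target with SceneKit's regular output. The compositeVertex/compositeFragment function names and the heightColor target name are made up for illustration; you would supply those Metal functions yourself.

let twoPassDictionary: [String: Any] = [
    "passes": [
        "y": [
            "draw": "DRAW_SCENE",
            "metalVertexShader": "myVertex",
            "metalFragmentShader": "myFragment",
            "outputs": ["color": "heightColor"]       // render into the intermediate target
        ],
        "composite": [
            "draw": "DRAW_QUAD",
            "metalVertexShader": "compositeVertex",   // hypothetical full-screen quad shaders
            "metalFragmentShader": "compositeFragment",
            // keys are the texture symbol names used by the composite shaders
            "inputs": ["heightColor": "heightColor", "sceneColor": "COLOR"],
            "outputs": ["color": "COLOR"]
        ]
    ],
    "sequence": ["y", "composite"],
    "targets": ["heightColor": ["type": "color"]],
    "symbols": [:]
]
scnView.technique = SCNTechnique(dictionary: twoPassDictionary)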

Related

ARSCNPlaneGeometry update and re-calculate texture coordinates, instead of stretching them

I'm having a problem with the texture coordinates of plane geometries being updated by ARKit: texture images get stretched, and I want to avoid that.
Right now I'm detecting horizontal and vertical walls and applying a texture to them. It's working like a charm...
But when the geometry gets updated because it extends the detected wall/floor, the texture coordinates get stretched instead of being re-mapped, causing the texture to look stretched like the image below.
You can also see an un-edited video of the problem happening: https://www.youtube.com/watch?v=wfwYPwzND74
This is the piece of code where the geometry gets updated:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else {
        return
    }
    let planeGeometry = ARSCNPlaneGeometry(device: device)!
    planeGeometry.update(from: planeAnchor.geometry)
    // I suppose I need to do some texture re-mapping here.
    planeGeometry.materials = node.geometry!.materials
    node.geometry = planeGeometry
}
I have seen that you can define the texture coordinates yourself by providing them as a geometry source, like this:
let textCords: [vector_float2] = [] // Array of texture coordinates
let uvData = Data(bytes: textCords, count: textCords.count * MemoryLayout<vector_float2>.size)
let textureSource = SCNGeometrySource(data: uvData,
                                      semantic: .texcoord,
                                      vectorCount: textCords.count,
                                      usesFloatComponents: true,
                                      componentsPerVector: 2,
                                      bytesPerComponent: MemoryLayout<Float>.size,
                                      dataOffset: 0,
                                      dataStride: MemoryLayout<vector_float2>.size)
But I have no idea how to fill the textCords array so that it maps correctly onto the updated planeGeometry.
Edit:
Re-defining the approach:
Thinking more deeply about the problem, I came up with the idea that I need to modify the texture's transform to fix the stretching. If I do that, I have two options:
Either keep the texture big enough to fill the entire geometry while keeping a 1:1 ratio to avoid stretching,
or keep the texture at its original size with a 1:1 aspect ratio and repeat it multiple times to fill the entire geometry.
With either approach I'm still lost as to how to do it. What would you suggest?
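One way to get the second option without touching texture coordinates at all is to let the material repeat its texture and scale the texture transform by the plane's current real-world extent on every update. A minimal sketch, assuming one texture tile per metre (`metersPerTile` is a placeholder) and assuming the plane's texture coordinates span 0...1 across its extent, which the stretching described above suggests:

// Sketch: repeat the texture at a fixed real-world scale instead of stretching it.
// Call this from renderer(_:didUpdate:for:) after rebuilding the geometry, e.g.
// updateTextureScale(for: node.geometry!.firstMaterial!, planeAnchor: planeAnchor)
func updateTextureScale(for material: SCNMaterial, planeAnchor: ARPlaneAnchor) {
    let metersPerTile: Float = 1.0   // assumption: one texture repeat per metre

    material.diffuse.wrapS = .repeat
    material.diffuse.wrapT = .repeat
    // Scale the UVs so the repeat count follows the plane's current extent.
    material.diffuse.contentsTransform = SCNMatrix4MakeScale(
        planeAnchor.extent.x / metersPerTile,
        planeAnchor.extent.z / metersPerTile,
        1)
}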

What is SceneKit doing between calls to didApplyConstraints and willRenderScene?

The SceneKit rendering loop is well documented here https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However neither of these documents explains what SceneKit does between calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal, the second uses the depth buffer from the first but draws just a subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, gaussian blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes but they both draw everything) the extra 5ms disappears and the overall frame rate is actually faster even though much more rendering is happening.
I'm writing Swift targeting MacOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes you're invalidating parts of the scene graph, which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
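A rough sketch of that suggestion, under the assumptions from the question (two SCNRenderer passes driven from an MTKView draw call, glow clones with categoryBitMask = 2); the function name and pass-descriptor parameters are placeholders:

import SceneKit
import Metal

// Sketch: select nodes per pass with the camera's categoryBitMask instead of opacity,
// so no node property changes and the scene graph isn't invalidated between renders.
let glowCategory = 2   // the glow clones keep categoryBitMask = 2 permanently

func renderBothPasses(with renderer: SCNRenderer,
                      time: TimeInterval,
                      viewport: CGRect,
                      commandBuffer: MTLCommandBuffer,
                      mainPass: MTLRenderPassDescriptor,
                      glowPass: MTLRenderPassDescriptor) {
    // Pass 1: everything except the glow clones.
    renderer.pointOfView?.camera?.categoryBitMask = ~glowCategory
    renderer.render(atTime: time, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: mainPass)

    // Pass 2: only the glow clones, into the separate colour buffer.
    renderer.pointOfView?.camera?.categoryBitMask = glowCategory
    renderer.render(atTime: time, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: glowPass)
}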

How to color a scnplane with 2 different materials?

I have an SCNPlane that I created in the SceneKit editor and I want one side of the plane to have a certain image and the other side to have another image. How do I do that in the SceneKit editor?
So far I've tried adding 2 materials to the plane and unchecking double-sided, but that doesn't work.
Any help would be appreciated!
Per the SCNPlane docs:
The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.
That implies a plane has only one material — isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.
If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
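If you go the thin-box route in code, here is a minimal sketch (iOS shown; the asset names are placeholders). It relies on SCNBox applying its materials array in the order front, right, back, left, top, bottom:

import SceneKit
import UIKit

// Sketch: a box that is nearly flat along Z, with a different image on the
// front (+Z) and back (-Z) faces.
let box = SCNBox(width: 1.0, height: 1.0, length: 0.001, chamferRadius: 0)

let front = SCNMaterial()
front.diffuse.contents = "front.png"   // placeholder asset name
let back = SCNMaterial()
back.diffuse.contents = "back.png"     // placeholder asset name
let edge = SCNMaterial()
edge.diffuse.contents = UIColor.black  // the 1 mm edges are barely visible anyway

box.materials = [front, edge, back, edge, edge, edge]
let twoSidedNode = SCNNode(geometry: box)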
Very easy to do in 2022.
It's very easy and common to do this, you just add the rear as a child.
To be clear the node (and the rear you add) should both use the single-sided shader.
Obviously, the rear you add points in the other direction!
Do note that they are indeed in "exactly the same place". Sometimes folks new to 3D mesh think the two meshes would need to be "a little apart", not so.
public var rear = SCNNode()
private var theRearPlane = SCNPlane()

private func addRear() {
    addChildNode(rear)
    rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)

    theRearPlane. ... set width, height etc
    theRearPlane.firstMaterial?.isDoubleSided = false

    rear.geometry = theRearPlane
    rear.geometry?.firstMaterial!.diffuse.contents = .. your rear image/etc
}
So ...
/// Double-sided sprite
class SCNTwoSidedNode: SCNNode {
    public var rear = SCNNode()
    private var thePlane = SCNPlane()

    override init() {
        super.init()
        thePlane. .. set size, etc
        thePlane.firstMaterial?.isDoubleSided = false
        thePlane.firstMaterial?.transparencyMode = .aOne
        geometry = thePlane
        addRear()
    }

    // required by SCNNode's NSCoding conformance
    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
Consuming code can just refer to .rear, for example:
playerNode. ... the drawing of the Druid
playerNode.rear. ... Druid rules and abilities text
enemyNode. ... the drawing of the Mage
enemyNode.rear. ... Mage rules and abilities text
If you want to do this in the visual editor - very easy
It's trivial. Simply add the rear as a child. Rotate the child 180 degrees on Y.
It's that easy.
Make them both single-sided and put anything you want on the front and rear.
Simply move the main one (the front) normally and everything works.

OpenGL model, view, projection matrices

I am trying to understand cameras in OpenGL that use matrices.
I've written a simple shader that looks like this:
#version 330 core
layout (location = 0) in vec3 a_pos;
layout (location = 1) in vec4 a_col;

uniform mat4 u_mvp_mat;
uniform mat4 u_mod_mat;
uniform mat4 u_view_mat;
uniform mat4 u_proj_mat;

out vec4 f_color;

void main()
{
    vec4 v = u_mvp_mat * vec4(0.0, 0.0, 1.0, 1.0);
    gl_Position = u_mvp_mat * vec4(a_pos, 1.0);
    //gl_Position = u_proj_mat * u_view_mat * u_mod_mat * vec4(a_pos, 1.0);
    f_color = a_col;
}
It's a bit verbose, but that's because I am testing two setups: passing in the model, view and projection matrices and doing the multiplication on the GPU, or doing the multiplication on the CPU, passing in the combined MVP matrix, and having the shader only do the mvp * position multiplication.
I understand that the latter can offer a performance increase, but since I'm only drawing one quad I don't really see any performance issues at this point.
Right now I use this code to get the locations from my shader and create the model, view and projection matrices.
pos_loc = get_attrib_location(ce_get_default_shader(), "a_pos");
col_loc = get_attrib_location(ce_get_default_shader(), "a_col");
mvp_matrix_loc = get_uniform_location(ce_get_default_shader(), "u_mvp_mat");
model_mat_loc = get_uniform_location(ce_get_default_shader(), "u_mod_mat");
view_mat_loc = get_uniform_location(ce_get_default_shader(), "u_view_mat");
proj_matrix_loc = get_uniform_location(ce_get_default_shader(), "u_proj_mat");
float h_w = (float)ce_get_width() * 0.5f; //width = 320
float h_h = (float)ce_get_height() * 0.5f; //height = 480
model_mat = mat4_identity();
view_mat = mat4_identity();
proj_mat = mat4_identity();
point3* eye = point3_new(0, 0, 0);
point3* center = point3_new(0, 0, -1);
vec3* up = vec3_new(0, 1, 0);
mat4_look_at(view_mat, eye, center, up);
mat4_translate(view_mat, h_w, h_h, -20);
mat4_ortho(proj_mat, 0, ce_get_width(), 0, ce_get_height(), 1, 100);
mat4_scale(model_mat, 30, 30, 1);
mvp_mat = mat4_identity();
After this I set up my VAO and VBOs, then get ready to do the rendering.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(ce_get_default_shader()->shader_program);
glBindVertexArray(vao);
mvp_mat = mat4_multi(mvp_mat, view_mat, model_mat);
mvp_mat = mat4_multi(mvp_mat, proj_mat, mvp_mat);
glUniformMatrix4fv(mvp_matrix_loc, 1, GL_FALSE, mat4_get_data(mvp_mat));
glUniformMatrix4fv(model_mat_loc, 1, GL_FALSE, mat4_get_data(model_mat));
glUniformMatrix4fv(view_mat_loc, 1, GL_FALSE, mat4_get_data(view_mat));
glUniformMatrix4fv(proj_matrix_loc, 1, GL_FALSE, mat4_get_data(proj_mat));
glDrawElements(GL_TRIANGLES, quad->vertex_count, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
Assuming that all the matrix math is correct, I would like to abstract the view and projection matrices out into a camera struct, and the model matrix into a sprite struct, so that I can avoid all this matrix math and make things easier to use.
The matrix multiplication order is:
Projection * View * Model * Vector
so the camera would hold the projection and view matrices while the sprite holds the model matrix.
Do all your camera transformations and your sprite transformations, then right before you send the data to the GPU do the matrix multiplications.
If I remember correctly, matrix multiplication isn't commutative, so doing
view * projection * model will produce the wrong matrix.
pseudo code
glClearxxx(....);
glUseProgram(..);
glBindVertexArray(..);
mvp_mat = mat4_identity();
proj_mat = camera_get_proj_mat();
view_mat = camera_get_view_mat();
mod_mat = sprite_get_transform_mat();
mat4_multi(mvp_mat, view_mat, mod_mat); // mvp holds view * model
mat4_multi(mvp_mat, proj_mat, mvp_mat); // mvp holds proj * view * model
glUniformMatrix4fv(mvp_matrix_loc, 1, GL_FALSE, mat4_get_data(mvp_mat));
glDrawElements(...);
glBindVertexArray(0);
Is that a performant way to go about doing this that is scalable?
Is that a performant way to go about doing this that is scalable?
Yes, unless you have a very exotic use case of some sort which is very unlike the norm.
The performance of retrieving modelview and projection matrices from a camera is typically the last thing you should ever worry about.
It's because those matrices typically only need to be fetched once per frame per viewport. There's millions of iterations worth of other work that could occur in a frame while scanline-rasterizing primitives, and pulling matrices out of a camera is just a simple constant-time operation.
So typically you want to just make it as convenient as you like. In my case, I go all the way through an abstract interface of function pointers in a central SDK, at which point the functions then compute the proj/mv/ti_mv matrix on the fly out of user-defined properties associated with the camera. In spite of this, it never shows up as a hotspot -- it doesn't even show up in the profiler at all.
There are far more expensive things to worry about. Scalability implies scale: the complexity of retrieving matrices from a camera doesn't scale. The number of triangles, quads, lines or other primitives you want to render can scale, and the number of fragments processed in a fragment shader can scale. Cameras typically don't scale, except with respect to the number of viewports, and no one should ever have use for a million viewports.
I haven't checked it bit by bit, but what you're doing generally looks OK.
I would like to abstract view and projection matrix out into a camera struct
That's a most appropriate idea; I can hardly imagine a serious GL application without such an abstraction.
Is that a performant way to go about doing this that is scalable?
General constraints on scalability are:
diffuse and specular BRDFs (which also require, by the way, a light uniform, a normal attribute and the calculation of a normal matrix if the scaling of the model is non-uniform), which need per-pixel illumination for quality rendering
same with multiple lights (e.g. the sun and a close spotlight)
shadow maps! shadow maps? (one for each light-source?)
transparency
reflections (mirrors, glass, water)
textures
As you may take it from the list, you will not get very far with just an MVP uniform and a vertex coordinate attribute.
But the mere number of uniforms is by far not the most crucial point for performance - seeing your code, I'm positive that you will not recompile your shaders unnecessarily, will update your uniforms only if needed, will use Uniform Buffer Objects, etc.
The issue is the data that is plugged into those uniforms and VBOs. Or not.
Consider humanoid mesh "Alice" running (that's a mesh morph + translation) across a city square on a windy (water will have ripples) evening (more than one relevant light source), passing a fountain.
Let's assume we compute it all on the CPU, old-school, and only plug ready-to-render data into the shaders:
Alice's mesh is morphed, thus her VBOs need an update
Alice's mesh will move; thus all affected shadow maps will need an update (OK, granted, they are generated by shadow rendering loops on the GPU, but if you do it the wrong way you will shove a lot of data around)
Alice's reflection in the fountain will come and go
Alice's hair will be swirled around - the CPU may have quite a busy time, to say the least
(in fact the latter is so difficult that you will hardly see any halfway-realistic real-time animation of long, open hair, but amazingly (no, not really) many pony-tails and short haircuts)
And we've not yet talked about Alice's attire; let's just hope she's wearing a t-shirt and jeans (not a wide shirt and a skirt, which would require fall-of-the-folds and collision calculations).
As you may have guessed, that old-school approach doesn't take us far, and thus a balance has to be found between CPU and GPU operations.
In addition, one should think about parallelization of calculations at an early stage. It is advantageous to have the data as flat as possible in chunks as large as reasonable, so one just puts a pointer and size into a gl-call and bids that data farewell without any copying, re-arranging, looping or further ado.
That's my 2 cents of wisdom for today about GL performance and scalability.

WPF trying to use pixelshader to disable certain channels

I'm trying to use a pixel shader to disable specific channels on an image. Unfortunately, I can't seem to get my shader to work, nor do I know how to do step-through debugging on this. I've tried PIX for Windows, but haven't had any success with that tool.
Here's my shader file: ChannelEffect.fx
sampler2D implicitInputSampler : register(S0);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    // Get the source color
    float4 color = tex2D(implicitInputSampler, uv);

    color.g = 0.0f;
    color.b = 0.0f;

    // Return new color
    return color;
}
Right now I'm hard-coding the channels I'm disabling, just to test it. This sample should make only the red channel appear.
ChannelEffect channelEffect = GetChannelEffect(displayChannel);
image.Effect = channelEffect;
dc.DrawImage(image.Source, destRect);
The end result is that the image renders as normal. It's as if I'm not applying a shader at all. Any ideas?
I was doing something similar, and found Shazzam, which is a fantastic program that not only allows you to tweak and twiddle with shaders but will even generate code for you. I didn't use that code myself, but it gives a great example of how to use shaders with C# and XAML.
You can even import your own images to test with until you get your shader code exactly right.
