Why is a Metal shader gradient lighter when applied to a SceneKit node via SCNProgram than in an MTKView?

I have a gradient, generated by a Metal fragment shader, that I've applied to an SCNNode defined by a plane geometry.
It looks like this:
When I use the same shader applied to an MTKView rendered in an Xcode playground, the colors are darker. What is causing the colors to be lighter in the SceneKit version?
Here is the Metal shader and the GameViewController.
Shader:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>
struct myPlaneNodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float4x4 modelViewProjectionTransform;
    float2x3 boundingBox;
};

typedef struct {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
    float2 texCoords [[ attribute(SCNVertexSemanticTexcoord0) ]];
} VertexInput;

struct SimpleVertexWithUV
{
    float4 position [[position]];
    float2 uv;
};

vertex SimpleVertexWithUV gradientVertex(VertexInput in [[ stage_in ]],
                                         constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                         constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    SimpleVertexWithUV vert;
    vert.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    int width = abs(scn_node.boundingBox[0].x) + abs(scn_node.boundingBox[1].x);
    int height = abs(scn_node.boundingBox[0].y) + abs(scn_node.boundingBox[1].y);
    float2 resolution = float2(width, height);
    vert.uv = vert.position.xy * 0.5 / resolution;
    vert.uv = 0.5 - vert.uv;
    return vert;
}

fragment float4 gradientFragment(SimpleVertexWithUV in [[stage_in]],
                                 constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    float4 fragColor;
    float3 color = mix(float3(1.0, 0.6, 0.1), float3(0.5, 0.8, 1.0), sqrt(1 - in.uv.y));
    fragColor = float4(color, 1);
    return fragColor;
}
Game view controller:
import SceneKit
import QuartzCore
class GameViewController: NSViewController {

    @IBOutlet weak var gameView: GameView!

    override func awakeFromNib() {
        super.awakeFromNib()

        // create a new scene
        let scene = SCNScene()

        // create and add a camera to the scene
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        scene.rootNode.addChildNode(cameraNode)

        // place the camera
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

        // turn off default lighting
        self.gameView!.autoenablesDefaultLighting = false

        // set the scene to the view
        self.gameView!.scene = scene

        // allows the user to manipulate the camera
        self.gameView!.allowsCameraControl = true

        // show statistics such as fps and timing information
        self.gameView!.showsStatistics = true

        // configure the view
        self.gameView!.backgroundColor = NSColor.black

        var geometry: SCNGeometry
        geometry = SCNPlane(width: 10, height: 10)
        let geometryNode = SCNNode(geometry: geometry)

        let program = SCNProgram()
        program.fragmentFunctionName = "gradientFragment"
        program.vertexFunctionName = "gradientVertex"

        let gradientMaterial = SCNMaterial()
        gradientMaterial.program = program
        geometry.materials = [gradientMaterial]

        scene.rootNode.addChildNode(geometryNode)
    }
}

As explained in the Advances in SceneKit Rendering session from WWDC 2016, SceneKit now defaults to rendering in linear space, which is required to get accurate results from lighting equations.
The difference you see comes from the fact that in the MetalKit case you are providing color components (red, green and blue values) in the sRGB color space, while in the SceneKit case you are providing the exact same components in the linear sRGB color space.
It's up to you to decide which result is the one you want. Either you want a gradient in linear space (that's what you want if you are interpolating some data) or in gamma space (that's what drawing apps use).
If you want a gradient in gamma space, you'll need to convert the color components to be linear because that's what SceneKit works with. Taking the conversion formulas from the Metal Shading Language Specification, here's a solution:
static float srgbToLinear(float c) {
    if (c <= 0.04045)
        return c / 12.92;
    else
        return powr((c + 0.055) / 1.055, 2.4);
}

fragment float4 gradientFragment(SimpleVertexWithUV in [[stage_in]],
                                 constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    float3 color = mix(float3(1.0, 0.6, 0.1), float3(0.5, 0.8, 1.0), sqrt(1 - in.uv.y));
    color.r = srgbToLinear(color.r);
    color.g = srgbToLinear(color.g);
    color.b = srgbToLinear(color.b);
    float4 fragColor = float4(color, 1);
    return fragColor;
}

After learning the root cause of this problem, I did a bit more research on the topic and found another solution. Gamma-space rendering can be forced application-wide by setting the SCNDisableLinearSpaceRendering Boolean to true in the application's Info.plist.

I'm not sure, but it looks to me like your calculation of the size of the node is off, which throws off your .uv values depending on the position of the node.
You have:
int width = abs(scn_node.boundingBox[0].x) + abs(scn_node.boundingBox[1].x);
int height = abs(scn_node.boundingBox[0].y) + abs(scn_node.boundingBox[1].y);
I would think that should be:
int width = abs(scn_node.boundingBox[0].x - scn_node.boundingBox[1].x);
int height = abs(scn_node.boundingBox[0].y - scn_node.boundingBox[1].y);
You want the absolute difference between the two extremes, not the sum. The sum gets larger as the node moves right and down, because it effectively includes the position.
All of that said, isn't the desired (u, v) already provided to you in in.texCoords?

Related

Screen-aligned quad in clipspace is drawing atop nearer elements

I have a textured quad that needs to be drawn in screen space, sized and aligned to the viewport, in a SceneKit scene at the far plane; think of it as a textured quad acting as a background fill.
I've manually created a 4-vertex quad in clip space and written a vertex/fragment shader to draw it. The vertices are set to z = 1, which should place them at the far plane.
However, while the quad is correctly rendered as expected in screen space (e.g., it is correctly aligned), it is drawn atop all scene contents; with z = 1 for the clip-space vertices it should be behind everything.
I've been able to work around this by setting a low renderingOrder and disabling depth writes, but that's not a fix, that's a hack.
// build the plane vertices in clip space
let vertices: [SCNVector3] = [
    SCNVector3(-1, -1, +1),
    SCNVector3(+1, -1, +1),
    SCNVector3(+1, +1, +1),
    SCNVector3(-1, +1, +1),
]
let texCoords: [CGPoint] = [
    CGPoint(x: 0, y: 1),
    CGPoint(x: 1, y: 1),
    CGPoint(x: 1, y: 0),
    CGPoint(x: 0, y: 0),
]
let indices: [UInt16] = [
    0, 1, 2,
    0, 2, 3,
]

let vertexSource = SCNGeometrySource(vertices: vertices)
let texCoordSource = SCNGeometrySource(textureCoordinates: texCoords)
let elementSource = SCNGeometryElement(indices: indices, primitiveType: .triangles)
let planeGeometry = SCNGeometry(sources: [vertexSource, texCoordSource], elements: [elementSource])

let program = SCNProgram()
program.library = ShaderUtilities.metalLibrary
program.vertexFunctionName = "plane_vertex"
program.fragmentFunctionName = "plane_fragment"

if let material = planeGeometry.firstMaterial {
    material.program = program
    material.setValue(SCNMaterialProperty(contents: backdropImage as Any), forKey: "backgroundTexture")
}

let planeNode = SCNNode(geometry: planeGeometry)
meshCameraNode.addChildNode(planeNode)
The corresponding Metal shaders:
struct PlaneVertexIn {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct PlaneVertexOut {
    float4 position [[position]];
    float2 uv;
};

vertex PlaneVertexOut plane_vertex(PlaneVertexIn in [[ stage_in ]],
                                   constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                   constant NodeBuffer& scn_node [[buffer(1)]]) {
    PlaneVertexOut out;
    // the vertices are already in clip space to form a screen-aligned quad, so no need to apply a transform
    out.position = float4(in.position.x, in.position.y, in.position.z, 1.0);
    out.uv = in.uv;
    return out;
}

fragment float4 plane_fragment(PlaneVertexOut out [[ stage_in ]],
                               constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                               texture2d<float, access::sample> backgroundTexture [[texture(0)]]) {
    constexpr sampler textureSampler(coord::normalized, filter::linear, address::repeat);
    return backgroundTexture.sample(textureSampler, out.uv);
}

Smart DropShadowEffect

Is it possible to have DropShadowEffect ignore certain colors when rendering the shadow? To have a sort of masked (color-selective) shadow?
My problem is that the shadow can only be assigned to the whole visual element (the graph). It looks like this:
And I want this:
Notice the grid lines without shadow (except the 0,0 ones). This can be achieved by having two graphs synchronized in zooming/offset, one without the shadow effect containing the grid and another with the shadow containing the rest. But I am not very happy with this solution (I predict lots of problems with it in the future). So I'd rather prefer to modify DropShadowEffect somehow.
I can create and use a ShaderEffect, but I have no knowledge of how to program shaders to produce an actual shadow effect (if it can be produced by shaders at all).
Perhaps there is a much easier way of doing something with DropShadowEffect itself? Anyone?
Edit
I tried to make a shader effect:
sampler2D _input : register(s0);
float _width : register(C0);
float _height : register(C1);
float _depth : register(C2); // shadow depth

float4 main(float2 uv : TEXCOORD) : COLOR
{
    // get pixel size
    float2 pixel = { 1 / _width, 1 / _height };
    // find color at offset
    float2 offset = float2(uv.x - pixel.x * _depth, uv.y - pixel.y * _depth);
    float4 color = tex2D(_input, offset);
    // convert to gray?
    //float gray = dot(color, float4(0.1, 0.1, 0.1, 0));
    //color = float4(gray, gray, gray, 1);
    // saturate?
    //color = saturate(color);
    return tex2D(_input, uv) + color;
}
But I'm failing at everything.
Edit
Here is a screenshot of the graph appearance I like (for those who try to convince me not to do this):
Currently it is achieved by having a special Graph which has this template:
<Border x:Name="PART_Border" BorderThickness="1" BorderBrush="Gray" CornerRadius="4" Background="White">
    <Grid>
        <Image x:Name="PART_ImageBack" Stretch="None"/>
        <Image x:Name="PART_ImageFront" Stretch="None">
            <Image.Effect>
                <DropShadowEffect Opacity="0.3"/>
            </Image.Effect>
        </Image>
    </Grid>
</Border>
Everything is rendered onto PART_ImageFront (with shadow), while the grid is rendered onto PART_ImageBack (without shadow). Performance-wise it is still good.
I have zero experience with pixel shaders, but here's my quick and dirty attempt at a shadow effect that ignores "uncolored" pixels:
sampler2D _input : register(s0);
float _width : register(C0);
float _height : register(C1);
float _depth : register(C2);
float _opacity : register(C3);

float3 rgb_to_hsv(float3 RGB) {
    float r = RGB.x;
    float g = RGB.y;
    float b = RGB.z;
    float minChannel = min(r, min(g, b));
    float maxChannel = max(r, max(g, b));
    float h = 0;
    float s = 0;
    float v = maxChannel;
    float delta = maxChannel - minChannel;
    if (delta != 0) {
        s = delta / v;
        if (r == v) h = (g - b) / delta;
        else if (g == v) h = 2 + (b - r) / delta;
        else if (b == v) h = 4 + (r - g) / delta;
    }
    return float3(h, s, v);
}

float4 main(float2 uv : TEXCOORD) : COLOR {
    float width = _width; // 512;
    float height = _height; // 512;
    float depth = _depth; // 3;
    float opacity = _opacity; // 0.25;

    float2 pixel = { 1 / width, 1 / height };
    float2 offset = float2(uv.x - pixel.x * depth, uv.y - pixel.y * depth);

    float4 srcColor = tex2D(_input, offset);
    float3 srcHsv = rgb_to_hsv(srcColor.rgb);
    float4 dstColor = tex2D(_input, uv);

    // add shadow for colored pixels only
    // tweak saturation threshold as necessary
    if (srcHsv.y >= 0.1) {
        float gray = dot(srcColor, float4(0.1, 0.1, 0.1, 0.0));
        float4 multiplier = float4(gray, gray, gray, opacity * srcColor.a);
        return dstColor + (float4(0.1, 0.1, 0.1, 1.0) * multiplier);
    }

    return dstColor;
}
Here it is in action against a (totally legit) chart that I drew in Blend with the pencil tool:
The shader effect is applied on the root panel containing the axes, grid lines, and series lines, and it generates a shadow only for the series lines.
I don't think it's realistic to expect a shader to be able to apply a shadow to the axes and labels while ignoring the grid lines; the antialiasing on the text is bound to intersect the color/saturation range of the grid lines. I think applying the shadow to just the series lines is cleaner and more aesthetically pleasing anyway.
sampler2D input : register(s0);

float4 main(float2 uv : TEXCOORD) : COLOR {
    float4 Color;
    Color = tex2D(input, uv.xy);
    return Color;
}
This is a basic 'do nothing' shader. The line with the tex2D call takes the color that would normally be plotted at the current location (and in this case simply returns it).
Instead of sampling uv.xy you could add an offset vector to uv.xy and return that color. This would shift the entire image in the direction of the offset vector.
You could combine these two:
1. Sample uv.xy; if it is set to a visible color, plot that color (this will keep all the lines visible at the right location).
2. If it is transparent, sample a bit to the top left; if that sample is set to a color you want to have a shadow, return the shadow color.
Step 2 can be changed into: if it is set to a color you do not want to have a shadow, return a transparent color.
The offset, the colors to test, and the shadow color could all be parameters of the effect (a rough sketch of this combination is given below).
I strongly suggest playing around with Shazzam; it will allow you to test your shader and it will generate the C# code for you.
Note that the uv coordinates are not in pixels but are scaled from 0.0 to 1.0.
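To make that combination concrete, here is a rough, untested sketch of the idea as a WPF pixel shader. The shadowColor, ignoreColor and shadowOffset registers are illustrative assumptions (they would be exposed as dependency properties on your ShaderEffect, for example via PixelShaderConstantCallback), not part of any existing effect:
sampler2D input : register(s0);
float4 shadowColor : register(C0);   // assumed parameter: color to paint where a shadow falls
float4 ignoreColor : register(C1);   // assumed parameter: color that should not cast a shadow (e.g. the grid)
float2 shadowOffset : register(C2);  // assumed parameter: offset in uv space (0..1), e.g. (0.01, 0.01)

float4 main(float2 uv : TEXCOORD) : COLOR
{
    // step 1: if the current pixel is already visible, keep it as-is
    float4 current = tex2D(input, uv);
    if (current.a > 0.01)
        return current;

    // step 2: otherwise look a bit towards the top left; if something
    // shadow-worthy sits there, paint the shadow color here instead
    float4 neighbor = tex2D(input, uv - shadowOffset);
    bool isIgnored = all(abs(neighbor.rgb - ignoreColor.rgb) < 0.05);
    if (neighbor.a > 0.01 && !isIgnored)
        return shadowColor * neighbor.a;

    return current; // stays transparent
}
This implements the "return a transparent color for colors you do not want to shadow" variant of step 2; testing against a saturation threshold instead, as in the other answer, works the same way.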
Addition
A poor man's blur (anti-aliasing) could be obtained by sampling more pixels around the offset and calculating an average of the colors found that should cause a shadow. This will cause more pixels to receive a shadow.
To calculate the shadow color you could simply darken the existing color by multiplying it with a factor between 0.0 (black) and 1.0 (original color).
By using the average from the blur you can multiply the shadow color again, causing the blurred shadow to mix with the original color (a sketch of this follows below).
More precise (and more expensive) would be to translate the RGB values to HSL values and use 'official' darkening formulas to determine the shadow color.
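As an illustration of the blur idea, the single offset lookup from the sketch above could be replaced with an averaged one, and the average used to darken the existing color. Again a sketch only: the 3x3 kernel, the 0.3 darkening factor and the pixelSize/shadowOffset registers are arbitrary assumptions, and the nested loops need a ps_3_0 target (or manual unrolling):
sampler2D input : register(s0);
float2 shadowOffset : register(C0);  // assumed parameter: offset in uv space
float2 pixelSize : register(C1);     // assumed parameter: (1 / width, 1 / height)

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 current = tex2D(input, uv);

    // poor man's blur: average the coverage (alpha) of a 3x3 block of
    // samples around the offset position; more samples give a softer edge
    float coverage = 0;
    for (int y = -1; y <= 1; y++)
        for (int x = -1; x <= 1; x++)
            coverage += tex2D(input, uv - shadowOffset + float2(x, y) * pixelSize).a;
    coverage /= 9.0;

    // shadow color: the existing color darkened towards black
    // (factor 0.0 = black, 1.0 = original color)
    float4 shadow = float4(current.rgb * 0.3, 1.0);

    // mix the shadow with the original color according to the blurred coverage
    return lerp(current, shadow, coverage * 0.5);
}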

Displaying RGBA images using Image class w/o pre-multiplied alpha

I'm trying to display the separate R, G, B and A channels of a texture based on user input. I'm using an Image class to display textures that have alpha channels. These textures are loaded into BitmapSource objects and have a Format of Bgra32. The problem is that when I set the Image's Source to the BitmapSource, if I display any combination of the R, G or B channels, I always get pixels that are pre-multiplied by the alpha value. I wrote a really simple shader, pre-compiled it, and used a ShaderEffect class assigned to the Image's Effect property in order to separate out and display the individual channels, but apparently the shader is given the texture after WPF has pre-multiplied the alpha value onto it.
Here's the code snippet for setting the Image's Source:
BitmapSource b = MyClass.GetBitmapSource(filepath);
// just as a test, write out the bitmap to file to see if there's an alpha.
// see attached image1
BmpBitmapEncoder test = new BmpBitmapEncoder();
//test.Compression = TiffCompressOption.None;
FileStream stest = new FileStream(@"c:\temp\testbmp2.bmp", FileMode.Create);
test.Frames.Add(BitmapFrame.Create(b));
test.Save(stest);
stest.Close();
// effect is a class derived from ShaderEffect. The pixel shader associated with the
// effect displays the color channel of the texture loaded in the Image object
// depending on which member of the Point4D is set. In this case, we are showing
// the RGB channels but no alpha
effect.PixelMask = new System.Windows.Media.Media3D.Point4D(1.0f, 1.0f, 1.0f, 0.0f);
this.image1.Effect = effect;
this.image1.Source = b;
this.image1.UpdateLayout();
Here's the shader code (its pretty simple, but I figured I'd include it just for completeness):
sampler2D inputImage : register(s0);
float4 channelMasks : register(c0);

float4 main(float2 uv : TEXCOORD) : COLOR0
{
    float4 outCol = tex2D(inputImage, uv);
    if (!any(channelMasks.rgb - float3(1, 0, 0)))
    {
        outCol.rgb = float3(outCol.r, outCol.r, outCol.r);
    }
    else if (!any(channelMasks.rgb - float3(0, 1, 0)))
    {
        outCol.rgb = float3(outCol.g, outCol.g, outCol.g);
    }
    else if (!any(channelMasks.rgb - float3(0, 0, 1)))
    {
        outCol.rgb = float3(outCol.b, outCol.b, outCol.b);
    }
    else
    {
        outCol *= channelMasks;
    }
    if (channelMasks.a == 1.0)
    {
        outCol.r = outCol.a;
        outCol.g = outCol.a;
        outCol.b = outCol.a;
    }
    outCol.a = 1;
    return outCol;
}
Here's the output from the code above:
(sorry, I don't have enough reputation points to post images, or apparently more than 2 links)
The file save to disk (C:\temp\testbmp2.bmp) in photoshop:
http://screencast.com/t/eeEr5kGgPukz
Image as displayed in my WPF application (using image mask in code snippet above):
http://screencast.com/t/zkK0U5I7P7

Strange square lighting artefacts in OpenGL

I have a program that generates a heightmap and then displays it as a mesh with OpenGL. When I try to add lighting, it ends up with weird square shapes covering the mesh. They are more noticeable in some areas than others, but are always there.
I was using a quad mesh, but nothing changed after switching to a triangle mesh. I've used at least three different methods to calculate the vertex normals, all with the same effect. I was doing the lighting manually with shaders, but nothing changes when using the built-in OpenGL lighting system.
My latest normal-generating code (faces is an array of indices into verts, the vertex array):
int i;
for (i = 0; i < NINDEX; i += 3) {
    vec v[3];
    v[0] = verts[faces[i + 0]];
    v[1] = verts[faces[i + 1]];
    v[2] = verts[faces[i + 2]];
    vec v1 = vec_sub(v[1], v[0]);
    vec v2 = vec_sub(v[2], v[0]);
    vec n = vec_norm(vec_cross(v2, v1));
    norms[faces[i + 0]] = vec_add(norms[faces[i + 0]], n);
    norms[faces[i + 1]] = vec_add(norms[faces[i + 1]], n);
    norms[faces[i + 2]] = vec_add(norms[faces[i + 2]], n);
}
for (i = 0; i < NVERTS; i++) {
    norms[i] = vec_norm(norms[i]);
}
Although that isn't the only code I've used, so I doubt that it is the cause of the problem.
I draw the mesh with:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, norms);
glDrawElements(GL_TRIANGLES, NINDEX, GL_UNSIGNED_SHORT, faces);
And I'm not currently using any shaders.
What could be causing this?
EDIT: A more comprehensive set of screenshots:
Wireframe
Flat shading, OpenGL lighting
Smooth shading, OpenGL lighting
Lighting done in shader
For the last one, the shader code is
Vertex:
varying vec3 lightvec, normal;

void main() {
    vec3 lightpos = vec3(0, 0, 100);
    vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
    normal = gl_NormalMatrix * gl_Normal;
    lightvec = normalize(lightpos - v);
    gl_Position = ftransform();
}
Fragment:
varying vec3 lightvec, normal;

void main(void) {
    float l = dot(lightvec, normal);
    gl_FragColor = vec4(l, l, l, 1);
}
You need to either normalize the normal in the fragment shader, like so:
varying vec3 lightvec, normal;

void main(void) {
    vec3 normalNormed = normalize(normal);
    float l = dot(lightvec, normalNormed);
    gl_FragColor = vec4(l, l, l, 1);
}
This can be expensive though. What will also work in this case, with directional lights, is to use vertex lighting. So calculate the light value in the vertex shader
varying float lightIntensity;

void main() {
    vec3 lightpos = vec3(0, 0, 100);
    vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 normal = gl_NormalMatrix * gl_Normal;
    vec3 lightvec = normalize(lightpos - v);
    lightIntensity = dot(normal, lightvec);
    gl_Position = ftransform();
}
and use it in the fragment shader,
varying float lightIntensity;

void main(void) {
    float l = lightIntensity;
    gl_FragColor = vec4(l, l, l, 1);
}
I hope this fixes it, let me know if it doesn't.
EDIT: Here's a small diagram that explains what is most likely happening:
EDIT2:
If that doesn't help, add more triangles. Interpolate the values of your heightmap and add some vertices in between.
Alternatively, try changing your tessellation scheme. For example, a mesh of equilateral triangles like so could make the artifacts less prominent.
You'll have to do some interpolation on your heightmap.
Otherwise I have no idea... Good luck!
I don't have a definitive answer for the non-shader versions, but I wanted to add that if you're doing per-pixel lighting in your fragment shader, you should probably be normalizing the normal and lightvec inside the fragment shader.
If you don't do this, they may not be unit length (a linear interpolation between two normalized vectors is not necessarily normalized). This could explain some of the artifacts you see in the shader version, as the magnitude of the dot product would vary as a function of the distance from the vertices, which looks like what you're seeing.
EDIT: Another thought: are you doing any non-uniform scaling (different x, y, z) of the mesh when rendering the non-shader version? If you scale it, then you need to either modify the normals by the inverse scale factor or set glEnable(GL_NORMALIZE). See here for more:
http://www.lighthouse3d.com/tutorials/glsl-tutorial/normalization-issues/

HLSL Shader to Subtract Background Image

I am trying to get an HLSL pixel shader for Silverlight to work that subtracts the background image from a video image. Can anyone suggest a more sophisticated algorithm than the one I am using, because my algorithm isn't doing it correctly?
float Tolerance : register(C1);
SamplerState ImageSampler : register(S0);
SamplerState BackgroundSampler : register(S1);

struct VS_INPUT
{
    float4 Position : POSITION;
    float4 Diffuse : COLOR0;
    float2 UV0 : TEXCOORD0;
    float2 UV1 : TEXCOORD1;
};

struct VS_OUTPUT
{
    float4 Position : POSITION;
    float4 Color : COLOR0;
    float2 UV : TEXCOORD0;
};

float4 PS(VS_OUTPUT input) : SV_Target
{
    float4 color = tex2D(ImageSampler, input.UV);
    float4 background = tex2D(BackgroundSampler, input.UV);
    if (abs(background.r - color.r) <= Tolerance &&
        abs(background.g - color.g) <= Tolerance &&
        abs(background.b - color.b) <= Tolerance)
    {
        color.rgba = 0;
    }
    return color;
}
To see an example of this, you need a computer with a webcam:
Go to the page http://xmldocs.net/alphavideo/background.html
Press [Start Recording].
Move your body out of the scene and press [Capture Background].
Then move your body back into the scene and use the slider to adjust the Tolerance value passed to the shader.
EDIT
A single pixel isn't useful for such a task because of noise, so the essence of the algorithm should be to measure similarity between pixel blocks. Recipe pseudo-code (based on a correlation measurement):
Divide image into N x M grid
For each N,M cell in grid:
    correlation = correlation_between(signal_pixels_of(N,M),
                                      background_pixels_of(N,M));
    if (correlation > threshold)
        show_background_cell(N,M)
    else
        show_signal_cell(N,M)
This is sequential pseudo-code, but it can easily be converted to an HLSL shader: each pixel determines which pixel block it belongs to, then measures the correlation between the corresponding blocks of the video frame and the background, and based on that correlation shows or hides the current pixel (a rough sketch follows below).
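A rough HLSL sketch of that recipe might look like the following. The Threshold and TextureSize registers and the 8x8 block size are illustrative assumptions; note that Silverlight effects are limited to the ps_2_0 profile, which does not allow loops like these, so this is only meant to make the recipe concrete (it would need a ps_3_0 target, and even there sampling a whole block per pixel is expensive):
sampler2D ImageSampler : register(s0);       // current video frame
sampler2D BackgroundSampler : register(s1);  // captured background
float Threshold : register(C0);              // assumed parameter, e.g. 0.9
float2 TextureSize : register(C1);           // assumed parameter: texture size in pixels

#define BLOCK 8  // 8x8 pixel blocks

float luminance(float4 c)
{
    return dot(c.rgb, float3(0.299, 0.587, 0.114));
}

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float2 pixel = 1.0 / TextureSize;

    // top-left corner (in uv space) of the block this pixel belongs to
    float2 blockOrigin = floor(uv * TextureSize / BLOCK) * BLOCK * pixel;

    // accumulate the statistics needed for a correlation estimate over the block
    float sumX = 0, sumY = 0, sumXX = 0, sumYY = 0, sumXY = 0;
    for (int j = 0; j < BLOCK; j++)
    {
        for (int i = 0; i < BLOCK; i++)
        {
            float2 p = blockOrigin + (float2(i, j) + 0.5) * pixel;
            float x = luminance(tex2D(ImageSampler, p));
            float y = luminance(tex2D(BackgroundSampler, p));
            sumX += x;  sumY += y;
            sumXX += x * x;  sumYY += y * y;  sumXY += x * y;
        }
    }

    float n = BLOCK * BLOCK;
    float cov  = sumXY / n - (sumX / n) * (sumY / n);
    float varX = sumXX / n - (sumX / n) * (sumX / n);
    float varY = sumYY / n - (sumY / n) * (sumY / n);
    float correlation = cov / max(sqrt(varX * varY), 1e-5);

    // a high correlation means this block matches the background, so hide it
    float4 color = tex2D(ImageSampler, uv);
    if (correlation > Threshold)
        color = float4(0, 0, 0, 0);
    return color;
}
In practice the per-block statistics would more likely be computed on the CPU, or in a separate downscaling pass, and fed to a cheap per-pixel shader as a small mask texture.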
Try this approach.
Good luck!
