HLSL Shader to Subtract Background Image - silverlight

I am trying to get an HLSL pixel shader for Silverlight to subtract a background image from a video image. Can anyone suggest a more sophisticated algorithm than the one I am using, because my algorithm isn't doing it correctly?
float Tolerance : register(C1);
SamplerState ImageSampler : register(S0);
SamplerState BackgroundSampler : register(S1);
struct VS_INPUT
{
float4 Position : POSITION;
float4 Diffuse : COLOR0;
float2 UV0 : TEXCOORD0;
float2 UV1 : TEXCOORD1;
};
struct VS_OUTPUT
{
float4 Position : POSITION;
float4 Color : COLOR0;
float2 UV : TEXCOORD0;
};
float4 PS( VS_OUTPUT input ) : SV_Target
{
float4 color = tex2D( ImageSampler, input.UV );
float4 background = tex2D( BackgroundSampler, input.UV);
if (abs(background.r - color.r) <= Tolerance &&
abs(background.g - color.g) <= Tolerance &&
abs(background.b - color.b) <= Tolerance)
{
color.rgba = 0;
}
return color;
}
To see an example of this, you need a computer with a webcam:
Go to the page http://xmldocs.net/alphavideo/background.html
Press [Start Recording].
Move your body out of the scene and press [Capture Background].
Then move your body back into the scene and use the slider to adjust the Tolerance value passed to the shader.

EDIT
A single pixel isn't useful for such a task because of noise, so the essence of the algorithm should be to measure similarity between blocks of pixels. Recipe pseudo-code (based on a correlation measurement):
Divide image into N x M grid
For each N,M cell in grid:
    correlation = correlation_between(signal_pixels_of(N,M),
                                      background_pixels_of(N,M));
    if (correlation > threshold)
        show_background_cell(N,M)
    else
        show_signal_cell(N,M)
This is sequential pseudo-code, but it can be converted to an HLSL shader fairly easily: each pixel determines which block it belongs to, measures the correlation between the corresponding signal and background blocks, and then shows or hides itself based on that correlation.
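To make that concrete, here is a rough HLSL sketch of the block idea (not the exact correlation recipe above, and written from scratch rather than taken from the question's code): it averages per-channel absolute differences over a 3x3 neighbourhood as a cheap stand-in for correlation. TexelSize is an assumed extra constant holding 1/width and 1/height, and the unrolled loop may be too heavy for Silverlight's ps_2_0 limits, so treat it only as a starting point:
// Hedged sketch: block similarity via mean absolute difference, not true correlation.
float Tolerance : register(C1);
float2 TexelSize : register(C2);   // assumed constant: (1/width, 1/height) of the frame
sampler2D ImageSampler : register(S0);
sampler2D BackgroundSampler : register(S1);
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(ImageSampler, uv);
    float diff = 0;
    // compare a 3x3 block of the signal against the same block of the background
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            float2 p = uv + float2(x, y) * TexelSize;
            float3 s = tex2D(ImageSampler, p).rgb;
            float3 b = tex2D(BackgroundSampler, p).rgb;
            diff += dot(abs(s - b), float3(1, 1, 1));
        }
    }
    diff /= 27;   // 9 samples x 3 channels -> mean absolute difference
    // blocks that look like the background become fully transparent
    if (diff <= Tolerance)
        color = 0;
    return color;
}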
Try this approach, good luck!

Related

Why is a Metal shader gradient lighter as a SCNProgram applied to a SceneKit node than it is in an MTKView?

I have a gradient, generated by a Metal fragment shader that I've applied to a SCNNode defined by a plane geometry.
It looks like this:
When I use the same shader applied to an MTKView rendered in an Xcode playground, the colors are darker. What is causing the colors to be lighter in the SceneKit version?
Here is the Metal shader and the GameViewController.
Shader:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>
struct myPlaneNodeBuffer {
float4x4 modelTransform;
float4x4 modelViewTransform;
float4x4 normalTransform;
float4x4 modelViewProjectionTransform;
float2x3 boundingBox;
};
typedef struct {
float3 position [[ attribute(SCNVertexSemanticPosition) ]];
float2 texCoords [[ attribute(SCNVertexSemanticTexcoord0) ]];
} VertexInput;
struct SimpleVertexWithUV
{
float4 position [[position]];
float2 uv;
};
vertex SimpleVertexWithUV gradientVertex(VertexInput in [[ stage_in ]],
constant SCNSceneBuffer& scn_frame [[buffer(0)]],
constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
SimpleVertexWithUV vert;
vert.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
int width = abs(scn_node.boundingBox[0].x) + abs(scn_node.boundingBox[1].x);
int height = abs(scn_node.boundingBox[0].y) + abs(scn_node.boundingBox[1].y);
float2 resolution = float2(width,height);
vert.uv = vert.position.xy * 0.5 / resolution;
vert.uv = 0.5 - vert.uv;
return vert;
}
fragment float4 gradientFragment(SimpleVertexWithUV in [[stage_in]],
constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
float4 fragColor;
float3 color = mix(float3(1.0, 0.6, 0.1), float3(0.5, 0.8, 1.0), sqrt(1-in.uv.y));
fragColor = float4(color,1);
return(fragColor);
}
Game view controller:
import SceneKit
import QuartzCore
class GameViewController: NSViewController {
@IBOutlet weak var gameView: GameView!
override func awakeFromNib(){
super.awakeFromNib()
// create a new scene
let scene = SCNScene()
// create and add a camera to the scene
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
scene.rootNode.addChildNode(cameraNode)
// place the camera
cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
// turn off default lighting
self.gameView!.autoenablesDefaultLighting = false
// set the scene to the view
self.gameView!.scene = scene
// allows the user to manipulate the camera
self.gameView!.allowsCameraControl = true
// show statistics such as fps and timing information
self.gameView!.showsStatistics = true
// configure the view
self.gameView!.backgroundColor = NSColor.black
var geometry:SCNGeometry
geometry = SCNPlane(width:10, height:10)
let geometryNode = SCNNode(geometry: geometry)
let program = SCNProgram()
program.fragmentFunctionName = "gradientFragment"
program.vertexFunctionName = "gradientVertex"
let gradientMaterial = SCNMaterial()
gradientMaterial.program = program
geometry.materials = [gradientMaterial]
scene.rootNode.addChildNode(geometryNode)
}
}
As explained in the Advances in SceneKit Rendering session from WWDC 2016, SceneKit now defaults to rendering in linear space, which is required to have accurate results from lighting equations.
The difference you see comes from the fact that in the MetalKit case you are providing color components (red, green and blue values) in the sRGB color space, while in the SceneKit case you are providing the exact same components in the linear sRGB color space.
It's up to you to decide which result is the one you want. Either you want a gradient in linear space (that's what you want if you are interpolating some data) or in gamma space (that's what drawing apps use).
If you want a gradient in gamma space, you'll need to convert the color components to be linear because that's what SceneKit works with. Taking the conversion formulas from the Metal Shading Language Specification, here's a solution:
static float srgbToLinear(float c) {
if (c <= 0.04045)
return c / 12.92;
else
return powr((c + 0.055) / 1.055, 2.4);
}
fragment float4 gradientFragment(SimpleVertexWithUV in [[stage_in]],
constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
float3 color = mix(float3(1.0, 0.6, 0.1), float3(0.5, 0.8, 1.0), sqrt(1 - in.uv.y));
color.r = srgbToLinear(color.r);
color.g = srgbToLinear(color.g);
color.b = srgbToLinear(color.b);
float4 fragColor = float4(color, 1);
return(fragColor);
}
After learning the root cause of this problem, I did a bit more research on the topic and found another solution. Gamma space rendering can be forced application wide by setting
SCNDisableLinearSpaceRendering to TRUE in the application's plist.
I'm not sure, but it looks to me like your calculation of the size of the node is off, leading your .uv to be off, depending on the position of the node.
You have:
int width = abs(scn_node.boundingBox[0].x) + abs(scn_node.boundingBox[1].x);
int height = abs(scn_node.boundingBox[0].y) + abs(scn_node.boundingBox[1].y);
I would think that should be:
int width = abs(scn_node.boundingBox[0].x - scn_node.boundingBox[1].x);
int height = abs(scn_node.boundingBox[0].y - scn_node.boundingBox[1].y);
You want the absolute difference between the two extremes, not the sum. The sum gets larger as the node moves right and down, because it effectively includes the position.
All of that said, isn't the desired (u, v) already provided to you in in.texCoords?

Smart DropShadowEffect

Is it possible to have DropShadowEffect to ignore certain colors when rendering shadow? To have sort of masked (color selective) shadow?
My problem is that the shadow can only be assigned to the whole visual element (graph). It looks like this:
And I want this:
Notice the grid lines without shadow (except the 0,0 ones). This can be achieved by having two graphs synchronized in zooming/offset, one without the shadow effect containing the grid and another with the shadow containing the rest. But I am not very happy with this solution (I predict lots of problems with it in the future), so I'd rather modify DropShadowEffect somehow.
I can create and use a ShaderEffect, but I have no knowledge of how to program shaders to produce an actual shadow effect (if it can be produced by shaders at all).
Perhaps there is much easier way of doing something with DropShadowEffect itself? Anyone?
Edit
I tried to make shader effect:
sampler2D _input : register(s0);
float _width : register(C0);
float _height : register(C1);
float _depth : register(C2); // shadow depth
float4 main(float2 uv : TEXCOORD) : COLOR
{
// get pixel size
float2 pixel = {1 / _width, 1 / _height};
// find color at offset
float2 offset = float2(uv.x - pixel.x * _depth, uv.y - pixel.y * _depth);
float4 color = tex2D(_input, offset);
// convert to gray?
//float gray = dot(color, float4(0.1, 0.1, 0.1, 0));
//color = float4(gray, gray, gray, 1);
// saturate?
//color = saturate(color);
return tex2D(_input, uv) + color;
}
But I'm failing at everything.
Edit
Here is a screenshot of the graph appearance which I like (for those who try to convince me not to do this):
Currently it is achieved by having a special Graph which has this template:
<Border x:Name="PART_Border" BorderThickness="1" BorderBrush="Gray" CornerRadius="4" Background="White">
<Grid>
<Image x:Name="PART_ImageBack" Stretch="None"/>
<Image x:Name="PART_ImageFront" Stretch="None">
<Image.Effect>
<DropShadowEffect Opacity="0.3"/>
</Image.Effect>
</Image>
</Grid>
</Border>
Everything is rendered onto PART_ImageFront (with shadow), while the grid is rendered onto PART_ImageBack (without shadow). Performance-wise it is still good.
I have zero experience with pixel shaders, but here's my quick and dirty attempt at a shadow effect that ignores "uncolored" pixels:
sampler2D _input : register(s0);
float _width : register(C0);
float _height : register(C1);
float _depth : register(C2);
float _opacity : register(C3);
float3 rgb_to_hsv(float3 RGB) {
float r = RGB.x;
float g = RGB.y;
float b = RGB.z;
float minChannel = min(r, min(g, b));
float maxChannel = max(r, max(g, b));
float h = 0;
float s = 0;
float v = maxChannel;
float delta = maxChannel - minChannel;
if (delta != 0) {
s = delta / v;
if (r == v) h = (g - b) / delta;
else if (g == v) h = 2 + (b - r) / delta;
else if (b == v) h = 4 + (r - g) / delta;
}
return float3(h, s, v);
}
float4 main(float2 uv : TEXCOORD) : COLOR {
float width = _width; // 512;
float height = _height; // 512;
float depth = _depth; // 3;
float opacity = _opacity; // 0.25;
float2 pixel = { 1 / width, 1 / height };
float2 offset = float2(uv.x - pixel.x * depth, uv.y - pixel.y * depth);
float4 srcColor = tex2D(_input, offset);
float3 srcHsv = rgb_to_hsv(srcColor.rgb);
float4 dstColor = tex2D(_input, uv);
// add shadow for colored pixels only
// tweak saturation threshold as necessary
if (srcHsv.y >= 0.1) {
float gray = dot(srcColor, float4(0.1, 0.1, 0.1, 0.0));
float4 multiplier = float4(gray, gray, gray, opacity * srcColor.a);
return dstColor + (float4(0.1, 0.1, 0.1, 1.0) * multiplier);
}
return dstColor;
}
Here it is in action against a (totally legit) chart that I drew in Blend with the pencil tool:
The shader effect is applied on the root panel containing the axes, grid lines, and series lines, and it generates a shadow only for the series lines.
I don't think it's realistic to expect a shader to be able to apply a shadow to the axes and labels while ignoring the grid lines; the antialiasing on the text is bound to intersect the color/saturation range of the grid lines. I think applying the shadow to just the series lines is cleaner and more aesthetically pleasing anyway.
sampler2D input : register(s0);
float4 main(float2 uv : TEXCOORD) : COLOR {
float4 Color;
Color = tex2D( input , uv.xy);
return Color;
}
This is a basic 'do nothing' shader. The line with the tex2D call takes the color that would normally be plotted at the current location (and in this case simply returns it).
Instead of sampling uv.xy you could add an offset vector to uv.xy and return that color. This would shift the entire image in the direction of the offset vector.
You could combine these two:
1. Sample uv.xy; if it is set to a visible color, plot that color (this will keep all the lines visible at the right location).
2. If it is transparent, sample a bit to the top left; if that sample is set to a color you want to have a shadow, return the shadow color.
Step 2 can be changed into: if it is set to a color you do not want to have a shadow, return a transparent color.
The offset and the colors to test and to use as shadow color could be parameters of the effect.
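A hedged sketch of those steps might look like this (the offset, the alpha threshold, the saturation test, and the 0.3 shadow opacity are all made-up illustrative values, not anything from the original effect):
sampler2D input : register(s0);
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(input, uv);
    // step 1: the pixel already has visible content, keep it
    if (color.a > 0.01)
        return color;
    // step 2: transparent pixel, look a bit towards the top left
    float2 offset = float2(0.01, 0.01);            // texture space (0..1), not pixels
    float4 neighbour = tex2D(input, uv - offset);
    // only cast a shadow for saturated (series-line) pixels, not grey grid lines
    float maxCh = max(neighbour.r, max(neighbour.g, neighbour.b));
    float minCh = min(neighbour.r, min(neighbour.g, neighbour.b));
    if (neighbour.a > 0.01 && (maxCh - minCh) > 0.1)
        return float4(0, 0, 0, 0.3 * neighbour.a); // soft black shadow
    return color;                                   // stay transparent
}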
I strongly suggest playing around with Shazzam; it will allow you to test your shader, and it will generate the C# code for you.
Note that the uv coordinates are not in pixels but are scaled to the range 0.0 to 1.0.
Addition
A poor man's blur (anti-aliasing) could be obtained by sampling more pixels around the offset and calculating an average of the colors found that should cause a shadow. This will cause more pixels to receive a shadow.
To calculate the shadow color you could simply darken the existing color by multiplying it with a factor between 0.0 (black) and 1.0 (original color).
By using the average from the blur you can multiply the shadow color again, causing the blur to mix with the original color.
A more precise (and expensive) approach would be to translate the RGB values to HSL values and use 'official' darkening formulas to determine the shadow color.
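As a sketch of that blur-and-darken idea (again with made-up tap offsets and a made-up darkening factor, assuming the shadow-casting pixels are simply the non-transparent ones):
sampler2D input : register(s0);
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(input, uv);
    float2 offset = float2(0.01, 0.01);
    // average the alpha of a few taps around the offset: this is the shadow coverage
    float coverage = 0;
    coverage += tex2D(input, uv - offset).a;
    coverage += tex2D(input, uv - offset + float2(0.003, 0)).a;
    coverage += tex2D(input, uv - offset + float2(0, 0.003)).a;
    coverage += tex2D(input, uv - offset + float2(0.003, 0.003)).a;
    coverage /= 4;
    // darken the existing color: 1.0 keeps it, lower values pull it towards black
    float darken = lerp(1.0, 0.6, coverage);
    return float4(color.rgb * darken, color.a);
}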

Displaying RGBA images using Image class w/o pre-multiplied alpha

I'm trying to display the separate R, G, B and A channels of a texture based on user input. I'm using an Image class to display textures that have alpha channels. These textures are loaded into BitmapSource objects and have a Format of Bgra32. The problem is that when I set the Image's Source to the BitmapSource, if I display any combination of the R, G or B channels, I always get pixels that are pre-multiplied by the alpha value. I wrote a really simple shader, pre-compiled it, and used a ShaderEffect class assigned to the Image's Effect property in order to separate and display the individual channels, but apparently the shader is given the texture after WPF has pre-multiplied the alpha value onto it.
Here's the code snippet for setting the Image's Source:
BitmapSource b = MyClass.GetBitmapSource(filepath);
// just as a test, write out the bitmap to file to see if there's an alpha.
// see attached image1
BmpBitmapEncoder test = new BmpBitmapEncoder();
//test.Compression = TiffCompressOption.None;
FileStream stest = new FileStream(@"c:\temp\testbmp2.bmp", FileMode.Create);
test.Frames.Add(BitmapFrame.Create(b));
test.Save(stest);
stest.Close();
// effect is a class derived from ShaderEffect. The pixel shader associated with the
// effect displays the color channel of the texture loaded in the Image object
// depending on which member of the Point4D is set. In this case, we are showing
// the RGB channels but no alpha
effect.PixelMask = new System.Windows.Media.Media3D.Point4D(1.0f, 1.0f, 1.0f, 0.0f);
this.image1.Effect = effect;
this.image1.Source = b;
this.image1.UpdateLayout();
Here's the shader code (it's pretty simple, but I figured I'd include it just for completeness):
sampler2D inputImage : register(s0);
float4 channelMasks : register(c0);
float4 main (float2 uv : TEXCOORD) : COLOR0
{
float4 outCol = tex2D(inputImage, uv);
if (!any(channelMasks.rgb - float3(1, 0, 0)))
{
outCol.rgb = float3(outCol.r, outCol.r, outCol.r);
}
else if (!any(channelMasks.rgb - float3(0, 1, 0)))
{
outCol.rgb = float3(outCol.g, outCol.g, outCol.g);
}
else if (!any(channelMasks.rgb - float3(0, 0, 1)))
{
outCol.rgb = float3(outCol.b, outCol.b, outCol.b);
}
else
{
outCol *= channelMasks;
}
if (channelMasks.a == 1.0)
{
outCol.r = outCol.a;
outCol.g = outCol.a;
outCol.b = outCol.a;
}
outCol.a = 1;
return outCol;
}
Here's the output from the code above:
(Sorry, I don't have enough reputation points to post images or, apparently, more than 2 links.)
The file save to disk (C:\temp\testbmp2.bmp) in photoshop:
http://screencast.com/t/eeEr5kGgPukz
Image as displayed in my WPF application (using image mask in code snippet above):
http://screencast.com/t/zkK0U5I7P7
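For what it's worth, a minimal sketch of one common workaround is to divide by alpha inside the shader before applying the mask; this only approximately undoes WPF's pre-multiplication (fully transparent pixels carry no recoverable color, and low alpha values lose precision), so treat it as an assumption to verify rather than a guaranteed fix:
sampler2D inputImage : register(s0);
float4 channelMasks : register(c0);
float4 main(float2 uv : TEXCOORD) : COLOR0
{
    float4 outCol = tex2D(inputImage, uv);
    // un-premultiply: recover straight R/G/B where alpha is non-zero
    if (outCol.a > 0.0)
        outCol.rgb /= outCol.a;
    outCol *= channelMasks;   // then apply the channel mask as before
    outCol.a = 1;
    return outCol;
}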

HLSL hue change algorithm

I want to make a pixel shader for Silverlight which will help me to change the Hue/Saturation/Lightness using a slider.
* Hue slider has values in range: [-180, 180]
* Saturation slider has values in range: [-100, 100]
* Lightness slider has values in range: [-100, 100]
I managed to create a pixel shader which can manipulate the Saturation and Lightness values.
But I can't find any algorithm for changing the hue value.
Can anyone provide me an algorithm? Thank you.
Here is my HLSL code:
/// <summary>The hue offset.</summary>
/// <minValue>-180</minValue>
/// <maxValue>180</maxValue>
/// <defaultValue>0</defaultValue>
float Hue : register(C0);
/// <summary>The saturation offset.</summary>
/// <minValue>-100</minValue>
/// <maxValue>100</maxValue>
/// <defaultValue>0</defaultValue>
float Saturation : register(C1);
/// <summary>The lightness offset.</summary>
/// <minValue>-100</minValue>
/// <maxValue>100</maxValue>
/// <defaultValue>0</defaultValue>
float Lightness : register(C2);
sampler2D input : register(S0);
//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
float4 main(float2 uv : TEXCOORD) : COLOR
{
// some vars
float saturation = Saturation / 100 + 1;
float lightness = Lightness / 100;
float3 luminanceWeights = float3(0.299,0.587,0.114);
// input raw pixel
float4 srcPixel = tex2D(input, uv);
// Apply saturation
float luminance = dot(srcPixel, luminanceWeights);
float4 dstPixel = lerp(luminance, srcPixel, saturation);
// Apply lightness
dstPixel.rgb += lightness;
//retain the incoming alpha
dstPixel.a = srcPixel.a;
return dstPixel;
}
This uses a slightly different input domain, but it can be adapted easily:
float Hue : register(C0); // 0..360, default 0
float Saturation : register(C1); // 0..2, default 1
float Luminosity : register(C2); // -1..1, default 0
sampler2D input1 : register(S0);
static float3x3 matrixH =
{
0.8164966f, 0, 0.5352037f,
-0.4082483f, 0.70710677f, 1.0548190f,
-0.4082483f, -0.70710677f, 0.1420281f
};
static float3x3 matrixH2 =
{
0.84630f, -0.37844f, -0.37844f,
-0.37265f, 0.33446f, -1.07975f,
0.57735f, 0.57735f, 0.57735f
};
float4 main(float2 uv : TEXCOORD) : COLOR
{
float4 c = tex2D(input1, uv);
float3x3 rotateZ =
{
cos(radians(Hue)), sin(radians(Hue)), 0,
-sin(radians(Hue)), cos(radians(Hue)), 0,
0, 0, 1
};
matrixH = mul(matrixH, rotateZ);
matrixH = mul(matrixH, matrixH2);
float i = 1 - Saturation;
float3x3 matrixS =
{
i*0.3086f+Saturation, i*0.3086f, i*0.3086f,
i*0.6094f, i*0.6094f+Saturation, i*0.6094f,
i*0.0820f, i*0.0820f, i*0.0820f+Saturation
};
matrixH = mul(matrixH, matrixS);
float3 c1 = mul(c, matrixH);
c1 += Luminosity;
return float4(c1, c.a);
}
The conversions between colour spaces are available at EasyRGB; see this post at nokola.com for a Silverlight implementation of a hue shift. It may be possible to fit hue, saturation and brightness into one PS 2.0 shader if you take the approach mentioned here, but I haven't tried.
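For completeness, here's a hedged sketch of a hue shift done by converting to HSV and back (the compact conversion helpers are a commonly used formulation, not taken from the linked post, and the whole thing may be too heavy for the PS 2.0 instruction limit, in which case the matrix approach above is safer):
float Hue : register(C0);   // hue offset in degrees, e.g. -180..180
sampler2D input : register(S0);
float3 rgb2hsv(float3 c)
{
    float4 K = float4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    float4 p = c.g < c.b ? float4(c.bg, K.wz) : float4(c.gb, K.xy);
    float4 q = c.r < p.x ? float4(p.xyw, c.r) : float4(c.r, p.yzx);
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return float3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}
float3 hsv2rgb(float3 c)
{
    float4 K = float4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    float3 p = abs(frac(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * lerp(K.xxx, saturate(p - K.xxx), c.y);
}
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 src = tex2D(input, uv);
    float3 hsv = rgb2hsv(src.rgb);
    hsv.x = frac(hsv.x + Hue / 360.0);   // rotate hue and wrap back into 0..1
    return float4(hsv2rgb(hsv), src.a);
}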
Here is a great example of how to work with hue:
http://www.silverlightshow.net/news/Hue-Shift-in-Pixel-Shader-2.0-EasyPainter-Silverlight.aspx

WPF pixel shader with HDR (16-bit) input?

I'm trying to build an image viewer for 16-bit PNG images with WPF. My idea was to load the images with PngBitmapDecoder, then put them into an Image control, and control the brightness/contrast with a pixel shader.
However, I noticed that the input to the pixel shader seems to already be converted to 8 bit. Is that a known limitation of WPF, or did I make a mistake somewhere? (I checked this with a black/white gradient image that I created in Photoshop and which is verified as a 16-bit image.)
Here's the code to load the image (to make sure that I load the full 16-bit range; just writing Source="test.png" in the Image control loads it as 8-bit):
BitmapSource bitmap;
using (Stream s = File.OpenRead("test.png"))
{
PngBitmapDecoder decoder = new PngBitmapDecoder(s,BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);
bitmap = decoder.Frames[0];
}
if (bitmap.Format != PixelFormats.Rgba64)
MessageBox.Show("Pixel format " + bitmap.Format + " is not supported. ");
bitmap.Freeze();
image.Source = bitmap;
I created the pixel shader with the great Shazzam shader effect tool.
sampler2D implicitInput : register(s0);
float MinValue : register(c0);
float MaxValue : register(c1);
float4 main(float2 uv : TEXCOORD) : COLOR
{
float4 color = tex2D(implicitInput, uv);
float t = 1.0f / (MaxValue-MinValue);
float4 result;
result.r = (color.r - MinValue) * t;
result.g = (color.g - MinValue) * t;
result.b = (color.b - MinValue) * t;
result.a = color.a;
return result;
}
And integrated the shader into XAML like this:
<Image Name="image" Stretch="Uniform">
<Image.Effect>
<shaders:AutoGenShaderEffect x:Name="MinMaxShader" MinValue="0.0" MaxValue="1.0">
</shaders:AutoGenShaderEffect>
</Image.Effect>
</Image>
I just got a response from Microsoft. It's really a limitation in the WPF rendering system. I'll try D3DImage as a workaround.
