Smart DropShadowEffect - WPF

Is it possible to have DropShadowEffect ignore certain colors when rendering the shadow? To have a sort of masked (color-selective) shadow?
My problem is that the shadow is applied to the whole visual element (the graph). It looks like this: [screenshot]
And I want this: [screenshot]
Notice the grid lines without shadow (except the 0,0 ones). This can be achieved by having two graphs synchronized in zoom/offset: one without the shadow effect containing the grid, and another with the shadow containing the rest. But I am not very happy with that solution (I predict lots of problems with it in the future), so I'd rather modify DropShadowEffect somehow.
I can create and use a ShaderEffect, but I have no knowledge of how to program shaders to produce an actual shadow effect (if one can be produced by shaders at all).
Perhaps there is a much easier way of doing something with DropShadowEffect itself? Anyone?
Edit
I tried to make a shader effect:
sampler2D _input : register(s0);
float _width : register(C0);
float _height : register(C1);
float _depth : register(C2); // shadow depth

float4 main(float2 uv : TEXCOORD) : COLOR
{
    // get pixel size
    float2 pixel = { 1 / _width, 1 / _height };
    // find color at offset
    float2 offset = float2(uv.x - pixel.x * _depth, uv.y - pixel.y * _depth);
    float4 color = tex2D(_input, offset);
    // convert to gray?
    //float gray = dot(color, float4(0.1, 0.1, 0.1, 0));
    //color = float4(gray, gray, gray, 1);
    // saturate?
    //color = saturate(color);
    return tex2D(_input, uv) + color;
}
But I failed at everything.
Edit
Here is a screenshot of the graph appearance I like (for those who try to convince me not to do this): [screenshot]
Currently it is achieved by having a special Graph control with the following template:
<Border x:Name="PART_Border" BorderThickness="1" BorderBrush="Gray" CornerRadius="4" Background="White">
    <Grid>
        <Image x:Name="PART_ImageBack" Stretch="None"/>
        <Image x:Name="PART_ImageFront" Stretch="None">
            <Image.Effect>
                <DropShadowEffect Opacity="0.3"/>
            </Image.Effect>
        </Image>
    </Grid>
</Border>
Everything is rendered onto PART_ImageFront (with shadow), while the grid is rendered onto PART_ImageBack (without shadow). Performance-wise it is still good.

I have zero experience with pixel shaders, but here's my quick and dirty attempt at a shadow effect that ignores "uncolored" pixels:
sampler2D _input : register(s0);
float _width : register(C0);
float _height : register(C1);
float _depth : register(C2);
float _opacity : register(C3);

float3 rgb_to_hsv(float3 RGB) {
    float r = RGB.x;
    float g = RGB.y;
    float b = RGB.z;
    float minChannel = min(r, min(g, b));
    float maxChannel = max(r, max(g, b));
    float h = 0;
    float s = 0;
    float v = maxChannel;
    float delta = maxChannel - minChannel;
    if (delta != 0) {
        s = delta / v;
        if (r == v) h = (g - b) / delta;
        else if (g == v) h = 2 + (b - r) / delta;
        else if (b == v) h = 4 + (r - g) / delta;
    }
    return float3(h, s, v);
}

float4 main(float2 uv : TEXCOORD) : COLOR {
    float width = _width;     // 512;
    float height = _height;   // 512;
    float depth = _depth;     // 3;
    float opacity = _opacity; // 0.25;
    float2 pixel = { 1 / width, 1 / height };
    float2 offset = float2(uv.x - pixel.x * depth, uv.y - pixel.y * depth);
    float4 srcColor = tex2D(_input, offset);
    float3 srcHsv = rgb_to_hsv(srcColor.rgb);
    float4 dstColor = tex2D(_input, uv);
    // add shadow for colored pixels only
    // tweak saturation threshold as necessary
    if (srcHsv.y >= 0.1) {
        float gray = dot(srcColor, float4(0.1, 0.1, 0.1, 0.0));
        float4 multiplier = float4(gray, gray, gray, opacity * srcColor.a);
        return dstColor + (float4(0.1, 0.1, 0.1, 1.0) * multiplier);
    }
    return dstColor;
}
Here it is in action against a (totally legit) chart that I drew in Blend with the pencil tool:
The shader effect is applied on the root panel containing the axes, grid lines, and series lines, and it generates a shadow only for the series lines.
I don't think it's realistic to expect a shader to be able to apply a shadow to the axes and labels while ignoring the grid lines; the antialiasing on the text is bound to intersect the color/saturation range of the grid lines. I think applying the shadow to just the series lines is cleaner and more aesthetically pleasing anyway.
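If you don't want Shazzam to generate it for you, the managed wrapper for a shader like this is small. Here's a minimal sketch (the class name, the MaskedShadow.ps resource URI, and showing only one of the four constants are illustrative choices, not from the original answer; _height, _depth, and _opacity follow the same pattern on registers 1-3):
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;

public class MaskedShadowEffect : ShaderEffect
{
    // s0: the element's own rendering, bound to the _input sampler.
    public static readonly DependencyProperty InputProperty =
        ShaderEffect.RegisterPixelShaderSamplerProperty("Input", typeof(MaskedShadowEffect), 0);

    // C0: the _width constant.
    public static readonly DependencyProperty TextureWidthProperty =
        DependencyProperty.Register("TextureWidth", typeof(double), typeof(MaskedShadowEffect),
            new UIPropertyMetadata(512.0, PixelShaderConstantCallback(0)));

    public MaskedShadowEffect()
    {
        // The compiled .ps file is assumed to be an assembly resource.
        PixelShader = new PixelShader { UriSource = new Uri("pack://application:,,,/MaskedShadow.ps") };
        UpdateShaderValue(InputProperty);
        UpdateShaderValue(TextureWidthProperty);
    }

    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }

    public double TextureWidth
    {
        get { return (double)GetValue(TextureWidthProperty); }
        set { SetValue(TextureWidthProperty, value); }
    }
}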

sampler2D input : register(s0);

float4 main(float2 uv : TEXCOORD) : COLOR {
    float4 Color;
    Color = tex2D(input, uv.xy);
    return Color;
}
This is a basic 'do nothing' shader. The line with the tex2D call takes the color that would normally be plotted at the current location (and in this case simply returns it).
Instead of sampling uv.xy you could add an offset vector to uv.xy and return that color. This would shift the entire image in the direction of the offset vector.
You could combine these two (a sketch of the combined shader follows below):
1. Sample uv.xy; if it is set to a visible color, plot that color (this keeps all the lines visible at the right location).
2. If it is transparent, sample a bit to the top left; if that is set to a color you want to have a shadow, return the shadow color.
Step 2 can be changed into: if it is set to a color you do not want to have a shadow, return a transparent color.
The offset, the colors to test, and the shadow color could be parameters of the effect.
I strongly suggest playing around with Shazzam; it will let you test your shader and will generate the C# wrapper code for you.
Note that the uv coordinates are not in pixels but scaled from 0.0 to 1.0.
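Put together, those steps might look roughly like this (an untested sketch; the register layout, the parameter names, and the alpha test standing in for "a visible color" are all illustrative):
sampler2D input : register(s0);
float4 shadowColor : register(C0); // e.g. translucent gray
float2 offset : register(C1);      // in 0..1 uv units, not pixels

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 here = tex2D(input, uv);
    // step 1: a visible pixel keeps its own color
    if (here.a > 0.01)
        return here;
    // step 2: transparent pixel -- sample a bit to the top left; if something
    // that should cast a shadow is there, return the shadow color
    // (step 3 would add a color test here to exclude, e.g., the grid lines)
    float4 neighbor = tex2D(input, uv - offset);
    if (neighbor.a > 0.01)
        return shadowColor * neighbor.a;
    return here;
}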
Addition
A poor man's blur (anti-aliasing) could be obtained by sampling more pixels around the offset and averaging the colors found that should cause a shadow. This will cause more pixels to receive a shadow.
To calculate the shadow color you could simply darken the existing color by multiplying it with a factor between 0.0 (black) and 1.0 (original color).
By using the average from the blur you can scale the shadow color again, causing the blur to mix with the original color.
More precise (and expensive) would be to translate the RGB values to HSL values and use 'official' darkening formulas to determine the shadow color.
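A sketch of that addition, building on the shader above (the 3x3 neighborhood and the parameter names are again my own choices):
sampler2D input : register(s0);
float4 shadowColor : register(C0);
float2 offset : register(C1);
float2 texel : register(C2); // (1/width, 1/height)

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 here = tex2D(input, uv);
    if (here.a > 0.01)
        return here;
    // poor man's blur: average the coverage of a 3x3 neighborhood around the offset point
    float amount = 0;
    for (int i = -1; i <= 1; i++)
        for (int j = -1; j <= 1; j++)
            amount += tex2D(input, uv - offset + float2(i, j) * texel).a;
    amount /= 9;
    // scale the shadow by the average so its edges fade out
    return shadowColor * amount;
}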

Related

Why is a Metal shader gradient lighter as an SCNProgram applied to a SceneKit node than it is in an MTKView?

I have a gradient generated by a Metal fragment shader that I've applied to an SCNNode defined by a plane geometry.
It looks like this: [screenshot]
When I use the same shader applied to an MTKView rendered in an Xcode playground, the colors are darker. What is causing the colors to be lighter in the SceneKit version?
Here is the Metal shader and the GameViewController.
Shader:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct myPlaneNodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float4x4 modelViewProjectionTransform;
    float2x3 boundingBox;
};

typedef struct {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
    float2 texCoords [[ attribute(SCNVertexSemanticTexcoord0) ]];
} VertexInput;

struct SimpleVertexWithUV
{
    float4 position [[position]];
    float2 uv;
};

vertex SimpleVertexWithUV gradientVertex(VertexInput in [[ stage_in ]],
                                         constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                         constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    SimpleVertexWithUV vert;
    vert.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    int width = abs(scn_node.boundingBox[0].x) + abs(scn_node.boundingBox[1].x);
    int height = abs(scn_node.boundingBox[0].y) + abs(scn_node.boundingBox[1].y);
    float2 resolution = float2(width, height);
    vert.uv = vert.position.xy * 0.5 / resolution;
    vert.uv = 0.5 - vert.uv;
    return vert;
}

fragment float4 gradientFragment(SimpleVertexWithUV in [[stage_in]],
                                 constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    float4 fragColor;
    float3 color = mix(float3(1.0, 0.6, 0.1), float3(0.5, 0.8, 1.0), sqrt(1 - in.uv.y));
    fragColor = float4(color, 1);
    return (fragColor);
}
Game view controller:
import SceneKit
import QuartzCore

class GameViewController: NSViewController {
    @IBOutlet weak var gameView: GameView!

    override func awakeFromNib() {
        super.awakeFromNib()
        // create a new scene
        let scene = SCNScene()
        // create and add a camera to the scene
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        scene.rootNode.addChildNode(cameraNode)
        // place the camera
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
        // turn off default lighting
        self.gameView!.autoenablesDefaultLighting = false
        // set the scene to the view
        self.gameView!.scene = scene
        // allows the user to manipulate the camera
        self.gameView!.allowsCameraControl = true
        // show statistics such as fps and timing information
        self.gameView!.showsStatistics = true
        // configure the view
        self.gameView!.backgroundColor = NSColor.black
        var geometry: SCNGeometry
        geometry = SCNPlane(width: 10, height: 10)
        let geometryNode = SCNNode(geometry: geometry)
        let program = SCNProgram()
        program.fragmentFunctionName = "gradientFragment"
        program.vertexFunctionName = "gradientVertex"
        let gradientMaterial = SCNMaterial()
        gradientMaterial.program = program
        geometry.materials = [gradientMaterial]
        scene.rootNode.addChildNode(geometryNode)
    }
}
As explained in the Advances in SceneKit Rendering session from WWDC 2016, SceneKit now defaults to rendering in linear space which is required to have accurate results from lighting equations.
The difference you see comes from the fact that in the MetalKit case you are providing color components (red, green and blue values) in the sRGB color space, while in the SceneKit case you are providing the exact same components in the linear sRGB color space.
It's up to you to decide which result is the one you want. Either you want a gradient in linear space (that's what you want if you are interpolating some data) or in gamma space (that's what drawing apps use).
If you want a gradient in gamma space, you'll need to convert the color components to be linear because that's what SceneKit works with. Taking the conversion formulas from the Metal Shading Language Specification, here's a solution:
static float srgbToLinear(float c) {
    if (c <= 0.04045)
        return c / 12.92;
    else
        return powr((c + 0.055) / 1.055, 2.4);
}

fragment float4 gradientFragment(SimpleVertexWithUV in [[stage_in]],
                                 constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    float3 color = mix(float3(1.0, 0.6, 0.1), float3(0.5, 0.8, 1.0), sqrt(1 - in.uv.y));
    color.r = srgbToLinear(color.r);
    color.g = srgbToLinear(color.g);
    color.b = srgbToLinear(color.b);
    float4 fragColor = float4(color, 1);
    return (fragColor);
}
After learning the root cause of this problem, I did a bit more research on the topic and found another solution: gamma-space rendering can be forced application-wide by setting SCNDisableLinearSpaceRendering to TRUE in the application's Info.plist.
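For reference, that entry would look like this (assuming a standard XML Info.plist):
<key>SCNDisableLinearSpaceRendering</key>
<true/>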
I'm not sure, but it looks to me like your calculation of the size of the node is off, leading your .uv to be off, depending on the position of the node.
You have:
int width = abs(scn_node.boundingBox[0].x) + abs(scn_node.boundingBox[1].x);
int height = abs(scn_node.boundingBox[0].y) + abs(scn_node.boundingBox[1].y);
I would think that should be:
int width = abs(scn_node.boundingBox[0].x - scn_node.boundingBox[1].x);
int height = abs(scn_node.boundingBox[0].y - scn_node.boundingBox[1].y);
You want the absolute difference between the two extremes, not the sum. The sum gets larger as the node moves right and down, because it effectively includes the position.
All of that said, isn't the desired (u, v) already provided to you in in.texCoords?
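In other words, the vertex function could presumably shrink to something like this (an untested sketch based on the structs in the question):
vertex SimpleVertexWithUV gradientVertex(VertexInput in [[ stage_in ]],
                                         constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                         constant myPlaneNodeBuffer& scn_node [[buffer(1)]])
{
    SimpleVertexWithUV vert;
    vert.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    // the plane's texture coordinates already run 0..1 across the surface
    vert.uv = in.texCoords;
    return vert;
}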

Why is pango_cairo_show_layout drawing text at a slightly wrong location?

I have a Gtk app written in C running on Ubuntu Linux.
I'm confused about some behavior I'm seeing with the pango_cairo_show_layout function. I get the exact "ink" (not "logical") pixel size of a pango layout and draw the layout using pango_cairo_show_layout on a GtkDrawingArea widget. Right before drawing the layout, I draw a rectangle that should perfectly encompass the text I'm about to draw, but the text always shows up a little below the bottom edge of the rectangle.
Here is my full code:
// The drawing area widget's "expose-event" callback handler
gboolean OnTestWindowExposeEvent(GtkWidget *pWidget, GdkEventExpose *pEvent, gpointer data)
{
    // Note that this window is 365 x 449 pixels
    double dEntireWindowWidth = pEvent->area.width;   // This is 365.0
    double dEntireWindowHeight = pEvent->area.height; // This is 449.0
    // Create a cairo context with which to draw
    cairo_t *cr = gdk_cairo_create(pWidget->window);
    // Draw a red background
    cairo_set_source_rgb(cr, 1.0, 0.0, 0.0);
    cairo_rectangle(cr, 0.0, 0.0, dEntireWindowWidth, dEntireWindowHeight);
    cairo_fill(cr);
    // Calculate the padding inside the window which defines the text rectangle
    double dPadding = 0.05 * ((dEntireWindowWidth < dEntireWindowHeight) ? dEntireWindowWidth : dEntireWindowHeight);
    dPadding = round(dPadding); // This is 18.0
    // The size of the text box in which to draw text
    double dTextBoxSizeW = dEntireWindowWidth - (2.0 * dPadding);
    double dTextBoxSizeH = dEntireWindowHeight - (2.0 * dPadding);
    dTextBoxSizeW = round(dTextBoxSizeW); // This is 329.0
    dTextBoxSizeH = round(dTextBoxSizeH); // This is 413.0
    // Draw a black rectangle that defines the area in which text may be drawn
    cairo_set_line_width(cr, 1.0);
    cairo_set_antialias(cr, CAIRO_ANTIALIAS_NONE);
    cairo_set_source_rgb(cr, 0.0, 0.0, 0.0);
    cairo_rectangle(cr, dPadding, dPadding, dTextBoxSizeW, dTextBoxSizeH);
    cairo_stroke(cr);
    // The text to draw
    std::string szText("Erik");
    // The font name to use
    std::string szFontName("FreeSans");
    // The font size to use
    double dFontSize = 153.0;
    // The font description string
    char szFontDescription[64];
    memset(&(szFontDescription[0]), 0, sizeof(szFontDescription));
    snprintf(szFontDescription, sizeof(szFontDescription) - 1, "%s %.02f", szFontName.c_str(), dFontSize);
    // Create a font description
    PangoFontDescription *pFontDescription = pango_font_description_from_string(szFontDescription);
    // Set up the font description
    pango_font_description_set_weight(pFontDescription, PANGO_WEIGHT_NORMAL);
    pango_font_description_set_style(pFontDescription, PANGO_STYLE_NORMAL);
    pango_font_description_set_variant(pFontDescription, PANGO_VARIANT_NORMAL);
    pango_font_description_set_stretch(pFontDescription, PANGO_STRETCH_NORMAL);
    // Create a pango layout
    PangoLayout *pLayout = gtk_widget_create_pango_layout(pWidget, szText.c_str());
    // Set up the pango layout
    pango_layout_set_alignment(pLayout, PANGO_ALIGN_LEFT);
    pango_layout_set_width(pLayout, -1);
    pango_layout_set_font_description(pLayout, pFontDescription);
    pango_layout_set_auto_dir(pLayout, TRUE);
    // Get the "ink" pixel size of the layout
    PangoRectangle tRectangle;
    pango_layout_get_pixel_extents(pLayout, &tRectangle, NULL);
    double dRealTextSizeW = static_cast<double>(tRectangle.width);
    double dRealTextSizeH = static_cast<double>(tRectangle.height);
    // Calculate the top left corner coordinate at which to draw the text
    double dTextLocX = dPadding + ((dTextBoxSizeW - dRealTextSizeW) / 2.0);
    double dTextLocY = dPadding + ((dTextBoxSizeH - dRealTextSizeH) / 2.0);
    // Draw a blue rectangle which should perfectly encompass the text we're about to draw
    cairo_set_antialias(cr, CAIRO_ANTIALIAS_NONE);
    cairo_set_source_rgb(cr, 0.0, 0.0, 1.0);
    cairo_rectangle(cr, dTextLocX, dTextLocY, dRealTextSizeW, dRealTextSizeH);
    cairo_stroke(cr);
    // Set up the cairo context for drawing the text
    cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);
    cairo_set_antialias(cr, CAIRO_ANTIALIAS_BEST);
    // Move to the top left coordinate before drawing the text
    cairo_move_to(cr, dTextLocX, dTextLocY);
    // Draw the layout text
    pango_cairo_show_layout(cr, pLayout);
    // Clean up
    cairo_destroy(cr);
    g_object_unref(pLayout);
    pango_font_description_free(pFontDescription);
    return TRUE;
}
So, why is the text not being drawn exactly where I tell it to be drawn?
Thanks in advance for any help!
Look at the documentation for pango_layout_get_extents() (this is not mentioned in the docs for pango_layout_get_pixel_extents()):
Note that both extents may have non-zero x and y. You may want to use
those to offset where you render the layout.
https://developer.gnome.org/pango/stable/pango-Layout-Objects.html#pango-layout-get-extents
This is because the position that you render the layout at is (as far as I remember) the position of the base line (so something logically related to the text) instead of the top-left corner of the layout (which would be some "arbitrary thing" not related to the actual text).
In the case of your code, I would suggest adding tRectangle.x to dTextLocX (or subtracting it? I'm not completely sure about the sign). The same should be done with the y coordinate.
TL;DR: Your PangoRectangle has a non-zero x/y position that you need to handle.
Edit: I am not completely sure, but I think Pango handles this just like cairo. For cairo, there is a nice description at http://cairographics.org/tutorial/#L1understandingtext. The reference point is the point you give to cairo. You want to look at the description of bearing.
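Applied to the code in the question, the fix would be something like this (with the sign possibly flipped, as noted above):
/* offset by the ink rectangle's origin so the ink lands inside the blue box */
cairo_move_to(cr, dTextLocX - tRectangle.x, dTextLocY - tRectangle.y);
pango_cairo_show_layout(cr, pLayout);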

Displaying RGBA images using Image class w/o pre-multiplied alpha

I'm trying to display the separate R, G, B and A channels of a texture based on user input. I'm using an Image class to display textures that have alpha channels. These textures are loaded into BitmapSource objects and have a Format of Bgra32. The problem is that when I set the Image's Source to the BitmapSource and display any combination of the R, G or B channels, I always get pixels that are pre-multiplied by the alpha value. I wrote a really simple shader, pre-compiled it, and used a ShaderEffect class assigned to the Image's Effect property to separate and display the channels, but apparently the shader is handed the texture after WPF has already pre-multiplied the alpha value onto it.
Here's the code snippet for setting the Image's Source:
BitmapSource b = MyClass.GetBitmapSource(filepath);
// just as a test, write out the bitmap to file to see if there's an alpha.
// see attached image1
BmpBitmapEncoder test = new BmpBitmapEncoder();
//test.Compression = TiffCompressOption.None;
FileStream stest = new FileStream(@"c:\temp\testbmp2.bmp", FileMode.Create);
test.Frames.Add(BitmapFrame.Create(b));
test.Save(stest);
stest.Close();
// effect is a class derived from ShaderEffect. The pixel shader associated with the
// effect displays the color channel of the texture loaded in the Image object
// depending on which member of the Point4D is set. In this case, we are showing
// the RGB channels but no alpha
effect.PixelMask = new System.Windows.Media.Media3D.Point4D(1.0f, 1.0f, 1.0f, 0.0f);
this.image1.Effect = effect;
this.image1.Source = b;
this.image1.UpdateLayout();
Here's the shader code (it's pretty simple, but I figured I'd include it just for completeness):
sampler2D inputImage : register(s0);
float4 channelMasks : register(c0);

float4 main(float2 uv : TEXCOORD) : COLOR0
{
    float4 outCol = tex2D(inputImage, uv);
    if (!any(channelMasks.rgb - float3(1, 0, 0)))
    {
        outCol.rgb = float3(outCol.r, outCol.r, outCol.r);
    }
    else if (!any(channelMasks.rgb - float3(0, 1, 0)))
    {
        outCol.rgb = float3(outCol.g, outCol.g, outCol.g);
    }
    else if (!any(channelMasks.rgb - float3(0, 0, 1)))
    {
        outCol.rgb = float3(outCol.b, outCol.b, outCol.b);
    }
    else
    {
        outCol *= channelMasks;
    }
    if (channelMasks.a == 1.0)
    {
        outCol.r = outCol.a;
        outCol.g = outCol.a;
        outCol.b = outCol.a;
    }
    outCol.a = 1;
    return outCol;
}
Here's the output from the code above:
(sorry, I don't have enough reputation points to post images or, apparently, more than 2 links)
The file save to disk (C:\temp\testbmp2.bmp) in photoshop:
http://screencast.com/t/eeEr5kGgPukz
Image as displayed in my WPF application (using image mask in code snippet above):
http://screencast.com/t/zkK0U5I7P7
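One common workaround (not from the original post, so treat it as an assumption to verify): since WPF hands the shader pre-multiplied pixels, divide the color channels by alpha at the top of the shader to recover the straight-alpha values wherever alpha is non-zero. A minimal sketch:
sampler2D inputImage : register(s0);
float4 channelMasks : register(c0);

float4 main(float2 uv : TEXCOORD) : COLOR0
{
    float4 outCol = tex2D(inputImage, uv);
    // undo WPF's pre-multiplication before inspecting the channels (guard against alpha == 0)
    if (outCol.a > 0.0)
        outCol.rgb /= outCol.a;
    // ...channel selection as in the shader above...
    outCol.a = 1;
    return outCol;
}
Fully transparent pixels carry no recoverable color information, so they stay black regardless.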

HLSL Shader to Subtract Background Image

I am trying to get an HLSL pixel shader for Silverlight to subtract the background image from a video image. Can anyone suggest a more sophisticated algorithm than the one I am using, because my algorithm isn't doing it correctly?
float Tolerance : register(C1);
SamplerState ImageSampler : register(S0);
SamplerState BackgroundSampler : register(S1);

struct VS_INPUT
{
    float4 Position : POSITION;
    float4 Diffuse : COLOR0;
    float2 UV0 : TEXCOORD0;
    float2 UV1 : TEXCOORD1;
};

struct VS_OUTPUT
{
    float4 Position : POSITION;
    float4 Color : COLOR0;
    float2 UV : TEXCOORD0;
};

float4 PS(VS_OUTPUT input) : SV_Target
{
    float4 color = tex2D(ImageSampler, input.UV);
    float4 background = tex2D(BackgroundSampler, input.UV);
    if (abs(background.r - color.r) <= Tolerance &&
        abs(background.g - color.g) <= Tolerance &&
        abs(background.b - color.b) <= Tolerance)
    {
        color.rgba = 0;
    }
    return color;
}
To see an example of this, you need a computer with a webcam:
Go to the page http://xmldocs.net/alphavideo/background.html
Press [Start Recording].
Move your body out of the scene and press [Capture Background].
Then move your body back into the scene and use the slider to adjust the Tolerance value passed to the shader.
EDIT
A single pixel isn't useful for such a task because of noise, so the essence of the algorithm should be to measure similarity between pixel blocks. Recipe pseudo-code (based on a correlation measurement):
Divide image into N x M grid
For each (N, M) cell in grid:
    correlation = correlation_between(signal_pixels_of(N, M),
                                      background_pixels_of(N, M));
    if (correlation > threshold)
        show_background_cell(N, M)
    else
        show_signal_cell(N, M)
This is sequential pseudo-code, but it can be converted to an HLSL shader: each pixel determines which block it belongs to, measures the correlation between the corresponding blocks, and based on that correlation shows or hides itself.
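A rough HLSL sketch of that recipe (illustrative only: a 3x3 block around the current pixel stands in for the grid cell, luminance is used as the signal, and the unrolled loops almost certainly need ps_3_0, which Silverlight's ps_2_0-only pipeline may not allow):
sampler2D ImageSampler : register(s0);
sampler2D BackgroundSampler : register(s1);
float Threshold : register(C0);  // e.g. 0.9
float2 TexelSize : register(C1); // (1/width, 1/height)

float4 main(float2 uv : TEXCOORD) : COLOR
{
    // accumulate correlation statistics over a 3x3 block
    float n = 9.0;
    float sumX = 0, sumY = 0, sumXX = 0, sumYY = 0, sumXY = 0;
    for (int i = -1; i <= 1; i++)
    {
        for (int j = -1; j <= 1; j++)
        {
            float2 p = uv + float2(i, j) * TexelSize;
            float x = dot(tex2D(ImageSampler, p).rgb, float3(0.299, 0.587, 0.114));
            float y = dot(tex2D(BackgroundSampler, p).rgb, float3(0.299, 0.587, 0.114));
            sumX += x; sumY += y;
            sumXX += x * x; sumYY += y * y; sumXY += x * y;
        }
    }
    // Pearson correlation between the signal block and the background block
    float cov  = sumXY - sumX * sumY / n;
    float varX = sumXX - sumX * sumX / n;
    float varY = sumYY - sumY * sumY / n;
    float denom = max(sqrt(max(varX * varY, 0.0)), 1e-5);
    float correlation = cov / denom;
    // blocks that track the background become transparent; the rest keep the signal
    // (flat, textureless blocks have near-zero variance and need extra handling)
    if (correlation > Threshold)
        return float4(0, 0, 0, 0);
    return tex2D(ImageSampler, uv);
}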
Try this approach. Good luck!

WPF pixel shader with HDR (16-bit) input?

I'm trying to build an image viewer for 16-bit PNG images with WPF. My idea was to load the images with PngBitmapDecoder, then put them into an Image control, and control the brightness/contrast with a pixel shader.
However, I noticed that the input to the pixel shader seems to be converted to 8 bit already. Is that a known limitation of WPF, or did I make a mistake somewhere? (I checked with a black/white gradient image that I created in Photoshop and verified to be a 16-bit image.)
Here's the code to load the image (needed to make sure I load the full 16-bit range; just writing Source="test.png" in the Image control loads it as 8-bit):
BitmapSource bitmap;
using (Stream s = File.OpenRead("test.png"))
{
    PngBitmapDecoder decoder = new PngBitmapDecoder(s, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);
    bitmap = decoder.Frames[0];
}
if (bitmap.Format != PixelFormats.Rgba64)
    MessageBox.Show("Pixel format " + bitmap.Format + " is not supported. ");
bitmap.Freeze();
image.Source = bitmap;
I created the pixel shader with the great Shazzam shader effect tool.
sampler2D implicitInput : register(s0);
float MinValue : register(c0);
float MaxValue : register(c1);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(implicitInput, uv);
    float t = 1.0f / (MaxValue - MinValue);
    float4 result;
    result.r = (color.r - MinValue) * t;
    result.g = (color.g - MinValue) * t;
    result.b = (color.b - MinValue) * t;
    result.a = color.a;
    return result;
}
And integrated the shader into XAML like this:
<Image Name="image" Stretch="Uniform">
<Image.Effect>
<shaders:AutoGenShaderEffect x:Name="MinMaxShader" Minvalue="0.0" Maxvalue="1.0>
</shaders:AutoGenShaderEffect>
</Image.Effect>
</Image>
I just got a response from Microsoft. It's really a limitation in the WPF rendering system. I'll try D3DImage as a workaround.
