I'd like to color some icons in a WPF application using a pixel shader. However, when I go to download the DirectX SDK to get fxc, I see that it's deprecated -- instead I find links to DirectX 11 and the newer "Effects" system in WPF, which seems to be deprecated as well.
I'm wondering: what is the current practice to get a pixel shader into WPF? For context (though this shouldn't matter technically), I'm using a Prism/Unity-based MVVM architecture, so I'd like to eventually handle these colors through XAML binding.
I see a lot of people using SlimDX, but I'd really like to avoid introducing yet another library dependency into my application.
The DirectX SDK was merged into the Windows SDK back when Windows 8 was released [source], which is where you can find the newer versions of FXC.
It took me a few hours to find a solution, so I wanted to share it:
Visual Studio:
Add the batch script below to the "Pre-build event command line" field (right-click the project in the Solution Explorer > Properties > Build Events) and adapt the path to your installed Windows SDK version.
echo COMPILING SHADERS:
set Compiler=C:\Program Files (x86)\Windows Kits\10\bin\10.0.18362.0\x86\fxc.exe
set OutputFile=$(ProjectDir)Resources\SearchColorFilter.ps
set InputFile=$(ProjectDir)Resources\SearchColorFilter.fx
"%Compiler%" /O0 /Fc /Zi /T ps_2_0 /Fo "%OutputFile%" "%InputFile%"
Here is an example of a pixel shader. It replaces magenta-like colors with white, keeps yellow, and greys out the rest.
SearchColorFilter.fx:
sampler2D implicitInputSampler : register(S0);
float filterControl : register(C0);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(implicitInputSampler, uv);
    if (filterControl == 1)
    {
        // turn the magenta tint into white, Magenta: float4(1,0,1,1)
        if (color.r > 0.35 && color.g < 0.2 && color.b > 0.35) return float4(color.r, color.r * 0.5 + color.b * 0.5, color.b, 1); // MAGENTA --> WHITE
        // keep yellow, Yellow: float4(1,1,0,1)
        if (color.r > 0.35 && color.g > 0.35 && color.b < 0.2) return color; // YELLOW --> YELLOW
        // grey out the rest
        return float4(color.r * 0.5, color.g * 0.5, color.b * 0.5, color.a); // OTHER --> HALF VALUE (= grey out)
    }
    return color;
}
Don't forget to:
add the created .ps file to your project via the Solution Explorer and set its Build Action to "Resource"
save the .fx file with the correct encoding: "US-ASCII - Codepage 20127"
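For reference, the "Resource" build action corresponds to an entry like this in a classic (non-SDK-style) .csproj, assuming the Resources folder used above:
<ItemGroup>
  <Resource Include="Resources\SearchColorFilter.ps" />
</ItemGroup>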
BONUS:
To apply the shader to a UIElement, create a class derived from ShaderEffect.
SearchColorFilter.cs:
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;
public class SearchColorFilter : ShaderEffect
{
    // The compiled shader is loaded once and shared by all instances.
    private static readonly PixelShader _pixelShader = new PixelShader();

    static SearchColorFilter()
    {
        _pixelShader.UriSource = MakePackUri("Resources/SearchColorFilter.ps");
    }

    public SearchColorFilter()
    {
        this.PixelShader = _pixelShader;
        UpdateShaderValue(InputProperty);
        UpdateShaderValue(FilterControlProperty);
    }

    // Input is the element's rendered content, bound to sampler register S0.
    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }

    public static readonly DependencyProperty InputProperty =
        ShaderEffect.RegisterPixelShaderSamplerProperty("Input", typeof(SearchColorFilter), 0);

    // FilterControl is pushed to shader constant register C0 via PixelShaderConstantCallback(0).
    public double FilterControl
    {
        get { return (double)GetValue(FilterControlProperty); }
        set { SetValue(FilterControlProperty, value); }
    }

    public static readonly DependencyProperty FilterControlProperty =
        DependencyProperty.Register("FilterControl", typeof(double), typeof(SearchColorFilter),
            new UIPropertyMetadata(0.0d, PixelShaderConstantCallback(0)));

    public static System.Uri MakePackUri(string relativeFile)
    {
        System.Reflection.Assembly a = typeof(SearchColorFilter).Assembly;
        string assemblyShortName = a.ToString().Split(',')[0];
        string uriString = "pack://application:,,,/" + assemblyShortName + ";component/" + relativeFile;
        return new System.Uri(uriString);
    }
}
Now you can assign the shader to any UIElement using
SearchColorFilter recolorShader = new SearchColorFilter();
recolorShader.FilterControl = 1;
MyUIElement.Effect = recolorShader;
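And since the question mentions MVVM/XAML: FilterControl is a regular dependency property, so it can also be set or bound in XAML. A sketch, assuming an xmlns:local mapped to the namespace containing SearchColorFilter and a hypothetical view-model property named SearchFilterControl:
<Image Source="Resources/SomeIcon.png">
  <Image.Effect>
    <local:SearchColorFilter FilterControl="{Binding SearchFilterControl}" />
  </Image.Effect>
</Image>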
Related
I'm trying to display the separate R, G, B and A channels of a texture based on user input. I'm using an Image class to display textures that have alpha channels. These textures are loaded into BitmapSource objects and have a Format of Bgra32.
The problem is that when I set the Image's Source to the BitmapSource, if I display any combination of the R, G or B channels, I always get pixels that are pre-multiplied by the alpha value. I wrote a really simple shader, pre-compiled it, and used a ShaderEffect class assigned to the Image's Effect property in order to separate and display the channels, but apparently the shader is given the texture after WPF has pre-multiplied the alpha value onto the texture.
Here's the code snippet for setting the Image's Source:
BitmapSource b = MyClass.GetBitmapSource(filepath);
// just as a test, write out the bitmap to file to see if there's an alpha.
// see attached image1
BmpBitmapEncoder test = new BmpBitmapEncoder();
//test.Compression = TiffCompressOption.None;
FileStream stest = new FileStream(@"c:\temp\testbmp2.bmp", FileMode.Create);
test.Frames.Add(BitmapFrame.Create(b));
test.Save(stest);
stest.Close();
// effect is a class derived from ShaderEffect. The pixel shader associated with the
// effect displays the color channel of the texture loaded in the Image object
// depending on which member of the Point4D is set. In this case, we are showing
// the RGB channels but no alpha
effect.PixelMask = new System.Windows.Media.Media3D.Point4D(1.0f, 1.0f, 1.0f, 0.0f);
this.image1.Effect = effect;
this.image1.Source = b;
this.image1.UpdateLayout();
Here's the shader code (it's pretty simple, but I figured I'd include it just for completeness):
sampler2D inputImage : register(s0);
float4 channelMasks : register(c0);
float4 main (float2 uv : TEXCOORD) : COLOR0
{
float4 outCol = tex2D(inputImage, uv);
if (!any(channelMasks.rgb - float3(1, 0, 0)))
{
outCol.rgb = float3(outCol.r, outCol.r, outCol.r);
}
else if (!any(channelMasks.rgb - float3(0, 1, 0)))
{
outCol.rgb = float3(outCol.g, outCol.g, outCol.g);
}
else if (!any(channelMasks.rgb - float3(0, 0, 1)))
{
outCol.rgb = float3(outCol.b, outCol.b, outCol.b);
}
else
{
outCol *= channelMasks;
}
if (channelMasks.a == 1.0)
{
outCol.r = outCol.a;
outCol.g = outCol.a;
outCol.b = outCol.a;
}
outCol.a = 1;
return outCol;
}
Here's the output from the code above:
(sorry, I don't have enough reputation points to post images or, apparently, more than 2 links)
The file saved to disk (C:\temp\testbmp2.bmp), opened in Photoshop:
http://screencast.com/t/eeEr5kGgPukz
Image as displayed in my WPF application (using image mask in code snippet above):
http://screencast.com/t/zkK0U5I7P7
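In case it helps anyone hitting the same thing: since the sampler receives premultiplied colors, one workaround is to divide the color channels by alpha inside the shader to recover the straight values before masking. A rough sketch (not the original effect):
sampler2D inputImage : register(s0);
float4 main(float2 uv : TEXCOORD) : COLOR0
{
    float4 col = tex2D(inputImage, uv);
    // WPF hands the effect premultiplied colors; undo that where alpha is non-zero.
    if (col.a > 0)
        col.rgb /= col.a;
    return float4(col.rgb, 1);
}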
I've got a fairly straightforward pixel shader—set alpha channel to zero and return.
sampler2D tex : register(s0);
float4 PS(float2 uv : TEXCOORD) : COLOR
{
float4 color = tex2D(tex, uv);
color.a = 0;
return color;
}
I'd assume this would make the image it's applied to completely invisible. However, that's not what appears to be happening. Instead, the resulting image becomes invisible over a white background, but over a black background it's unchanged. It appears that this shader is somehow causing an "add" function to be called between the foreground and the background.
For example, the following code loads a foreground and background image, applies the above shader effect to the foreground, renders them to a bitmap, and writes the bitmap to file.
public sealed partial class MainWindow
{
public MainWindow()
{
InitializeComponent();
}
void Button_Click(object sender, RoutedEventArgs e)
{
const int width = 1024;
const int height = 768;
var sz = new Size(width, height);
var background = new Image { Source = new BitmapImage(new Uri(@"c:\background.jpg")) };
background.Measure(sz);
background.Arrange(new Rect(sz));
var foreground = new Image { Source = new BitmapImage(new Uri(@"c:\foreground.jpg")), Effect = new Alpha() };
foreground.Measure(sz);
foreground.Arrange(new Rect(sz));
var target = new RenderTargetBitmap(width, height, 96d, 96d, PixelFormats.Default);
target.Render(background);
target.Render(foreground);
var jpg = new JpegBitmapEncoder();
jpg.Frames.Add(BitmapFrame.Create(target));
using (var fileStream = File.OpenWrite(@"c:\output.jpg"))
{
jpg.Save(fileStream);
}
}
}
// Standard ShaderEffect stuff here, nothing exciting.
public sealed class Alpha : ShaderEffect
{
static readonly PixelShader Shader = new PixelShader{UriSource = new Uri("pack://application:,,,/Alpha.ps", UriKind.RelativeOrAbsolute)};
public static readonly DependencyProperty InputProperty = RegisterPixelShaderSamplerProperty("Input", typeof(Alpha), 0);
public Alpha()
{
PixelShader = Shader;
UpdateShaderValue(InputProperty);
}
public Brush Input
{
get { return (Brush)GetValue(InputProperty); }
set { SetValue(InputProperty, value); }
}
}
This produces the following when applied to two of the Win7 sample pictures:
This is the same behavior I see on the screen when I apply the effect to one Image in XAML, with another Image or anything else behind it.
Note the image is the same if foreground and background are reversed, so if it's not "add", it's at least something commutative. I think it's "add".
Computers are usually right, so I assume this is user error, but why is setting alpha to zero not giving me a transparent image? And how do I get a transparent image if so? (I obviously want to do something more complex with the shader eventually (specifically greenscreen), but to get that to work I have to get this shader to work first, so don't just say "Set the Opacity property").
Gah, Stack Overflow is better than Google. The top "related question" had the answer: Handling alpha channel in WPF pixel shader effect
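In short: WPF expects the shader to return premultiplied alpha, so zeroing only the alpha channel leaves the RGB values to be composited additively onto the background. A minimal sketch of a corrected version (scale the color channels by the new alpha as well):
sampler2D tex : register(s0);
float4 PS(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(tex, uv);
    float alpha = 0; // whatever alpha you end up computing (e.g. from a greenscreen test)
    return color * alpha; // premultiplied output: RGB is scaled by alpha too
}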
I want to make an app that has Silverlight menus but where the game part of the app is XNA. I am trying to get the Silverlight/XNA split working using the code from this example game and the method of doing XNA rendering in Silverlight here. Combining these two tutorials, I have source code that looks like this:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.GamerServices;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Input.Touch;
using Microsoft.Xna.Framework.Media;
using Microsoft.Phone.Controls;
using System.Windows.Navigation;
namespace FYP
{
public partial class GamePage : PhoneApplicationPage
{
GameTimer timer;
SpriteBatch spriteBatch;
Texture2D ballTexture;
IList<Ball> balls = new List<Ball>();
bool touching = false;
public GamePage(ContentManager contentManager)
{
InitializeComponent();
//base.Initialize();
// Create a timer for this page
timer = new GameTimer();
timer.UpdateInterval = TimeSpan.FromTicks(333333);
//timer.Update += OnUpdate;
//timer.Draw += OnDraw;
// TODO: use this.Content to load your game content here
ballTexture = contentManager.Load<Texture2D>("Ball");
}
protected override void OnNavigatedTo(NavigationEventArgs e)
{
base.OnNavigatedTo(e);
// Set the sharing mode of the graphics device to turn on XNA rendering
SharedGraphicsDeviceManager.Current.GraphicsDevice.SetSharingMode(true);
timer.Start();
// Create a new SpriteBatch, which can be used to draw textures.
spriteBatch = new SpriteBatch(SharedGraphicsDeviceManager.Current.GraphicsDevice);
}
protected override void OnNavigatedFrom(NavigationEventArgs e)
{
base.OnNavigatedFrom(e);
// Set the sharing mode of the graphics device to turn off XNA rendering
SharedGraphicsDeviceManager.Current.GraphicsDevice.SetSharingMode(false);
// Stop the timer
timer.Stop();
}
private void OnUpdate(GameTime gameTime)
{
// Allows the game to exit
if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
{
//this.Exit();
}
// TODO: Add your update logic here
//base.Update(gameTime);
HandleTouches();
UpdateBalls();
}
private void OnDraw(GameTime gameTime)
{
SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.White);
// TODO: Add your drawing code here
foreach (Ball ball in balls)
{
ball.Draw(spriteBatch);
}
//base.Draw(gameTime);
}
private void HandleTouches()
{
TouchCollection touches = TouchPanel.GetState();
if (!touching && touches.Count > 0)
{
touching = true;
Random random = new Random(DateTime.Now.Millisecond);
Color ballColor = new Color(random.Next(255), random.Next(255), random.Next(255));
Vector2 velocity = new Vector2((random.NextDouble() > .5 ? -1 : 1) * random.Next(9), (random.NextDouble() > .5 ? -1 : 1) * random.Next(9)) + Vector2.UnitX + Vector2.UnitY;
//Vector2 center = new Vector2((float)SharedGraphicsDeviceManager.Current.GraphicsDevice.Viewport.Width / 2, (float)SharedGraphicsDeviceManager.Current.GraphicsDevice.Height / 2);
Vector2 center = new Vector2((float)SharedGraphicsDeviceManager.Current.GraphicsDevice.Viewport.Width / 2, 200);
float radius = 25f * (float)random.NextDouble() + 5f;
balls.Add(new Ball(this, ballColor, ballTexture, center, velocity, radius));
}
else if (touches.Count == 0)
{
touching = false;
}
}
private void UpdateBalls()
{
foreach (Ball ball in balls)
{
ball.Update();
}
}
}
}
I do not understand how base works: I've had to comment out base.Initialize, base.Update, and base.Draw, yet base.OnNavigatedFrom works.
Also, should I be able to get my code to work in theory? I find it very complicated, and although I've read that combining XNA and Silverlight is possible, I can't find any source code where people have successfully combined the two in the same app.
These videos will help you understand XNA and the combined Silverlight/XNA platform:
http://channel9.msdn.com/Series/Mango-Jump-Start/Mango-Jump-Start-11a-XNA-for-Windows-Phone--Part-1
http://channel9.msdn.com/Series/Mango-Jump-Start/Mango-Jump-Start-11b-XNA-for-Windows-Phone--Part-2
Theoretically, XNA and the combined Silverlight/XNA platform are pretty much the same, with just a few differences here and there. You can even ask XNA to render a Silverlight control, which makes it easier to handle button events in your game.
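For example, rendering the Silverlight page content into the XNA scene is typically done with UIElementRenderer; a rough sketch (names and placement are illustrative, not taken from your code):
// Field on the page:
UIElementRenderer elementRenderer;
// Create it once the page has a size (e.g. in a LayoutUpdated handler):
elementRenderer = new UIElementRenderer(this, (int)ActualWidth, (int)ActualHeight);
// In OnDraw, after drawing the game content:
elementRenderer.Render(); // renders the Silverlight visual tree to a texture
spriteBatch.Begin();
spriteBatch.Draw(elementRenderer.Texture, Vector2.Zero, Microsoft.Xna.Framework.Color.White);
spriteBatch.End();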
Hope this helps
I'm trying to look into using the WPF WriteableBitmap class to allow my application to apply an opacity mask to an image.
Basically I have a blue rectangle as an image, and another 100% transparent green rectangle image over the top of the blue one.
When the user moves their mouse over the green (transparent) image, I want to apply the opacity mask (perhaps using a simple ellipse) so that it looks like a green glow is occurring.
I'm purposefully not doing this in XAML with standard WPF effects because I really need it to be super performant, and I will eventually swap out the ellipse with a more advanced blob...
Any thoughts??
Thanks!
I'm sorry, I don't quite understand your intentions. Maybe if I could see the image, I could answer correctly from the start, but here is my first, maybe-wrong answer.
If you say super-performant, you probably want to look at pixel shaders. They are processed by the GPU, supported by WPF in the form of a custom effect, and are easy to implement. You can also apply shaders to playing video, which is hard to do with WriteableBitmap.
To write a pixel shader, you need the FX Compiler (fxc.exe) from the DirectX SDK and the Shazzam tool, a WYSIWYG WPF shader compiler by Walt Ritscher.
When you get them both, go ahead and try the following HLSL code:
float X : register(C0); // Mouse cursor X position
float Y : register(C1); // Mouse cursor Y position
float4 Color : register(C2); // Mask color
float R : register(C3); // Sensitive circle radius.
sampler2D implicitInputSampler : register(S0);
float4 main(float2 uv : TEXCOORD) : COLOR
{
float4 finalColor = tex2D(implicitInputSampler, uv);
if ( (uv.x - X) * (uv.x - X) + (uv.y - Y) * (uv.y - Y) < R*R)
{
finalColor = Color; // Blend/Change/Mask it as you wish here.
}
return finalColor;
}
This gives you the following C# effect:
namespace Shazzam.Shaders {
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;
public class AutoGenShaderEffect : ShaderEffect {
public static DependencyProperty InputProperty = ShaderEffect.RegisterPixelShaderSamplerProperty("Input", typeof(AutoGenShaderEffect), 0);
public static DependencyProperty XProperty = DependencyProperty.Register("X", typeof(double), typeof(AutoGenShaderEffect), new System.Windows.UIPropertyMetadata(new double(), PixelShaderConstantCallback(0)));
public static DependencyProperty YProperty = DependencyProperty.Register("Y", typeof(double), typeof(AutoGenShaderEffect), new System.Windows.UIPropertyMetadata(new double(), PixelShaderConstantCallback(1)));
public static DependencyProperty ColorProperty = DependencyProperty.Register("Color", typeof(System.Windows.Media.Color), typeof(AutoGenShaderEffect), new System.Windows.UIPropertyMetadata(new System.Windows.Media.Color(), PixelShaderConstantCallback(2)));
public static DependencyProperty RProperty = DependencyProperty.Register("R", typeof(double), typeof(AutoGenShaderEffect), new System.Windows.UIPropertyMetadata(new double(), PixelShaderConstantCallback(3)));
public AutoGenShaderEffect(PixelShader shader) {
// Note: for your project you must decide how to use the generated ShaderEffect class (Choose A or B below).
// A: Comment out the following line if you are not passing in the shader and remove the shader parameter from the constructor
PixelShader = shader;
// B: Uncomment the following two lines - which load the *.ps file
// Uri u = new Uri(@"pack://application:,,,/glow.ps");
// PixelShader = new PixelShader() { UriSource = u };
// Must initialize each DependencyProperty that's affiliated with a shader register
// Ensures the shader initializes to the proper default value.
this.UpdateShaderValue(InputProperty);
this.UpdateShaderValue(XProperty);
this.UpdateShaderValue(YProperty);
this.UpdateShaderValue(ColorProperty);
this.UpdateShaderValue(RProperty);
}
public virtual System.Windows.Media.Brush Input {
get {
return ((System.Windows.Media.Brush)(GetValue(InputProperty)));
}
set {
SetValue(InputProperty, value);
}
}
public virtual double X {
get {
return ((double)(GetValue(XProperty)));
}
set {
SetValue(XProperty, value);
}
}
public virtual double Y {
get {
return ((double)(GetValue(YProperty)));
}
set {
SetValue(YProperty, value);
}
}
public virtual System.Windows.Media.Color Color {
get {
return ((System.Windows.Media.Color)(GetValue(ColorProperty)));
}
set {
SetValue(ColorProperty, value);
}
}
public virtual double R {
get {
return ((double)(GetValue(RProperty)));
}
set {
SetValue(RProperty, value);
}
}
}
}
Now you can track the mouse position and set the corresponding properties of your effect to trigger changes. One thing to note here: X and Y in the HLSL code range from 0 to 1, so you'll have to convert actual coordinates to relative values before passing them to the shader.
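For example, a minimal MouseMove handler could look like this (assuming the effect is applied to an Image I'll call maskedImage and is an instance of the generated AutoGenShaderEffect):
private void MaskedImage_MouseMove(object sender, System.Windows.Input.MouseEventArgs e)
{
    var effect = (AutoGenShaderEffect)maskedImage.Effect;
    var position = e.GetPosition(maskedImage);
    effect.X = position.X / maskedImage.ActualWidth;  // pixels -> 0..1
    effect.Y = position.Y / maskedImage.ActualHeight; // pixels -> 0..1
}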
Things to read more about pixel shaders and WPF:
Reference for HLSL on MSDN.
Writing custom GPU-based Effects for WPF by Greg Schechter.
Hope this helps :)
I'm trying to build an image viewer for 16-bit PNG images with WPF. My idea was to load the images with PngBitmapDecoder, then put them into an Image control, and control the brightness/contrast with a pixel shader.
However, I noticed that the input to the pixel shader seems to be converted to 8 bit already. Is that a known limitation of WPF, or did I make a mistake somewhere? (I checked this with a black/white gradient image that I created in Photoshop and verified to be a 16-bit image.)
Here's the code to load the image (to make sure that I load the full 16-bit range; just writing Source="test.png" in the Image control loads it as 8-bit):
BitmapSource bitmap;
using (Stream s = File.OpenRead("test.png"))
{
PngBitmapDecoder decoder = new PngBitmapDecoder(s,BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);
bitmap = decoder.Frames[0];
}
if (bitmap.Format != PixelFormats.Rgba64)
MessageBox.Show("Pixel format " + bitmap.Format + " is not supported. ");
bitmap.Freeze();
image.Source = bitmap;
I created the pixel shader with the great Shazzam shader effect tool.
sampler2D implicitInput : register(s0);
float MinValue : register(c0);
float MaxValue : register(c1);
float4 main(float2 uv : TEXCOORD) : COLOR
{
float4 color = tex2D(implicitInput, uv);
float t = 1.0f / (MaxValue-MinValue);
float4 result;
result.r = (color.r - MinValue) * t;
result.g = (color.g - MinValue) * t;
result.b = (color.b - MinValue) * t;
result.a = color.a;
return result;
}
And integrated the shader into XAML like this:
<Image Name="image" Stretch="Uniform">
<Image.Effect>
<shaders:AutoGenShaderEffect x:Name="MinMaxShader" MinValue="0.0" MaxValue="1.0">
</shaders:AutoGenShaderEffect>
</Image.Effect>
</Image>
I just got a response from Microsoft. It's really a limitation in the WPF rendering system. I'll try D3DImage as a workaround.