WPF: trying to use a pixel shader to disable certain channels

I'm trying to use a pixel shader to disable specific channels on an image. Unfortunately, I can't seem to get my shader to work, nor do I know how to do step-through debugging on this. I've tried PIX for Windows, but haven't had any success with that tool.
Here's my shader file, ChannelEffect.fx:
sampler2D implicitInputSampler : register(S0);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    // Get the source color
    float4 color = tex2D(implicitInputSampler, uv);

    color.g = 0.0f;
    color.b = 0.0f;

    // Return new color
    return color;
}
Right now I'm hard-coding the channels I'm disabling, just to test it. This sample should make only the red channel appear.
ChannelEffect channelEffect = GetChannelEffect(displayChannel);
image.Effect = channelEffect;
dc.DrawImage(image.Source, destRect);
The end result I get is that the image renders as normal. It's as if I'm not applying a shader at all. Any ideas?

I was doing something similar and found Shazzam, which is a fantastic program that not only lets you tweak and twiddle with shaders but will even generate code for you. I didn't use that code myself, but it gives a great example of how to use shaders with C# and XAML.
You can even import your own images to test with until you get your shader code exactly right.

Related

Modify Metal fragment shading based on vertex world position Y coordinate

I am trying to use a Metal fragment shader with SCNTechnique to modify the fragment color based on the vertex Y world position.
My understanding so far
SCNTechnique can be configured with a sequence of render passes. A render pass allows for injection of a vertex and a fragment shader. These shaders are written in Metal. The Metal Shading Language Specification describes what inputs/outputs are supported for these two.
The vertex shader is called for every vertex that's being rendered. We can pass additional information from the vertex shader to the fragment shader (like position in 3D space, see MSLS section 5.2).
The fragment shader is closest to a pixel, and might be called multiple times for a single "pixel" if there are multiple triangles that "qualify" for that pixel. (Usually) after fragment shading, a fragment might be discarded if it fails the depth or stencil test.
What I attempted
This is what I attempted. (I hope it makes clear where my understanding is lacking).
struct VertexOut {
    float4 position [[position]];
};

vertex VertexOut innerVertexShader(VertexIn in [[stage_in]]) {
    VertexOut out;
    out.position = in.position;
    return out;
}

fragment half4 innerFragmentShader(VertexOut in [[stage_in]],
                                   half4 color [[color(0)]]) {
    half4 output;
    output = color;           // test to see if getting rendered color works
    output.g = in.position.y; // test to see if getting y works
    return output;
}
These shaders are referenced inside an SCNTechnique dictionary.
let techniqueDictionary: [String: Any] = [
    "passes": [
        "innerPass": [
            "draw": "DRAW_NODE",
            "node": "inner",
            "metalVertexShader": "innerVertexShader",
            "metalFragmentShader": "innerFragmentShader"
        ]
    ],
    "sequence": ["innerPass"],
    "symbols": [:],
    "targets": [:],
]
// ...
let technique = SCNTechnique(dictionary: techniqueDictionary)
This does the following: the technique is instantiated correctly and attached to the scene (because it affects the rendering), but it appears not to apply the camera transform or node position transform to the vertices, and instead renders each node as if viewed from (0,0,1) at position (0,0,0). The colors are wrong. If I remove the shaders from the SCNTechnique, everything renders as I would expect.
How can I leverage regular SceneKit behavior (camera transform etc.), and only modify the color output based on the fragments' y world position? I'd expect that needs to happen on a fragment level, using the world position somehow obtained in the vertex shader. I have searched for things like "Metal basic vertex shader" and have come up with naught.
I have seen shaders like this but I'm convinced I should be able to rely on SceneKit rendering for stuff like lighting, PBR materials, camera transforms, etc. At this point I feel like whenever I search for some Metal topic, I end up on the same websites which haven't succeeded yet in taking my understanding to the next level. So, any new/additional resources are appreciated as well.
Background
For the past two months I have been working on my own game project, which uses SceneKit as the main graphics framework. I have turned to SCNTechnique and Metal shaders for custom effects. These last two in particular have given me solid headaches, largely because of the lack of sample code, documentation, and runtime feedback.
I have considered moving to Unity/Unreal or even cancelling this project altogether because of this. But because I'm stubborn and also because I really don't want to port my Swift code to C#/C++, I haven't given up on SceneKit yet.
Having spent the past couple of days investigating this topic, my understanding of vertex and fragment shading and how SceneKit tackles these things has developed significantly.
As @mnuages pointed out in a comment, for this use case shader modifiers are the way to go. They leverage default SceneKit shading (as the OP asked for) and allow for shader code injection.
Additional information
To compensate for some of the limitations of SceneKit documentation, I’ll elaborate a bit for other people looking into the subject.
For more information on how the shader modifiers tie into SceneKit default vertex/fragment shaders, see my answer to a related question or SceneKit's default shaders. The second link demonstrates the extent of SceneKit’s rendering logic that you get for free when leveraging shader modifiers instead of writing your own shader.
This page helped me build an understanding of the different coordinate-space transforms on the way from vertex to fragment (model space ➡️ world space ➡️ camera space ➡️ projection space).
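For completeness, here is a minimal sketch of the shader-modifier route (my own illustration, not code from the original answer). It assumes an existing SCNMaterial called material, and that _surface.position is the view-space position provided to the surface entry point, so it is converted back to world space with scn_frame.inverseViewTransform before its y component is used:
let surfaceModifier = """
#pragma body
// Assumption: _surface.position is view-space; convert back to world space.
float worldY = (scn_frame.inverseViewTransform * float4(_surface.position, 1.0)).y;
// Darken SceneKit's regular shading below y = 0; lighting, PBR and camera transforms stay untouched.
_surface.diffuse.rgb *= smoothstep(-1.0, 0.0, worldY);
"""
material.shaderModifiers = [.surface: surfaceModifier]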
Alternate approach (custom shader)
If you want to have a single pass with a fully customized shader, this is a simple example. It passes the world-space y position from the vertex shader to the fragment shader.
// Shaders.metal file in your Xcode project
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

typedef struct {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
} commonprofile_node;

struct VertexIn {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
};

struct VertexOut {
    float4 fragmentPosition [[position]];
    float height;
};

vertex VertexOut myVertex(
    VertexIn in [[stage_in]],
    constant SCNSceneBuffer& scn_frame [[buffer(0)]],
    constant commonprofile_node& scn_node [[buffer(1)]]
) {
    VertexOut out;
    float4 position = float4(in.position, 1.f);
    out.fragmentPosition = scn_frame.viewProjectionTransform * scn_node.modelTransform * position;
    // store world position for fragment shading
    out.height = (scn_node.modelTransform * position).y;
    return out;
}

fragment half4 myFragment(VertexOut in [[stage_in]]) {
    return half4(in.height);
}
let dictionary: [String: Any] = [
    "passes": [
        "y": [
            "draw": "DRAW_SCENE",
            "inputs": [:],
            "outputs": [
                "color": "COLOR"
            ],
            "metalVertexShader": "myVertex",
            "metalFragmentShader": "myFragment",
        ]
    ],
    "sequence": ["y"],
    "symbols": [:]
]

let technique = SCNTechnique(dictionary: dictionary)
scnView.technique = technique
You could combine this render pass with other passes (see SCNTechnique).
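As a hedged illustration of combining passes (my own sketch, not from the original answer): the "y" pass above could render into a named target, and a second DRAW_QUAD pass could then composite it into the final framebuffer. The yBuffer target name and the compositeVertex/compositeFragment shader functions are assumptions you would supply yourself:
let combined: [String: Any] = [
    "passes": [
        "y": [
            "draw": "DRAW_SCENE",
            "metalVertexShader": "myVertex",
            "metalFragmentShader": "myFragment",
            "outputs": ["color": "yBuffer"]             // render into a named target
        ],
        "composite": [
            "draw": "DRAW_QUAD",                        // full-screen quad pass
            "metalVertexShader": "compositeVertex",     // assumed quad shaders
            "metalFragmentShader": "compositeFragment",
            "inputs": ["yBuffer": "yBuffer"],           // exposed to the shader under this symbol
            "outputs": ["color": "COLOR"]               // write to the view's framebuffer
        ]
    ],
    "sequence": ["y", "composite"],
    "symbols": [:],
    "targets": ["yBuffer": ["type": "color"]]
]
scnView.technique = SCNTechnique(dictionary: combined)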

What is SceneKit doing between calls to didApplyConstraints and willRenderScene?

The SceneKit rendering loop is well documented here: https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here: https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However, neither of these documents explains what SceneKit does between calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal; the second uses the depth buffer from the first but draws just a subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, Gaussian-blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes but they both draw everything) the extra 5ms disappears and the overall frame rate is actually faster even though much more rendering is happening.
I'm writing Swift targeting macOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes you're invalidating parts of the scene graph which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
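A minimal sketch of that suggestion (my own illustration; the node and camera names are assumptions based on the code in the question): give the second pass a point of view whose camera only matches the glow nodes, so no opacity needs to change and the scene graph is never invalidated.
// The glow subset already carries bit 2 in the question's code.
glowNode.categoryBitMask = 2

// Clone the main camera node and restrict its camera to the glow bit;
// use this node as the pointOfView for the second (glow) render only.
let glowCameraNode = cameraNode.clone()
glowCameraNode.camera = cameraNode.camera?.copy() as? SCNCamera
glowCameraNode.camera?.categoryBitMask = 2

// The main pass keeps its default camera mask and still draws the whole scene.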

Best way in Linux to display a GIF87a image from C

What would be the best way, in Linux from GNU C (not C++), to display a GIF87a file on screen and redisplay it in the same location on the screen, so the user can observe changes that are made on the fly to the dataset? This is not an animated GIF.
In some old code (Fortran 77) with a C wrapper that takes an image displayed on the screen and writes it to a GIF file, there is a comment citing X Window Applications Programming, 2nd ed., Johnson & Reichard, which was used as a reference for writing the C code that displays image data on screen and writes a GIF87a file. The code was written around 1995; the on-screen display of the image no longer works (just a black window), but the creation of the GIF file still works. What I would like to do, from the existing C code on SLES 11.4 with the libraries that are available, is open the GIF file and display it on screen. The image (a contour plot) has a color bar whose min/max values the user sets to display the image to their liking, and it would be preferable to make it as easy and efficient as possible for the user to adjust those min/max values and then redraw the image (re-write the GIF, then redisplay it on screen in the same location). There are also a handful of other knobs the user can turn, such as the windowing of the data (Hamming or Hann), and it would be best if the user could quickly and easily run through about five or more ways of looking at the image before settling on the one considered correct, then use that final GIF in PowerPoint, Excel, etc.
Writing an X11 application is non-trivial. You can display a GIF (or any one of around 200 image formats) using ImageMagick, which is included in most Linux distros and is available for macOS. Windows doesn't count.
So you can create and manipulate images from the command line, or in C if you want. Let's create a GIF that is 1024x768 and full of random colours:
convert -size 1024x768 xc:blue +noise random -pointsize 72 -gravity center -annotate 0 "10" image.gif
Now we can display it, using ImageMagick's display program:
display image.gif &
Now we can get its X11 "window-id" with:
xprop -root
...
_NET_ACTIVE_WINDOW(WINDOW): window id # 0x600011
...
Now you can change the image, however you like with filters and blurs and morphology and thresholds and convolutions:
convert image.gif -threshold 80% -morphology erode diamond -blur 0x3 -convolve "3x3: -1,0,1, -2,0,2, -1,0,1" ... image.gif
And then tell the display program to redraw the window with:
display -window 0x600011 image.gif
Here is a little script that generates images with a new number in the middle of each frame and updates the screen:
for ((t=0;t<100;t++)) ; do
convert -size 640x480 xc:blue +noise random -pointsize 72 -fill white -gravity center -annotate 0 "$t" image.gif
display -window 0x600011 image.gif
done
Now all you need to do is find a little Python or Tcl/Tk library that draws some knobs and dials, reads their positions and changes the image accordingly and tells the screen to redraw.
As a result of the lack of enthusiasm for my other answer, I thought I'd have another attempt. I had a quick look at Processing, which is a very simple language, very similar to C but much easier to program.
Here is a screen shot of it loading a GIF and displaying a couple of twiddly knobs - one of which I attached to do a threshold on the image.
Here's the code - it is not the prettiest in the world because it is my first ever code in Processing but you should be able to see what it is doing and adapt to your needs:
import controlP5.*;

ControlP5 cp5;

int myColorBackground = color(0,0,0);
int knobValue = 100;
float threshold = 128;

Knob myKnobA;
Knob myKnobB;

PImage src, dst; // Declare a variable of type PImage

void setup() {
  size(800,900);
  // Make a new instance of a PImage by loading an image file
  src = loadImage("image.gif");
  // The destination image is created as a blank image the same size as the source.
  dst = createImage(src.width, src.height, RGB);
  smooth();
  noStroke();

  cp5 = new ControlP5(this);
  myKnobA = cp5.addKnob("some knob")
    .setRange(0,255)
    .setValue(50)
    .setPosition(130,650)
    .setRadius(100)
    .setDragDirection(Knob.VERTICAL)
    ;

  myKnobB = cp5.addKnob("threshold")
    .setRange(0,255)
    .setValue(220)
    .setPosition(460,650)
    .setRadius(100)
    .setNumberOfTickMarks(10)
    .setTickMarkLength(4)
    .snapToTickMarks(true)
    .setColorForeground(color(255))
    .setColorBackground(color(0, 160, 100))
    .setColorActive(color(255,255,0))
    .setDragDirection(Knob.HORIZONTAL)
    ;
}

void draw() {
  background(0);
  src.loadPixels();
  dst.loadPixels();
  for (int x = 0; x < src.width; x++) {
    for (int y = 0; y < src.height; y++ ) {
      int loc = x + y*src.width;
      // Test the brightness against the threshold
      if (brightness(src.pixels[loc]) > threshold) {
        dst.pixels[loc] = color(255); // White
      } else {
        dst.pixels[loc] = color(0);   // Black
      }
    }
  }
  // We changed the pixels in destination
  dst.updatePixels();
  // Display the destination
  image(dst,100,80);
}

void knob(int theValue) {
  threshold = color(theValue);
  println("a knob event. setting background to "+theValue);
}

void keyPressed() {
  switch(key) {
    case('1'): myKnobA.setValue(180); break;
    case('2'): myKnobB.setConstrained(false).hideTickMarks().snapToTickMarks(false); break;
    case('3'): myKnobA.shuffle(); myKnobB.shuffle(); break;
  }
}
Here are some links I used - image processing, P5 library of widgets and knobs.

SceneKit: repeat texture added through SCNShadable

I've added a uniform sampler2D uMySampler; through SCNShadable. I believe I'm not seeing the texture because it isn't set to repeat wrapping.
The sample code that I've found does it this way programmatically:
myMat?.diffuse.wrapS = SCNWrapMode.repeat
myMat?.diffuse.wrapT = SCNWrapMode.repeat
But how do I set wrapS on uMySampler?
As a fallback I think I could get away with doing fract(myTexCoord), but that might mess up mipmapping?
let myTexture = SCNMaterialProperty(contents: UIImage(named: "art.scnassets/myTexture.png"))
myTexture.wrapS = SCNWrapMode.repeat
This is the trick; I'm not sure I find it very intuitive.
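For completeness, a hedged sketch of the binding step that goes with it (my own addition, using the myMat material and uMySampler uniform name from the question): a sampler declared through SCNShadable is bound by key-value coding the SCNMaterialProperty onto the material under the uniform's name.
myTexture.wrapT = SCNWrapMode.repeat
// Bind the wrapped property to the shader-modifier uniform of the same name.
myMat?.setValue(myTexture, forKey: "uMySampler")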

Codename One rounded image from an internet source

I am trying to display a rounded image that I get straight from the Internet.
I used the code below to create a round mask, get the image from the Internet, then tried to either set the mask on the image or the label itself. None of these approaches worked. If I remove the mask, the image is displayed fine. If I keep the code to set the mask then all I see is an empty white circle.
I have the idea that if I apply the mask on the image itself, then it may not take effect because the image was not downloaded at the time the mask was applied.
But I don't seem to understand why calling setMask on the label is also not working.
// Create MASK
Image maskImage = Image.createImage(w, l);
Graphics g = maskImage.getGraphics();
g.setAntiAliased(true);
g.setColor(0x000000);
g.fillRect(0, 0, w, l);
g.setColor(0xffffff);
g.fillArc(0, 0, w, l, 0, 360);
Object mask = maskImage.createMask();

// GET IMAGE
com.cloudinary.Cloudinary cloudinary = new com.cloudinary.Cloudinary(ObjectUtils.asMap(
        "cloud_name", "REMOVED",
        "api_key", "REMOVED",
        "api_secret", "REMOVED"));
// Disable private CDN URLs as this doesn't seem to work with free accounts
cloudinary.config.privateCdn = false;

Image placeholder = Image.createImage(150, 150);
EncodedImage encImage = EncodedImage.createFromImage(placeholder, false);
Image img2 = cloudinary.url()
        .type("fetch") // Says we are fetching an image
        .format("jpg") // We want it to be a jpg
        .transformation(new Transformation()
                .radius("max").width(150).height(150).crop("thumb").gravity("faces"))
        .image(encImage, "http://upload.wikimedia.org/wikipedia/commons/4/46/Jennifer_Lawrence_at_the_83rd_Academy_Awards.jpg");

Label label = new Label(img2);
label.setMask(mask); // also tried to do img2.applyMask(mask); before passing img2
So I tried various things:
1) Removing the mask that was set through Cloudinary - that did not work.
2) Applied the mask to the placeholder & encoded image (as expected, these shouldn't affect the final version that is getting published).
3) This is what works! I am not sure if the issue is really with downloading the picture before or after applying the mask; time will tell down the road.
Label label = new Label();
img2.applyMask(mask); // If you remove this line, the image will no longer be displayed; I will only see a rounded white circle! I am not sure what this is doing; it might simply be stalling the process until the image is downloaded, or maybe somehow calling repaint or revalidate
label.setIcon(img2.applyMask(mask));
Here is what worked for me if anyone else is having similar issues:
//CREATE MASK
Image maskImage = Image.createImage(w, l);
Graphics g = maskImage.getGraphics();
g.setAntiAliased(true);
g.setColor(0x000000);
g.fillRect(0, 0, w, l);
g.setColor(0xffffff);
g.fillArc(0, 0, w, l, 0, 360);
Object mask = maskImage.createMask();

//CONNECT TO CLOUDINARY
com.cloudinary.Cloudinary cloudinary = new com.cloudinary.Cloudinary(ObjectUtils.asMap(
        "cloud_name", "REMOVED",
        "api_key", "REMOVED",
        "api_secret", "REMOVED"));
// Disable private CDN URLs as this doesn't seem to work with free accounts
cloudinary.config.privateCdn = false;

//CREATE IMAGE PLACEHOLDERS
Image placeholder = Image.createImage(w, l);
EncodedImage encImage = EncodedImage.createFromImage(placeholder, false);

//DOWNLOAD IMAGE
Image img2 = cloudinary.url()
        .type("fetch") // Says we are fetching an image
        .format("jpg") // We want it to be a jpg
        .transformation(new Transformation()
                .crop("thumb").gravity("faces"))
        .image(encImage, url);

// Add the image to a label and place it on the form.
//GetCircleImage(img2);
Label label = new Label();
img2.applyMask(mask); // If you remove this line, the image will no longer be displayed; I will only see a rounded white circle! I am not sure what this is doing; it might simply be stalling the process until the image is downloaded, or maybe somehow calling repaint or revalidate
label.setIcon(img2.applyMask(mask));
Shai, I seriously appreciate your time!! Thank you very much. Will have to dig more into it if it gives me any other problems later but it seems to consistently work for now.
The Cloudinary API returns a URLImage which doesn't work well with the Label.setMask() method because, technically, a URLImage is an animated image (it is a placeholder image until it finishes loading, and then "animates" to become the target image).
I have just released a new version of the cloudinary cn1lib which gives you a couple of options for working around this.
I have added two new image() methods. One takes an ImageAdapter parameter that you can use to apply the mask to the image itself, before setting it as the icon for the label; then you wouldn't use Label.setMask() at all.
See javadocs for this method here
The other method uses the new Async image loading APIs underneath to load the image asynchronously. The image you receive in the callback is a "real" image so you can use it with a mask normally.
See javadocs for this method here
We are looking at adding a soft warning to the Label.setMask() and setIcon() methods if you try to add an "animated" image and mask it so that it is more clear.
I think the masking code you set on the label might be conflicting with the masking code you get from Cloudinary.
