How do you access a previously shaded texture in a Pixel Shader? - wpf

In WPF, I want to use a pixel shader to modify a composite image i.e. a new image overlaid on top of a previously shaded image. The new image comes in as a largely transparent image except where there is data (think mathematical functions - sine wave, etc). Anyway this process needs to repeat pretty rapidly - compose the currently shaded texture with a new image and then shade the composite image. The problem is that I don't know how to access the previously shaded texture from within my shader.

Basically, you need to add a second sampler variable to your shader, then set that sampler to the texture you want to read before drawing the new one. In WPF, that means exposing the sampler as a dependency property created with ShaderEffect.RegisterPixelShaderSamplerProperty and binding a brush (for example, a VisualBrush of the previously rendered result) to it. The HLSL side looks something like this:
sampler2D NewImage : register(s0);        // implicit input: the new image being drawn
sampler2D PreviousTexture : register(s1); // the previously shaded texture
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 cur = tex2D(NewImage, uv);
    float4 prev = tex2D(PreviousTexture, uv);
    return cur + prev * (1 - cur.a); // "over" composite, assuming premultiplied alpha (WPF's convention)
}
As shown, you then sample each texture with a tex2D call.

Related

Rectangle to Texture in SDL2 C

Is it possible to create a Rectangle and somehow turn it into a Texture in SDL2 C?
You can easily load images into textures using the image library, but making a simple rectangle seems a lot more complicated.
It is generally not meaningful to create a texture in which all pixels are the same color, as that would be a waste of video memory.
If you want to render a single rectangle in a single color without an outline, it would be more efficient to do this directly using the function SDL_RenderFillRect.
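For instance, a minimal sketch of the direct approach, assuming a renderer already exists:
SDL_Rect rect = { 10, 10, 64, 64 };                // x, y, w, h
SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);  // opaque red
SDL_RenderFillRect(renderer, &rect);               // draws directly, no texture needed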
If you really want to create a texture for a single rectangle in a single color without an outline, then you can create an SDL_Surface with SDL_CreateRGBSurface, use SDL_FillRect on that SDL_Surface to set the color, and then use SDL_CreateTextureFromSurface to create an SDL_Texture from that SDL_Surface.
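A minimal sketch of that surface route, again assuming a renderer exists (error handling omitted):
SDL_Surface *surface = SDL_CreateRGBSurface(0, 64, 64, 32, 0, 0, 0, 0); // default masks
SDL_FillRect(surface, NULL, SDL_MapRGB(surface->format, 255, 0, 0));    // fill with red
SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
SDL_FreeSurface(surface); // the pixel data now lives in the texture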

Do DirectX Pixel Shaders Operate on Every Pixel of the Frame Like WPF Pixel Shaders?

From my own trial-and-error experience, it seems that DirectX pixel shaders only run for pixels/fragments that are within the bounds of some geometric primitive rendered by DirectX, and are not run for pixels of the frame that are simply the clear-color.
MSDN says:
Pixel shaders work in concert with vertex shaders; the output of a vertex shader provides the inputs for a pixel shader.
This stands in contrast to WPF pixel shaders, which are run for every pixel of the frame, because WPF doesn't render 3D primitives and therefore doesn't know or care what it means to be a geometric primitive pixel or clear-color pixel.
So for the following image, a DirectX pixel shader would only be run for the area in white, because it corresponds to a geometric primitive output by the vertex shader, but not for the black area, because that's the clear-color. A WPF pixel shader, on the other hand, would be run for every pixel of the frame, both white and black.
Is this understanding correct?
Your understanding is mostly correct - pixel shader invocations are triggered by drawing primitives (e.g. triangles). In fact, a pixel in the window may end up getting more than one pixel shader invocation, if for example a second triangle is drawn on top of the first. This is referred to as overdraw and is generally something to avoid, the most common method of avoidance being z-culling.
If you want to trigger a pixel shader for every pixel in the window, simply draw two triangles that make up a "full screen quad", i.e. coordinates (-1,-1) to (1,1). Behind the scenes, this is what WPF essentially does.
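For example, the clip-space coordinates for such a quad could be declared like this (the vertex shader would pass them through unchanged):
// Full-screen quad as two clip-space triangles; drawing these six vertices
// makes the pixel shader run once for every pixel of the render target.
const float fullscreen_quad[12] = {
    -1.0f, -1.0f,   1.0f, -1.0f,   -1.0f, 1.0f,  // lower-left triangle
    -1.0f,  1.0f,   1.0f, -1.0f,    1.0f, 1.0f,  // upper-right triangle
};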

SDL2 Texture Render target doesn't have alpha transparency

I've encountered a problem while programming in C with SDL2. I have rendered simple images of squares, transparent in the center, to a texture. But when I draw the texture they are rendered on, they are not see-through. I've tried changing the transparency of the rendered texture with SDL_SetTextureAlphaMod(), but it isn't changing anything. If I change the alpha on the textures being rendered (the squares), they get dimmer, but they still cover anything behind them. So I'm open to suggestions.
This is an image where I have lowered the alpha on the square textures: http://imgur.com/W8dNbBY
First off, you have two methods in SDL2 if you want to have a transparent image.
Method 1: (Static Method)
Use an image editing software and directly change the alpha value there, it will carry on to SDL2.
Method 2: (Dynamic Method)
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND); // put the texture in blend mode
Uint8 alpha = 128; // alter the alpha value here; you can make fade in/fade out effects, etc.
SDL_SetTextureAlphaMod(texture, alpha); // apply the alpha to the texture
SDL_RenderCopy(renderer, texture, NULL, &rect); // redraw the image with the fresh, new alpha
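Since the squares here are drawn into a render-target texture, the target texture itself also needs an alpha-capable format and a blend mode. A minimal sketch, with made-up sizes and assuming a renderer exists:
SDL_Texture *target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                        SDL_TEXTUREACCESS_TARGET, 640, 480);
SDL_SetTextureBlendMode(target, SDL_BLENDMODE_BLEND); // keep alpha when drawing the target itself
SDL_SetRenderTarget(renderer, target);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0); // clear to fully transparent
SDL_RenderClear(renderer);
// ... draw the squares into the texture here ...
SDL_SetRenderTarget(renderer, NULL); // back to the default render target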

Run OpenGL shader to modify existing texture / frame buffer

How do I use an OpenGL shader to modify an existing texture / frame buffer in place without using a second texture or frame buffer? Is this even possible?
At first:
It is technically possible and safe to read and write the same texture in a single pass using this extension - however, I wouldn't recommend it, especially for learners, since the extension is tightly limited and might not be supported on all hardware.
That being said:
So it's not possible to use the same texture as a sampler and frame buffer?
You can use the same texture as a framebuffer texture attachment and render to it and as a texture sampler to look up values in the shader, but not in the same pass. That means, if you have two textures you could read from A and write to B and afterwards switch textures and read from B and write to A. But never A->A or B->B (without the extension mentioned).
As a technical detail, a texture currently being used as a target can also be bound to a sampler shader variable at the same time, but you must not use it.
So let's say I want to blur just a small part of a texture. I have to run it through a shader to a second texture and then copy that texture back to the first texture / frame buffer?
Second texture yes. But for efficiency reasons do not copy the texture data back. Just delete the source texture and use the target texture you have rendered to in the future. If you have to do this often, keep the source texture as a render target for later use to increase performance. If you have to do it every frame just swap the textures every frame. The overhead is minimal.
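A minimal sketch of that ping-pong scheme in C, assuming two textures tex[0] and tex[1] are already attached to framebuffers fbo[0] and fbo[1], and draw_fullscreen_quad() is a hypothetical helper that runs the shader:
int src = 0, dst = 1;
for (int pass = 0; pass < num_passes; ++pass) {  // num_passes: however many passes you need
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]); // write into the destination texture
    glBindTexture(GL_TEXTURE_2D, tex[src]);      // read from the source texture
    draw_fullscreen_quad();                      // run the shader over every pixel
    int tmp = src; src = dst; dst = tmp;         // swap roles for the next pass
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);            // restore the default framebuffer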
It's tricky to make this work with code like the following, where the sampler and the framebuffer are bound to the same texture:
#version 330 core
// fragment shader; sampler and framebuffer are bound to the same texture
uniform sampler2D inputTex; // "input" is a reserved word in GLSL, so renamed
uniform bool cond;          // whatever condition selects between the two branches
out vec4 color;
void main() {
    ivec2 pos = ivec2(gl_FragCoord.xy);
    if (cond) {
        color = vec4(0.0);                    // overwrite this pixel
    } else {
        color = texelFetch(inputTex, pos, 0); // pass the existing value through
    }
}

Applying a pixel shader to a Viewport3D

I'm new to pixel shaders, and I'm trying to apply an underwater effect to my 3D scene. I can apply it to an image and animate it easily enough, but not to my Viewport3D. The application just hangs when I call BeginAnimation on the effect applied to the Viewport3D. Is this something that cannot be done in WPF?
After a little digging I learned that pixel shaders are only applied to two-dimensional types, like images. What I would need is a vertex shader, and WPF has none.
