Can pixel shaders be used when rendering to an offscreen surface? - wpf

I'm considering integrating some D3D code I have with WPF via the new D3DImage, as described here.
My question is this: do pixel shaders work on offscreen surfaces?

Rendering to an offscreen surface is generally less constrained than rendering directly to a back buffer. The only constraint that comes with using an offscreen surface with D3DImage is that it must be in a 32-bit RGB/ARGB format (depending on your platform). Other than that, everything the hardware has to offer is at your disposal.
In fact, plenty of shader effects take advantage of offscreen surfaces for multipass rendering or full-screen post-processing.
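For illustration, a minimal C++/Direct3D 9 sketch of that setup: a 32-bit ARGB offscreen render target that a pixel shader renders into. The device, shader, and 256x256 size are placeholders, and error handling is trimmed.

    #include <d3d9.h>

    // Sketch: create the 32-bit ARGB offscreen render target D3DImage expects
    // and render into it with a pixel shader bound.
    IDirect3DSurface9* RenderOffscreen(IDirect3DDevice9* device,
                                       IDirect3DPixelShader9* pixelShader)
    {
        IDirect3DSurface9* offscreen = nullptr;
        device->CreateRenderTarget(
            256, 256,                    // placeholder size
            D3DFMT_A8R8G8B8,             // 32-bit ARGB, as D3DImage requires
            D3DMULTISAMPLE_NONE, 0,
            FALSE,                       // not lockable
            &offscreen, nullptr);

        device->SetRenderTarget(0, offscreen);  // draw offscreen, not to the back buffer
        device->SetPixelShader(pixelShader);    // the shader applies here exactly as usual

        device->BeginScene();
        // ... draw geometry; the shader runs for every pixel of the offscreen surface ...
        device->EndScene();

        // The surface can now be handed to D3DImage.SetBackBuffer on the WPF side.
        return offscreen;
    }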

I don't know if there's anything special about it with WPF, but in general yes, pixel shaders work on offscreen surfaces.

For some effects, rendering to a separate surface is required: glass refraction in front of a shader-rendered scene, for example. Pixel shaders cannot read the current screen contents, so the view has to be rendered to a buffer first and then used as a texture in the refraction shader pass, so that the shader can take the background colour from a pixel other than the one being calculated.
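A rough C++/Direct3D 9 sketch of that two-pass pattern (names like refractionShader are placeholders, not anything WPF-specific):

    #include <d3d9.h>

    // Sketch of the two-pass pattern: render the scene to a texture, then let
    // the refraction shader sample it freely in a second pass.
    void RenderRefraction(IDirect3DDevice9* device,
                          IDirect3DPixelShader9* refractionShader)
    {
        // Remember the real back buffer so we can restore it for pass 2.
        IDirect3DSurface9* backBuffer = nullptr;
        device->GetRenderTarget(0, &backBuffer);

        // Pass 1: draw the background scene into a render-target texture.
        IDirect3DTexture9* sceneTex = nullptr;
        device->CreateTexture(1024, 1024, 1, D3DUSAGE_RENDERTARGET,
                              D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &sceneTex, nullptr);
        IDirect3DSurface9* sceneSurface = nullptr;
        sceneTex->GetSurfaceLevel(0, &sceneSurface);
        device->SetRenderTarget(0, sceneSurface);
        // ... draw the background scene ...

        // Pass 2: back to the real target; the shader samples sceneTex to read
        // "behind" pixels at any offset, which it could never do from the screen.
        device->SetRenderTarget(0, backBuffer);
        device->SetTexture(0, sceneTex);
        device->SetPixelShader(refractionShader);
        // ... draw the refracting glass geometry ...
    }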

Related

Is it technically possible to render a WPF XAML element to a Direct3D texture?

Let's say I have a simple WPF canvas where I draw a few buttons and shapes using XAML. I would like to render the canvas to a Direct3D texture so I can access the pixels from within the GPU.
RenderTargetBitmap lets me do software rendering, but this will be limiting in terms of performance, as I will have to manually copy the pixels to where I want them.
I also looked into using a custom shader effect on the canvas, but as far as I know it is impossible to write to a separate texture using Direct3D 9.
So is it at all possible? If so, how?
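For what it's worth, the manual copy path mentioned above would look roughly like this on the native side; a sketch assuming the RenderTargetBitmap pixels have already been marshalled into a BGRA byte buffer (pixels, width, height are hypothetical names):

    #include <d3d9.h>
    #include <cstring>

    // Sketch of the manual copy the question mentions: push CPU-side BGRA
    // pixels (e.g. read out of a RenderTargetBitmap) into a lockable D3D9
    // texture. This is the slow software path, shown only to make the cost concrete.
    IDirect3DTexture9* UploadPixels(IDirect3DDevice9* device,
                                    const unsigned char* pixels,  // width*height*4 BGRA bytes
                                    UINT width, UINT height)
    {
        IDirect3DTexture9* tex = nullptr;
        device->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8,
                              D3DPOOL_MANAGED, &tex, nullptr);

        D3DLOCKED_RECT lr;
        if (SUCCEEDED(tex->LockRect(0, &lr, nullptr, 0))) {
            for (UINT y = 0; y < height; ++y) {
                // Copy row by row: the driver's pitch may include padding.
                std::memcpy(static_cast<unsigned char*>(lr.pBits) + y * lr.Pitch,
                            pixels + y * width * 4,
                            width * 4);
            }
            tex->UnlockRect(0);
        }
        return tex;
    }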

How to work with a sprite as a byte array - Assembly x86

Recently, while working on a project, I was introduced to the sprite as a byte array.
Unfortunately, I couldn't find any information about sprites that explains what they are and how they work.
I would really appreciate it if you could give me some information and examples.
A sprite is basically an image with a transparent background color or alpha channel which can be positioned on the screen and moved (usually involving redrawing the background over the old position). In the case of an animated sprite, the sprite may consist of several actual images making up the frames of the animation. The format of the image depends entirely on the hardware and/or technology being used to draw or render it. For speed, the dimensions are usually powers of two (8, 16, 32, 64, etc.), but this may not be necessary for modern hardware.
Traditionally (read: back in my day), you might have a 320x200x256 screen resolution and a 16x16x256 sprite with color 0 being transparent. Each refresh of the screen would begin by restoring the background over the sprites' old positions, taking a copy of the background under their new positions, and then drawing only the visible colors of each sprite at its new position.
With modern hardware, however, it is more efficient to pass data in a format that the driver can handle (hopefully in the graphics accelerator) rather than do everything by hand.
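To make the byte-array idea concrete, here is a sketch of the classic color-keyed blit described above, in C++ for readability rather than assembly; the 320x200 framebuffer and color 0 as the transparent key follow the example in the answer:

    #include <cstdint>

    // Classic software sprite blit: copy a 16x16 paletted sprite into a
    // 320x200 8-bit framebuffer, skipping color 0 (the transparent key).
    // No clipping: the sprite is assumed to lie fully on screen.
    void BlitSprite(uint8_t* framebuffer,      // 320*200 bytes, one palette index per pixel
                    const uint8_t* sprite,     // 16*16 bytes
                    int x, int y)              // top-left destination
    {
        const int screenW = 320, spriteW = 16, spriteH = 16;
        for (int sy = 0; sy < spriteH; ++sy) {
            for (int sx = 0; sx < spriteW; ++sx) {
                uint8_t color = sprite[sy * spriteW + sx];
                if (color != 0)                // color 0 is transparent: leave background
                    framebuffer[(y + sy) * screenW + (x + sx)] = color;
            }
        }
    }

Moving the sprite is then: restore the saved background at the old position, save the background under the new position, and blit again.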

Per-Pixel Lighting in WPF

Is it possible to implement per-pixel lighting in a WPF 3D application (C#) using its shader effects?
I have a basic 3D application running in WPF, but it only shows Gouraud shading, interpolating shaded colour values from the vertices across the interior of each polygon. I tried to implement a per-pixel lighting approach like Phong, but I realized that I do not seem to have access to interpolated normals in WPF pixel shader effects.
Is this a limitation of WPF, where one would be better off going with C++ and OpenGL/DirectX directly?
As far as I know, yes, it is a WPF limitation. Unlike OpenGL, where you can do per-pixel shading, WPF only lets you work on a MeshGeometry3D.
I think one way to approach it would be to create textures that contain your shader effect and then apply them per triangle.
This would take more computation time, but it should work.
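For reference, this is the computation a per-pixel (Phong) shader would evaluate for each fragment, given an interpolated normal; a C++ sketch of the math, not actual WPF effect code, with all vectors assumed unit-length:

    #include <cmath>
    #include <algorithm>

    struct Vec3 { float x, y, z; };

    static float Dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Phong lighting evaluated per pixel: this is the computation that needs
    // the interpolated surface normal the question says WPF doesn't expose.
    // n = interpolated normal, l = direction to light, v = direction to viewer;
    // ka/kd/ks/shininess are material constants.
    float PhongIntensity(Vec3 n, Vec3 l, Vec3 v,
                         float ka, float kd, float ks, float shininess)
    {
        float diffuse = std::max(0.0f, Dot(n, l));

        // Reflect l about n: r = 2(n.l)n - l
        float nl = Dot(n, l);
        Vec3 r = { 2*nl*n.x - l.x, 2*nl*n.y - l.y, 2*nl*n.z - l.z };
        float specular = std::pow(std::max(0.0f, Dot(r, v)), shininess);

        return ka + kd * diffuse + ks * specular;
    }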
Regards

Blitting with OpenCL

I am making an OpenCL raycaster, and I'm looking to blit pixels to the screen with the least overhead possible (every tick counts), lower even than the cost of calling glClear() every tick. I thought of creating a framebuffer to draw to, passing it to OpenCL, and then blitting with glBlitFramebuffer(), but drawing to the screen directly would be even better. So: is there a way to draw pixels with OpenCL? Hacky stuff is OK.
The best thing I can do right now is check out how glClear does it...
The usual approach is to use OpenCL to draw to a shared OpenGL/OpenCL texture object (created with the clCreateFromGLTexture() function) and then draw it to the screen with OpenGL by rendering a full-screen quad with that texture.
Edit: I've written a small example which uses OpenCL to calculate a mandelbrot fractal and then renders it directly from the GPU to the screen with OpenGL. The code is here: mandelbrot.cpp.
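A sketch of that interop path (OpenCL 1.2's clCreateFromGLTexture; the context, queue, kernel, and GL texture are assumed to already exist, with sharing enabled at context creation):

    #include <CL/cl_gl.h>
    #include <GL/gl.h>

    // Share an existing GL texture with OpenCL, fill it from a kernel, then
    // let GL draw it. Assumes `context` was created with cl_khr_gl_sharing
    // enabled and `glTex` is a GL_TEXTURE_2D RGBA texture.
    void FillTextureWithCL(cl_context context, cl_command_queue queue,
                           cl_kernel kernel, GLuint glTex,
                           size_t width, size_t height)
    {
        cl_int err = CL_SUCCESS;
        cl_mem clTex = clCreateFromGLTexture(context, CL_MEM_WRITE_ONLY,
                                             GL_TEXTURE_2D, 0 /*miplevel*/,
                                             glTex, &err);

        glFinish();  // GL must be finished with the texture before CL touches it
        clEnqueueAcquireGLObjects(queue, 1, &clTex, 0, nullptr, nullptr);

        clSetKernelArg(kernel, 0, sizeof(clTex), &clTex);  // kernel writes via write_imagef()
        size_t global[2] = { width, height };
        clEnqueueNDRangeKernel(queue, kernel, 2, nullptr, global, nullptr,
                               0, nullptr, nullptr);

        clEnqueueReleaseGLObjects(queue, 1, &clTex, 0, nullptr, nullptr);
        clFinish(queue);  // CL must be finished before GL samples the texture

        clReleaseMemObject(clTex);
        // Now render a full-screen quad textured with glTex in plain OpenGL.
    }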

Rendering into GLX back buffer with X calls?

I'm playing around with GLX and Xlib, and I'm curious about rendering with straight X calls on top of an OpenGL buffer. The GLX intro clearly says:
GLX extended X servers make a subset of their visuals available for OpenGL rendering. Drawables created with these visuals can also be rendered into using the core X renderer and/or any other X extension that is compatible with all core X visuals.
And, indeed, I'm able to render a simple quad colored with some rainbow effects and then draw on top of it with Xlib calls. However, GLX extends the X window with a back buffer, which I have to swap to the front before I can draw directly to the window. My question is: is it possible to use X to render to the back buffer after OpenGL is done with it, and then swap that buffer wholesale to the front, thus giving me flicker-free animation for both my OpenGL and X graphics?
I think the answer is no, but there are some alternatives.
You could do another layer of double-buffering with a pixmap (render X and GL to a pixmap, then draw the pixmap to your X window). It would probably wreck your framerate if you were doing an FPS game, but for what you describe it might not matter.
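A sketch of that pixmap approach in raw Xlib/GLX (display, window, gc, visual, and context are assumed to be set up already; note that GL rendering to a GLXPixmap often falls back to software, which is the framerate caveat above):

    #include <X11/Xlib.h>
    #include <GL/glx.h>

    // Double-buffer by hand: render GL and core X into an off-screen pixmap,
    // then present the finished frame with one XCopyArea.
    void DrawFrame(Display* display, Window window, GC gc,
                   XVisualInfo* visualInfo, GLXContext glContext,
                   unsigned width, unsigned height)
    {
        Pixmap pixmap = XCreatePixmap(display, window, width, height,
                                      visualInfo->depth);
        GLXPixmap glxPixmap = glXCreateGLXPixmap(display, visualInfo, pixmap);

        glXMakeCurrent(display, glxPixmap, glContext);
        // ... OpenGL drawing goes here ...
        glFinish();                                   // ensure GL is done before X draws

        XDrawLine(display, pixmap, gc, 0, 0, width, height);  // core X on top of GL output
        XCopyArea(display, pixmap, window, gc, 0, 0, width, height, 0, 0);
        XFlush(display);

        glXDestroyGLXPixmap(display, glxPixmap);
        XFreePixmap(display, pixmap);
    }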
You could also use Cairo to draw to a client-side memory buffer, with alpha channel for the background to show through. Then upload the result as a texture to GL and paint it over your background. The Clutter toolkit does this for some of its drawing.
