Blitting with OpenCL - c

I am making an OpenCL raycaster, and I'm looking to blit pixels to the screen with as little overhead as possible (every tick counts), ideally no more than a glClear() call each tick. I thought of creating a framebuffer to draw to, passing it to OpenCL, and then blitting with glBlitFramebuffer(), but drawing to the screen directly would be even better. So: is there a way to draw pixels to the screen with OpenCL? Hacky solutions are OK.
The best thing I can do now is check out how glClear does it ...

The usual approach is to use OpenCL to draw to a shared OpenGL/OpenCL texture object (created with the clCreateFromGLTexture() function) and then draw it to the screen with OpenGL by rendering a full-screen quad with that texture.
Edit: I've written a small example which uses OpenCL to calculate a mandelbrot fractal and then renders it directly from the GPU to the screen with OpenGL. The code is here: mandelbrot.cpp.
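For reference, here is a minimal sketch of that interop flow, assuming an OpenCL context created with GL sharing enabled, an existing GL texture, and a kernel whose first argument is a write_only image2d_t; the helper names (createSharedImage, renderFrame) are just for illustration:

    // Sketch only: assumes a CL context created with GL sharing
    // (CL_GL_CONTEXT_KHR etc.) and an existing GL texture `tex`.
    #include <CL/cl.h>
    #include <CL/cl_gl.h>
    #include <GL/gl.h>

    cl_mem createSharedImage(cl_context ctx, GLuint tex)
    {
        cl_int err = CL_SUCCESS;
        // Wrap the GL texture as a CL image (OpenCL 1.2+).
        cl_mem img = clCreateFromGLTexture(ctx, CL_MEM_WRITE_ONLY,
                                           GL_TEXTURE_2D, 0, tex, &err);
        return err == CL_SUCCESS ? img : nullptr;
    }

    void renderFrame(cl_command_queue q, cl_kernel kernel, cl_mem img,
                     size_t width, size_t height)
    {
        glFinish();                               // GL must be done with the texture
        clEnqueueAcquireGLObjects(q, 1, &img, 0, nullptr, nullptr);

        clSetKernelArg(kernel, 0, sizeof(cl_mem), &img);
        size_t global[2] = { width, height };
        clEnqueueNDRangeKernel(q, kernel, 2, nullptr, global, nullptr,
                               0, nullptr, nullptr);

        clEnqueueReleaseGLObjects(q, 1, &img, 0, nullptr, nullptr);
        clFinish(q);                              // CL must be done before GL samples it
        // ...now draw a full-screen quad textured with `tex` in OpenGL.
    }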

Related

Better to draw two times the same sprite or one double-sized sprite?

I'm working on a parallax background and I would like to know which method is best for drawing a scrolling background:
Should I use two sprites placed side by side and update both of their positions, or a single sprite containing the same picture twice, stitched together?
(I'm looking for the best performance.)
Thanks
Usually the best way to answer this is to test it yourself; however, there is this in the SFML FAQ:
https://www.sfml-dev.org/faq.php#graphics-xsprite
One of the fastest ways to draw in SFML is to switch your implementation to use its VertexArray, since that draws many objects in a single draw call.
SFML has an example of how to use a VertexArray here:
https://www.sfml-dev.org/tutorials/2.4/graphics-vertex-array.php
Taking a quick look at how a sprite is drawn: a Sprite in SFML has 4 vertices, and each vertex has to be transformed by the vertex shader in OpenGL. So if you draw the same sprite twice you are transforming 8 vertices, whereas if you double its size you only transform 4. The cost on the fragment shader should be roughly the same either way.
One last note: get it working now, optimize later.
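If it helps, here is a minimal sketch of the VertexArray approach for this case, assuming an sf::RenderWindow and an sf::Texture holding the background picture; drawScrollingBackground is just an illustrative name:

    // Sketch only: one sf::VertexArray holding both background copies,
    // drawn with a single draw call.
    #include <SFML/Graphics.hpp>

    void drawScrollingBackground(sf::RenderWindow& window,
                                 const sf::Texture& bgTexture,
                                 float scrollX)
    {
        const sf::Vector2f size(bgTexture.getSize());
        sf::VertexArray quads(sf::Quads, 8);      // two quads, 4 vertices each

        for (int i = 0; i < 2; ++i) {
            float x = scrollX + i * size.x;       // second copy sits right after the first
            quads[i * 4 + 0] = sf::Vertex({x,          0.f},    {0.f,    0.f});
            quads[i * 4 + 1] = sf::Vertex({x + size.x, 0.f},    {size.x, 0.f});
            quads[i * 4 + 2] = sf::Vertex({x + size.x, size.y}, {size.x, size.y});
            quads[i * 4 + 3] = sf::Vertex({x,          size.y}, {0.f,    size.y});
        }

        window.draw(quads, &bgTexture);           // one draw call for both copies
    }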

Do DirectX Pixel Shaders Operate on Every Pixel of the Frame Like WPF Pixel Shaders?

From my own trial-and-error experience, it seems that DirectX pixel shaders only run for pixels/fragments that are within the bounds of some geometric primitive rendered by DirectX, and are not run for pixels of the frame that are simply the clear-color.
MSDN says:
Pixel shaders work in concert with vertex shaders; the output of a vertex shader provides the inputs for a pixel shader.
This stands in contrast to WPF pixel shaders, which are run for every pixel of the frame, because WPF doesn't render 3D primitives and therefore doesn't know or care what it means to be a geometric primitive pixel or clear-color pixel.
So for the following image, a DirectX pixel shader would only be run for the area in white, because it corresponds to a geometric primitive output by the vertex shader, but not for the black area, because that's the clear-color. A WPF pixel shader, on the other hand, would be run for every pixel of the frame, both white and black.
Is this understanding correct?
Your understanding is mostly correct: pixel shader invocations are triggered by drawing primitives (e.g. triangles). In fact, a pixel in the window may end up receiving more than one pixel shader invocation, for example if a second triangle is drawn on top of the first. This is referred to as overdraw and is generally something to avoid; the most common way to avoid it is z-culling.
If you want to trigger a pixel shader for every pixel in the window, simply draw two triangles that make up a "full screen quad", i.e. coordinates (-1,-1) to (1,1). Behind the scenes, this is what WPF essentially does.
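For illustration, these are the clip-space coordinates such a quad would use (the exact vertex structure and layout depend on your pipeline setup, so treat this as a sketch):

    // Two triangles covering clip space from (-1,-1) to (1,1); with these
    // positions the pixel shader runs for every pixel of the viewport
    // (assuming nothing is rejected by the depth/stencil test).
    struct QuadVertex { float x, y, z; };

    static const QuadVertex fullScreenQuad[6] = {
        { -1.0f, -1.0f, 0.0f },  // first triangle
        { -1.0f,  1.0f, 0.0f },
        {  1.0f,  1.0f, 0.0f },
        { -1.0f, -1.0f, 0.0f },  // second triangle
        {  1.0f,  1.0f, 0.0f },
        {  1.0f, -1.0f, 0.0f },
    };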

SDL2 [optimisation]: is SDL_Surface->texture->copy faster than drawing on renderer?

I'm wondering which way is more efficient:
modifying the RGB pixel data of a surface sized like the window, creating a texture from that surface, then copying it to the renderer.
Or (what I use)
SDL_SetRenderDrawColor + SDL_RenderDrawPoint directly on the double-buffered renderer, driven by a buffer array.
I would prefer the first solution, but I would like to be sure before testing.
Thanks if you know SDL :)
When it comes to per-pixel work, manipulating an SDL_Surface's pixels is generally somewhat faster than using SDL_RenderDrawPoint(). Of course, for just a pixel or two it won't make a big difference, but filling an entire window that way could be slow. Turning the surface into a texture takes a little time, but not as much (on my computer, it adds about 2 ms per frame).
However, as far as I know, your best option is to access the pixels of the window's surface (SDL_GetWindowSurface()) and then call SDL_UpdateWindowSurface().
I believe the SDL_RenderDrawPoint() slowdown is due to the extra time the CPU has to take to pass the pixels to the GPU (maybe a software SDL_Renderer would be faster?).
Hope this helps.
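For what it's worth, here is a minimal sketch of that window-surface approach, assuming the window surface uses a 32-bit pixel format (it usually does); presentGradient is just an illustrative name:

    // Sketch only: write pixels straight into the window surface and push it
    // to the screen. Assumes an existing SDL_Window* and a 32-bit surface.
    #include <SDL.h>

    void presentGradient(SDL_Window* window)
    {
        SDL_Surface* surface = SDL_GetWindowSurface(window);
        if (!surface) return;

        if (SDL_MUSTLOCK(surface)) SDL_LockSurface(surface);

        Uint32* pixels = static_cast<Uint32*>(surface->pixels);
        const int pitch = surface->pitch / 4;    // pitch is in bytes, 4 bytes per pixel
        for (int y = 0; y < surface->h; ++y)
            for (int x = 0; x < surface->w; ++x)
                pixels[y * pitch + x] =
                    SDL_MapRGB(surface->format, x & 0xFF, y & 0xFF, 0x80);

        if (SDL_MUSTLOCK(surface)) SDL_UnlockSurface(surface);

        SDL_UpdateWindowSurface(window);         // blit to the screen
    }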

Rendering into GLX back buffer with X calls?

I'm playing around with GLX and Xlib, and I'm curious about rendering with straight X calls on top of an OpenGL buffer. The GLX intro clearly says:
GLX extended X servers make a subset of their visuals available for OpenGL rendering. Drawables created with these visuals can also be rendered into using the core X renderer and/or any other X extension that is compatible with all core X visuals.
And, indeed, I'm able to render a simple quad colored with some rainbow effects and then draw on top of it with Xlib calls. However, GLX extends the X window with a back buffer, which I have to swap to the front before I can render to the window directly. My question is: is it possible to use X to render to the back buffer after OpenGL is done with it, and then swap that buffer wholesale to the front, thus giving me flicker-free animation of both my OpenGL and X graphics?
I think the answer is no, but there are maybe some alternatives.
You could do another layer of double-buffering with a pixmap (render X and GL to a pixmap, then draw the pixmap to your X window). It would probably wreck your framerate if you were doing an FPS game, but for what you describe it might not matter.
You could also use Cairo to draw to a client-side memory buffer, with alpha channel for the background to show through. Then upload the result as a texture to GL and paint it over your background. The Clutter toolkit does this for some of its drawing.
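Here is a rough sketch of the Cairo route, assuming a current OpenGL context; cairoOverlayTexture is a hypothetical helper and the drawing itself is just a placeholder:

    // Sketch only: draw with Cairo into a client-side ARGB buffer and upload
    // it as a GL texture to composite over the GL background.
    #include <cairo/cairo.h>
    #include <GL/gl.h>

    GLuint cairoOverlayTexture(int width, int height)
    {
        cairo_surface_t* surf =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);
        cairo_t* cr = cairo_create(surf);

        // ...draw the overlay here; the transparent background lets GL show through.
        cairo_set_source_rgba(cr, 1.0, 1.0, 1.0, 1.0);
        cairo_rectangle(cr, 10, 10, 100, 40);
        cairo_fill(cr);

        cairo_surface_flush(surf);

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Cairo ARGB32 is premultiplied BGRA on little-endian machines; the
        // stride for this format equals width * 4.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, cairo_image_surface_get_data(surf));

        cairo_destroy(cr);
        cairo_surface_destroy(surf);
        return tex;   // paint this over the GL scene with blending enabled
    }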

Can pixel shaders be used when rendering to an offscreen surface?

I'm considering integrating some D3D code I have with WPF via the new D3DImage, as described here:
My question is this: Do pixel shaders work on offscreen surfaces?
Rendering to an offscreen surface is generally less constrained than rendering directly to a back buffer. The only constraint that comes with using an offscreen surface with D3DImage is that it must be in a 32-bit RGB/ARGB format (depending on your platform). Other than that, all that the hardware has to offer is at your disposal.
In fact, plenty of shader effects take advantage of offscreen surfaces for multipass rendering or full-screen post-processing.
I don't know if there's anything special about it with WPF, but in general yes, pixel shaders work on offscreen surfaces.
For some effects, rendering to a different surface is required (glass refraction in front of a shader-rendered scene, for example). Pixel shaders cannot access the current screen contents, so the view has to be rendered to a buffer first and then used as a texture in the refraction shader pass, so that the shader can take the background colour from a pixel other than the one being calculated.
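To make that concrete, here is a rough Direct3D 9 sketch (D3D9 being what D3DImage interoperates with): create a 32-bit ARGB offscreen render target and render to it with a pixel shader bound, just as you would to the back buffer. The helper names are only for illustration, and error handling is omitted:

    // Sketch only: assumes an existing IDirect3DDevice9* and a compiled pixel shader.
    #include <d3d9.h>

    IDirect3DSurface9* createOffscreenTarget(IDirect3DDevice9* dev, UINT w, UINT h)
    {
        IDirect3DSurface9* target = nullptr;
        dev->CreateRenderTarget(w, h, D3DFMT_A8R8G8B8,    // 32-bit ARGB, as D3DImage expects
                                D3DMULTISAMPLE_NONE, 0,
                                FALSE,                    // not lockable
                                &target, nullptr);
        return target;
    }

    void renderOffscreen(IDirect3DDevice9* dev, IDirect3DSurface9* target,
                         IDirect3DPixelShader9* shader)
    {
        dev->SetRenderTarget(0, target);
        dev->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_ARGB(255, 0, 0, 0), 1.0f, 0);

        dev->BeginScene();
        dev->SetPixelShader(shader);   // pixel shaders apply here just as on the back buffer
        // ...draw geometry...
        dev->EndScene();
    }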
