Can a GLSL fragment shader run without a framebuffer and similar inconveniences?

Repeating the title: can a GLSL fragment shader run without a framebuffer and without any rasterization stage?
This excellent answer gives an insight into where to start with SSBOs. It links to the OpenGL ARB extension that contains some boilerplate code. That code works for me after some changes to make it run as an OpenGL compute program. But I really don't get how to do the same with a fragment program, and without any buffers other than the SSBO.
The linked code clearly contains fragment shader source without any pixel operations, only SSBO ones.
in vec4 color;

// Supporting declarations (assumed; the linked extension's example defines similar ones):
layout(binding = 0, offset = 0) uniform atomic_uint fragmentCounter;
struct FragmentData { ivec2 position; vec4 color; };
layout(std430, binding = 0) buffer FragmentBuffer { FragmentData fragments[]; };
uniform uint maxFragmentCount;

void main()
{
    // Reserve a slot atomically and record this fragment's position and color in the SSBO.
    uint fragmentNumber = atomicCounterIncrement(fragmentCounter);
    if (fragmentNumber < maxFragmentCount) {
        fragments[fragmentNumber].position = ivec2(gl_FragCoord.xy);
        fragments[fragmentNumber].color = color;
    }
}
And later in the C program file:
// Generate, bind, and specify the data store for the atomic counter.
glGenBuffers(1, &counterBuffer);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, counterBuffer);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL,
GL_DYNAMIC_DRAW);
// Reset the atomic counter to zero, then draw stuff. This will record
// values into the shader storage buffer as fragments are generated.
GLuint zero = 0;
glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &zero);
glUseProgram(program);
glDrawElements(GL_TRIANGLES, ...);
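For completeness, a hedged sketch of the SSBO setup that the snippet above relies on (maxFragmentCount here is a hypothetical host-side variable mirroring the shader uniform, the binding index must match the shader's fragments buffer block, assumed to be 0, and 32 bytes is the std430 stride of one ivec2 + vec4 record):
// Allocate backing storage for the "fragments" shader storage buffer.
GLuint fragmentBuffer;
glGenBuffers(1, &fragmentBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, fragmentBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, maxFragmentCount * 32, NULL, GL_DYNAMIC_COPY);
// Bind it to index 0 so the shader's buffer block (binding = 0) sees it.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, fragmentBuffer);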
In my setup I do not have any output in the form of OpenGL pixels, and I wish it to stay that way. Is it possible, or am I missing something?
P.S. The above setup gives me a GL_INVALID_FRAMEBUFFER_OPERATION error after glDrawElements, immediately followed by glFinish.
Update 21.03.2021
There are framebuffers with no attachments. The only state you need to set is their width and height. That is roughly the direction to head in if one wishes to minimize setup.
The downside of the aforementioned is that it still requires some geometry to be fed to the rasterization stage in order to start the shader stages. But, as a plus, one gets geometry rasterization, whether one wishes it or not.
If I find the time, I will leave some code here as a reminder for myself.

Can a GLSL fragment shader run without a framebuffer and similar inconveniences?
No. Fragment shaders need the step that invokes them; the stage that produces fragments is called rasterization.
From the Khronos wiki:
A Fragment Shader is the Shader stage that will process a Fragment generated by the Rasterization into a set of colors and a single depth value.
The fragment shader is the OpenGL pipeline stage after a primitive is rasterized.
And rasterization needs a render step to produce fragments. The rendering has to go somewhere; in OpenGL, it goes to a framebuffer. So without a framebuffer you cannot render, and hence OpenGL cannot produce fragments.
Framebuffer setup can be minimized by using framebuffers with no attachments, but one still needs to supply geometry and render it to invoke the fragment shader, as in the sketch below.
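A minimal sketch, assuming OpenGL 4.3+ (ARB_framebuffer_no_attachments), hypothetical names (emptyVao, program) and an arbitrary 512x512 size:
// A framebuffer with no attachments: only a default width/height is declared.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH, 512);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, 512);
glViewport(0, 0, 512, 512);
// Geometry is still required to trigger rasterization; here a full-screen
// triangle generated by the vertex shader from gl_VertexID is assumed.
glBindVertexArray(emptyVao);
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 3);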
Fragment shaders can read from and write to arbitrary SSBOs, but the usage is not like compute shaders: fragment shaders are invoked once per produced fragment, while compute shaders can be dispatched with, so to speak, an arbitrary number of invocations.
Many thanks to all commenters who pointed me to the (by now obvious) reason why fragment shaders need a render operation.

Related

How to read back data from fragment shader

I am trying to find out how I can read back data from the fragment shader to the application, specifically the fragment shader's built-in variables.
Here is an example of what I am trying to do: in the fragment shader below, I am trying to read the built-in variable gl_FragCoord.y back to the application.
#version 330
out vec4 outputColor;
void main()
{
    gl_FragCoord.y??????
    outputColor = vec4(0.2f, 0.2f, 0.2f, 1.0f);
}
How would I pass this value back to the application? Is it done through a uniform variable, a buffer object, or some other way? Or can it be done at all?
I am specifically trying to get the gl_FragCoord.y value for each time the fragment shader runs.
Okay, I have found a few ways to do this in case anybody needs to know. Since nobody provided the answer, I searched for it myself and found it (besides the one that was provided by the person who commented).
The first way was mentioned by one of the commenters on the question: create a framebuffer/renderbuffer object as an image, write data to it directly from the shader, and read the data back as an image, or more generally read back from the currently bound framebuffer object using glReadPixels().
The second way is to create a buffer object/array that the fragment shader writes the above variable into, map the buffer data using glMapBuffer*() to get a pointer to the buffer, and read the value back in the application.
Another way is to also create a buffer object and then use glGetBufferSubData() to read back everything the shader has written to it (obviously, you declare the buffer object as read/write as well).
And finally, you can use image units/texture data to store values from the shader into such buffer objects and read back from them; if anyone needs exact details, leave a comment and I will provide them.
So it IS possible, and there is more than one way to do it. However, it won't be read back each time the fragment shader runs; rather, the fragment shader stores the information into one of the aforementioned buffer objects, and the application reads it back after execution. The result is still the same, in that gl_FragCoord.y for each fragment is stored and accessed as a chunk of memory.
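A minimal sketch of the buffer-object route (hypothetical sizes/names; it assumes GL 4.3+ and a fragment shader that declares something like layout(std430, binding = 0) buffer Coords { float yCoords[]; }; and writes gl_FragCoord.y into it):
// Create the SSBO the fragment shader will write its gl_FragCoord.y values into.
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, maxFragments * sizeof(GLfloat),
             NULL, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

// ... draw ...

// Make the shader writes visible to the read-back, then copy them out.
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
GLfloat *data = malloc(maxFragments * sizeof(GLfloat));
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0,
                   maxFragments * sizeof(GLfloat), data);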

Opengl - appending to a texture

I want to create a texture system where I add to a texture, not overwrite it. My texture has integer values (32 bit). What I want: Ex. I have an integer pixel with bits 100, I want to add 10 to it so it becomes 110.
My current implementation uses two textures: one holding the previous contents, and one to write into. The previous texture's values are read and then rewritten with the new data. Is there a better method, because using two textures feels very inefficient?
Depending on what you mean by "appending", you could use additive blending:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
then the output of your fragment shader will be added to the current contents of the color buffer. If you use an FBO to render into the texture, you can directly add to this texture.
You should just be careful to not create any feedback loops, so your fragment shader's result should not depend on any sample of the very same texture you render to.
UPDATE
As noted in the comment, the texture in question has the GL_RED_INTEGER format. Unfortunately, blending is only applied to floating-point color buffers (including normalized integers), and never to unnormalized integers.
However, there is another potential approach. The rules for the "feedback loops" I mentioned before have been relaxed in recent OpenGL. The extension GL_ARB_texture_barrier explicitly allows a fragment shader to read pixels from the same texture it is writing to:
Specifically, the values of rendered fragments are undefined if any
shader stage fetches texels and the same texels are written via fragment
shader outputs, even if the reads and writes are not in the same Draw
call, unless any of the following exceptions apply:
The reads and writes are from/to disjoint sets of texels (after
accounting for texture filtering rules).
There is only a single read and write of each texel, and the read is in
the fragment shader invocation that writes the same texel (e.g. using
"texelFetch2D(sampler, ivec2(gl_FragCoord.xy), 0);").
[...]
This extension has been promoted to a core feature of OpenGL 4.5. This is quite new and not available on a lot of platforms, so it is unclear if you can use it...
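A hedged sketch of how such a read-modify-write pass might look under that exception, assuming GL 4.5 (or GL_ARB_texture_barrier) and hypothetical names accumFbo, accumTex and accumProgram, where the fragment shader fetches its own texel with texelFetch(accumTex, ivec2(gl_FragCoord.xy), 0) and writes back the incremented value:
// Render into accumTex while also sampling it (one read + one write per texel).
glBindFramebuffer(GL_FRAMEBUFFER, accumFbo);   // accumTex is its color attachment
glBindTexture(GL_TEXTURE_2D, accumTex);        // the same texture, bound for sampling
glUseProgram(accumProgram);
glDrawArrays(GL_TRIANGLES, 0, 6);              // e.g. a full-screen quad

// If a later pass reads texels written above, insert a barrier in between.
glTextureBarrier();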

What's the difference between TMU and openGL's GL_TEXTUREn?

I can't quite understand what's the difference.
I know a TMU is a texture mapping unit on the GPU, and in OpenGL we can have many texture units. I used to think they were the same, i.e. that with n TMUs I could use n GL_TEXTURE units, but I found that this may not be true.
Recently, I was working on an Android game targeting a platform with the Mali 400MP GPU. According to the documentation, it has only one TMU, so I thought I could use only one texture at a time. But surprisingly, I can use at least 4 textures without trouble. Why is this?
Is the hardware or driver level doing something like swapping different textures in and out automatically for me? If so, is it supposed to cause a lot of cache misses?
I'm not the ultimate hardware architecture expert, particularly not for Mali. But I'll give it a shot anyway, based on my understanding.
The TMU is a hardware unit for texture sampling. It does not get assigned to an OpenGL texture unit on a permanent basis. Any time a shader executes a texture sampling operation, I expect that specific operation to be assigned to one of the TMUs. The TMU then does the requested sampling, delivers the result back to the shader, and becomes available for the next sampling operation.
So there is no relationship between the number of TMUs and the number of supported OpenGL texture units. The number of OpenGL texture units that can be supported is determined by the state tracking part of the hardware.
The number of TMUs has an effect on performance. The more TMUs are available, the more texture sampling operations can be executed within a given time. So if you use a lot of texture sampling in your shaders, your code will profit from having more TMUs. It doesn't matter if you sample many times from the same texture, or from many different textures.
Texture Mapping Units (TMUs) are functional units on the hardware, once upon a time they were directly related to the number of pixel pipelines. As hardware is much more abstract/general purpose now, it is not a good measure of how many textures can be applied in a single pass anymore. It may give an indication of overall multi-texture performance, but by itself does not impose any limits.
OpenGL's GL_TEXTURE0+n actually represents Texture Image Units (TIUs), which are locations where you bind a texture. The number of textures you can apply simultaneously (in a single execution of a shader) varies per-shader stage. In Desktop GL, which has 5 stages as of GL 4.4, implementations must support 16 unique textures per-stage. This is why the number of Texture Image Units is 80 (16x5). GL 3.3 only has 3 stages, and its minimum TIU count is thus only 48. This gives you enough binding locations to provide a set of 16 unique textures for every stage in your GLSL program.
GL ES, particularly 2.0, is a completely different story. It mandates support for at least 8 simultaneous textures in the fragment shader stage and 0 (optional) in the vertex shader.
const mediump int gl_MaxVertexTextureImageUnits = 0; // Vertex Shader Limit
const mediump int gl_MaxTextureImageUnits = 8; // Fragment Shader Limit
const mediump int gl_MaxCombinedTextureImageUnits = 8; // Total Limit for Entire Program
There is also a limit on the number of textures you can apply across all of the shaders in a single execution of your program (gl_MaxCombinedTextureImageUnits), and this limit is usually just the sum total of the limits for each individual stage.
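A small sketch of how these limits can be queried at runtime from the application (the same enums exist in GL ES 2.0):
// Query the texture image unit limits of the current context.
GLint maxVertexTIUs, maxFragmentTIUs, maxCombinedTIUs;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &maxVertexTIUs);     // vertex stage
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxFragmentTIUs);          // fragment stage
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombinedTIUs); // whole program
printf("vertex: %d, fragment: %d, combined: %d\n",
       maxVertexTIUs, maxFragmentTIUs, maxCombinedTIUs);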

WebGL fragment shader fails to link when adding additional function call

I have a WebGL fragment shader that I am using to do raytracing. I pass in sphere and triangle data using textures. So far I have 2 spheres and 3 triangles working. When I add the call to check intersection with a 4th triangle, the shader does not link, and calling getProgramInfoLog() just returns null.
Could the fragment shader be getting too big? Or do I need to look for another cause? How do I determine where the problem might be?
Here is a code snippet, commenting out any one of the checkTriangleIntersection calls causes the shader to link successfully.
checkTriangleIntersection(0.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
checkTriangleIntersection(1.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
checkTriangleIntersection(2.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
//checkTriangleIntersection(3.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
Since all the calls are the same, except for the index, I thought that there would be nothing wrong with the code itself, but is there some kind of limit that I could be running up against?
I'm getting more than 30 FPS before I add the extra function call, and even when I do add the extra call, both the vertex and fragment shaders compile OK.
I got rid of the issue by putting the calls to checkTriangleIntersection() into a for loop.
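Something along these lines (a sketch; NUM_TRIANGLES is a hypothetical constant matching the number of triangles stored in the texture data):
// Loop over the triangles instead of unrolling one call per triangle.
for (int i = 0; i < NUM_TRIANGLES; ++i) {
    checkTriangleIntersection(float(i), rayOrigin, rayDir, piOfNearest,
                              normalOfNearest, colourOfNearest, distOfNearest);
}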

Drawing per-pixel into a backbuffer or texture to display to screen, using opengl - no glDrawPixels()

Basically, I have an array of data (fluid simulation data) which is generated per-frame in real-time from user input (starts in system ram). I want to write the density of the fluid to a texture as an alpha value - I interpolate the array values to result in an array the size of the screen (the grid is relatively small) and map it to a 0 - 255 range. What is the most efficient way (ogl function) to write these values into a texture for use?
Things that have been suggested elsewhere, which I don't think I want to use (please, let me know if I've got it wrong):
glDrawPixels() - I'm under the impression that this will cause an interrupt each time I call it, which would make it slow, particularly at high resolutions.
Use a shader - I don't think that a shader can accept and process the volume of data in the array each frame (It was mentioned elsewhere that the cap on the amount of data they may accept is too low)
If I understand your problem correctly, both solutions are over-complicating the issue. Am I correct in thinking you've already generated an array of size x*y, where x and y are your screen resolution, filled with unsigned bytes?
If so, and you want an OpenGL texture that uses this data as its alpha channel, why not just create a texture, bind it to GL_TEXTURE_2D, and call glTexImage2D with your data, using GL_ALPHA as the format and internal format, GL_UNSIGNED_BYTE as the type, and (x, y) as the size?
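For instance, a minimal sketch of that upload (assuming pixels points to the x*y byte array; it uses the legacy GL_ALPHA format suggested above):
// Upload the x*y byte array as the alpha channel of a 2D texture.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows of single bytes are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, x, y, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// On later frames the same storage can be updated in place with glTexSubImage2D.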
What makes you think a shader would perform badly? The whole idea of shaders is to process huge amounts of data very, very fast. Please use Google with the search phrase "General Purpose GPU computing" or "GPGPU".
Shaders can only gather data from buffers, not scatter. But what they can do is change values in the buffers. This allows a (fragment) shader to write the locations of GL_POINTs, which are then in turn placed on the target pixels of the texture. Shader Model 3 and later GPUs can also access texture samplers from the geometry and vertex shader stages, so the fragment shader part gets really simple then.
If you just have a linear stream of positions and values, just send those to OpenGL through a vertex array, drawing GL_POINTs, with your target texture being a color attachment of a framebuffer object.
What is the most efficient way (ogl function) to write these values into a texture for use?
A good way would be to avoid any unnecessary extra copies. You could use Pixel Buffer Objects, which you map into your address space and generate your data directly into.
Since you want to update this data per frame, you also want to look for efficient buffer object streaming, so that you don't force implicit synchronizations between the CPU and GPU. An easy way to do that in your scenario would be using a ring buffer of 3 PBOs, which you advance every frame.
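A rough sketch of such a ring (width, height and frame are hypothetical names; the texture format follows the GL_ALPHA/GL_UNSIGNED_BYTE suggestion above, and the target texture is assumed to be bound):
// One-time setup: a ring of 3 pixel unpack buffers for streaming per-frame data.
#define NUM_PBOS 3
GLuint pbos[NUM_PBOS];
glGenBuffers(NUM_PBOS, pbos);
for (int i = 0; i < NUM_PBOS; ++i) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[i]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height, NULL, GL_STREAM_DRAW);
}

// Per frame: fill the newest PBO, upload to the texture from the oldest one.
int writeIdx = frame % NUM_PBOS;
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[writeIdx]);
void *ptr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, width * height,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
// ... write this frame's density values into ptr ...
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

int readIdx = (frame + 1) % NUM_PBOS;
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[readIdx]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_ALPHA, GL_UNSIGNED_BYTE, (const void *)0); // offset into the PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);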
Things that have been suggested elsewhere, which I don't think I want to use (please, let me know if I've got it wrong):
glDrawPixels() - I'm under the impression that this will cause an interrupt each time I call it, which would make it slow, particularly at high resolutions.
Well, what the driver does is totally implementation-specific. I don't think that "causes an interrupt each time" is a useful mental image here. You seem to completely underestimate the work the GL implementation does behind your back. A GL call will not simply correspond to a single command sent to the GPU.
But not using glDrawPixels is still a good choice. It is not very efficient, and it has been deprecated and removed from modern GL.
Use a shader - I don't think that a shader can accept and process the volume of data in the array each frame (It was mentioned elsewhere that the cap on the amount of data they may accept is too low)
You've got this totally wrong. There is no way not to use a shader. If you're not writing one yourself (e.g. by using the old "fixed-function pipeline" of GL), the GPU driver will provide the shader for you. The hardware implementation of these earlier fixed-function stages has been completely superseded by programmable units, so if you can't do it with shaders, you can't do it with the GPU. And I would strongly recommend writing your own shader (it is the only option in modern GL, anyway).

Resources