WebGL fragment shader fails to link when adding an additional function call

I have a WebGL fragment shader that I am using to do raytracing. I pass in sphere and triangle data using textures. So far I've got 2 spheres and 3 triangles working. When I add the call to check intersection with a 4th triangle, the shader does not link, and calling getProgramInfoLog() just returns null.
Could the fragment shader be getting too big? Or do I need to look for another cause? How do I determine where the problem might be?
Here is a code snippet; commenting out any one of the checkTriangleIntersection calls causes the shader to link successfully.
checkTriangleIntersection(0.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
checkTriangleIntersection(1.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
checkTriangleIntersection(2.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
//checkTriangleIntersection(3.0, rayOrigin, rayDir, piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
Since all the calls are the same, except for the index, I thought that there would be nothing wrong with the code itself, but is there some kind of limit that I could be running up against?
I'm getting more than 30 FPS before I add the extra function call, and even when I do add the extra call, both the vertex and fragment shaders compile OK.

I got rid of the issue by putting the calls to checkTriangleIntersection() into a for loop.
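For reference, the loop version looks roughly like this (a sketch; NUM_TRIANGLES is a name I've introduced, and in WebGL 1's GLSL ES 1.00 the loop bound must be a compile-time constant):
// One loop instead of N separate call sites; the index becomes the loop variable.
const int NUM_TRIANGLES = 4;
for (int i = 0; i < NUM_TRIANGLES; i++) {
    checkTriangleIntersection(float(i), rayOrigin, rayDir,
        piOfNearest, normalOfNearest, colourOfNearest, distOfNearest);
}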

Related

Can a GLSL fragment shader run without a framebuffer and similar inconveniences?

Repeating the above: Can a GLSL fragment shader run without a framebuffer and any rasterization stage?
This perfect answer gives an insight into where to start with SSBOs. The answer links to the OpenGL ARB extension spec, which contains boilerplate code. The code works for me with some changes to run it as an OpenGL compute program. But I really don't get how to do it with a fragment program, and without any buffers other than the SSBO.
The code clearly shows fragment shader source without any pixel operations, only SSBO ones.
// Declarations reconstructed so the snippet is self-contained; the original
// post omitted them (the bindings and struct layout are assumptions).
#version 430
in vec4 color;
layout(binding = 0) uniform atomic_uint fragmentCounter;
struct FragmentData { ivec2 position; vec4 color; };
layout(std430, binding = 1) buffer FragmentBuffer { FragmentData fragments[]; };
uniform uint maxFragmentCount;

void main()
{
    // Reserve a slot with the atomic counter, then record this fragment.
    uint fragmentNumber = atomicCounterIncrement(fragmentCounter);
    if (fragmentNumber < maxFragmentCount) {
        fragments[fragmentNumber].position = ivec2(gl_FragCoord.xy);
        fragments[fragmentNumber].color = color;
    }
}
And later in the C program file:
// Generate, bind, and specify the data store for the atomic counter.
glGenBuffers(1, &counterBuffer);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, counterBuffer);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);
// Reset the atomic counter to zero, then draw stuff. This will record
// values into the shader storage buffer as fragments are generated.
GLuint zero = 0;
glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &zero);
glUseProgram(program);
glDrawElements(GL_TRIANGLES, ...);
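Note this snippet only creates the atomic counter buffer; the SSBO backing the fragments array has to be allocated as well. A minimal sketch (the binding point, C struct layout, and maxFragmentCount mirror the assumptions made in the shader above):
/* C-side mirror of the GLSL struct; std430 pads the ivec2 to 16 bytes
 * before the vec4, hence the explicit pad field. */
typedef struct { GLint position[2]; GLint pad[2]; GLfloat color[4]; } FragmentData;

GLuint fragmentBuffer;
glGenBuffers(1, &fragmentBuffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, fragmentBuffer); /* binding = 1 */
glBufferData(GL_SHADER_STORAGE_BUFFER,
             maxFragmentCount * sizeof(FragmentData), NULL, GL_DYNAMIC_DRAW);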
In my setup I do not produce any output as OpenGL pixels, and I wish it to stay that way. Is that possible, or am I missing something?
P.S. The above setup gives me a GL_INVALID_FRAMEBUFFER_OPERATION error after glDrawElements, immediately followed by glFinish.
Update 21.03.2021
There are framebuffers with no attachments. The only state you have to set is the width and the height. That is roughly the direction anyone is heading if they wish to minimize setup.
The minus of the aforementioned is that it still requires some geometry to be fed to the rasterization stage, in order to start the shader stages. But, as a plus, one gets geometry rasterization, wish it or not.
If I have time, I will leave some code here as a reminder for myself.
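In that spirit, a minimal sketch of such a setup (GL 4.3 / ARB_framebuffer_no_attachments; the size, and the assumption that a linked program and some geometry are already bound, are mine):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* No attachments: just declare the rasterization area. */
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH, 1024);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, 1024);
glViewport(0, 0, 1024, 1024);
/* Some geometry still has to be drawn to invoke the fragment shader;
 * its SSBO writes are the only output. */
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 3);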
Can a GLSL fragment shader run without a framebuffer and similar inconveniences?
No. Fragment shaders need the stage that invokes them; the stage that produces fragments is called rasterization.
From the khronos wiki:
A Fragment Shader is the Shader stage that will process a Fragment generated by the Rasterization into a set of colors and a single depth value.
The fragment shader is the OpenGL pipeline stage after a primitive is rasterized.
And rasterization needs a render step to produce fragments. The rendering has to go somewhere; in OpenGL, it goes to a framebuffer. So without a framebuffer you cannot render, and hence OpenGL cannot produce fragments.
Framebuffer setup can be minimized by using framebuffers with no attachments, but one still needs to supply geometry and render it in order to invoke the fragment shaders.
Fragment shaders can read from and write to arbitrary SSBOs, but the usage is not like compute shaders: fragment shaders are invoked once per produced fragment, whereas compute shaders can be dispatched, so to say, arbitrarily.
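For contrast, a compute dispatch needs no geometry, rasterization, or framebuffer at all (a sketch; computeProgram is assumed to be an already-linked compute program):
glUseProgram(computeProgram);
glDispatchCompute(1024, 1, 1);                   /* 1024 work groups in x */
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);  /* make SSBO writes visible */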
Many thanks to all the commenters who pointed me to the, by now obvious, reason why fragment shaders need a render operation.

calculate char's bbox with mupdf fz_stext_char_bbox

I have run into a strange phenomenon using MuPDF: when I use fz_stext_char_bbox to get a character's bbox, most of the bboxes are correct, but a few are wrong. I draw the bboxes as red rectangles on screen; the following picture is my capture:
The wrong bboxes always have 0 at the top left. In my code,
I get the fz_page by page number first.
Then I use fz_new_stext_sheet, fz_new_stext_page and fz_new_stext_device to create the necessary objects.
I call fz_run_page with the current matrix and the above parameters.
Then I follow the MuPDF examples and use fz_stext_char_bbox to get each bbox.
I'm not sure it's necessary to list my code here, because it may be too long. My transform matrix seems right, but I don't know why some bboxes are wrong. Did I forget something?
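For reference, the sequence described above looks roughly like this in the older (pre-1.10) MuPDF C API; exact signatures and struct member names vary between MuPDF versions, so treat this only as a sketch:
/* Sketch of the described sequence (pre-1.10 MuPDF API; fz_stext_sheet
 * was removed in later versions). ctx, page and ctm are assumed to exist. */
fz_stext_sheet *sheet = fz_new_stext_sheet(ctx);
fz_stext_page *text = fz_new_stext_page(ctx);
fz_device *dev = fz_new_stext_device(ctx, sheet, text);

fz_run_page(ctx, page, dev, &ctm, NULL);  /* ctm: the current transform matrix */

/* Walk blocks -> lines -> spans (member names vary by version),
 * then fetch each character's bbox: */
fz_rect bbox;
fz_stext_char_bbox(ctx, &bbox, span, i);  /* bbox of the i-th char in span */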

Camera will not move in OpenGL

I am having problems using a camera in OpenGL/freeGLUT. Here is my code:
http://pastebin.com/VCi3Bjq5
(For some reason, when I paste the code into the code feature on this site, it gives extremely weird output.)
As far as I can tell this should rotate the camera when arrow keys are pressed - but it does nothing. It also seems that even the initial camera position is wrong. Any clue why this is?
The display function is only called once. You need to either set an idle function with glutIdleFunc() or tell GLUT that the display function must be called again with glutPostRedisplay().
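For example, a sketch of an arrow-key handler that triggers the redraw (cameraYaw and cameraPitch are hypothetical application globals, not names from the linked code):
/* Update hypothetical camera angles, then ask GLUT to redraw. */
void specialKeys(int key, int x, int y)
{
    switch (key) {
    case GLUT_KEY_LEFT:  cameraYaw   -= 5.0f; break;
    case GLUT_KEY_RIGHT: cameraYaw   += 5.0f; break;
    case GLUT_KEY_UP:    cameraPitch += 5.0f; break;
    case GLUT_KEY_DOWN:  cameraPitch -= 5.0f; break;
    }
    glutPostRedisplay();  /* schedule another call to the display function */
}
/* In main(): glutSpecialFunc(specialKeys); */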

What is the use of glMatrixMode() and glLoadIdentity() in the OpenGL ES BlackBerry 10 2d walkthrough

The walkthrough for the BlackBerry 10 SDK using OpenGL ES uses two commands, namely:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
and later:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
I don't understand what these are used for when initializing the viewport. If I take those lines out, the program still runs perfectly and nothing changes.
I see it's got to do with the rendering matrices, but I'm not sure I understand which matrix, as this is only during initialization, before any sort of rendering.
Called in an initialization routine, those calls do nothing. The default value of both matrices is the identity, so they just set the matrices to the value they already have.
As to why they are there, I guess some people just like to explicitly set up their context so they know for sure what the current value is; maybe it's easier to remember, or they don't trust the context to have the right default value, I don't know.
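Where these calls do matter is once you actually change the matrices, for example in a typical reshape handler (a sketch; glOrthof is the OpenGL ES 1.x form of glOrtho):
void reshape(int width, int height)
{
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();                      /* wipe the old projection */
    glOrthof(0, width, 0, height, -1, 1);  /* pixel-space 2D projection */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                      /* start from a clean modelview */
}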

Compiling specific code NULLs my textures

A very strange error: if I add some specific code to my project, any textures I use contain nothing but 0, even when I'm not running any of the code that was added.
The specific code here is from the kernels of an NVIDIA CUDA sample [1], the Bicubic Texture Filtering sample, specifically the CatMulRom kernel. I've traced it down to one of the subfunctions: if I reset a variable there, everything returns to normal. It's really, really strange and I have no idea anymore what it could be. Adding and using the bicubic kernel causes no problems.
Here's the change that "fixes" the problem:
__host__ __device__
float catrom_w1(float a)
{
    a = 1; // Fix
    return 1.0f + a*a*(-2.5f + 1.5f*a);
}
If I reset the variable, it works as long as I'm not using CatMulRom; if I do try to use it, the textures are zero again. The textures in question are defined as follows:
texture<uchar1, 2, cudaReadModeNormalizedFloat> tex;
I've edited away the template, hoping it would solve the problem, but it persists.
[1] http://developer.download.nvidia.com/compute/cuda/sdk/website/samples.html
You've busted your stack.
