OpenGL Error: 1282 when setting a uniform - C

I have been trying to learn OpenGL for a while and to create a simple 3D game, but upon trying to set uniforms nothing works anymore. I am using a fairly old Mac, but I don't think that has anything to do with it.
This is my code for setting the uniform:
Texture texture = createTexture("./res/images/atlas.png");
bindTexture(&texture, 1);
setUniform1i(&shader, "u_Texture", 1);
The code for setUniform1i is:
void setUniform1i(const Shader *shader, char *name, int i1)
{
    int loc = getUniformLocation(shader, name);
    bindShader(shader);
    glUniform1i(loc, i1);
}
This is my fragment shader:
#version 120
uniform sampler2D u_Texture;
varying vec2 v_texCoor;
void main()
{
    vec4 texColor = texture2D(u_Texture, v_texCoor);
    gl_FragColor = texColor;
}
One thing to note is that I can set a model view projection matrix uniform in my vertex shader just fine, so I have no idea why setting another uniform would result in an error.

Is your shader bound correctly? I had the same problem, but it was resolved by binding the program first: glUseProgram(program);
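A minimal sketch of that ordering, assuming program is the linked GLuint handle that the question's Shader struct wraps:
// Bind the program first; glUniform* always targets the currently bound
// program, and calling it with no program bound raises GL_INVALID_OPERATION (1282).
glUseProgram(program);
GLint loc = glGetUniformLocation(program, "u_Texture");
glUniform1i(loc, 1); // the texture was bound to texture unit 1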

Related

shadertoy GLSL - creating a large matrix and displaying it on the screen

I have a palette of 64 colors. I need to create a 512x512 table, write the palette color indices into it, and then display everything on the screen. The problem is that GLSL does not support two-dimensional arrays, and it is impossible to save a table between frames.
The closest thing you can do is create a separate buffer and only use part of it.
Here's an example Buffer A:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    if(any(greaterThan(fragCoord, vec2(512)))) return;
    fragCoord -= .5;
    fragColor = vec4(mod(fragCoord.x,2.), 0, 0, 1); // generate a color at point.
}
Then, in the main shader, you can access a pixel with:
// vec2 p; // p.x and p.y in range(0, 512)
texture(iChannel0, p/iResolution.xy);
If you are using OpenGL instead of Shadertoy, you can use a 2D texture instead.
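For example, a rough OpenGL sketch (names assumed, not from the question) that stores the 512x512 table of palette indices in a single-channel integer texture, which the fragment shader can read with texelFetch through a usampler2D:
// Upload the 512x512 index table (values 0..63) once as an unsigned
// integer texture; integer textures must use GL_NEAREST filtering.
GLuint indexTex;
unsigned char indices[512 * 512]; // filled with palette indices elsewhere
glGenTextures(1, &indexTex);
glBindTexture(GL_TEXTURE_2D, indexTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, 512, 512, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, indices);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);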

gl_Position is not accessible in this profile?

When trying to compile GLSL shaders in C/C++ using GLFW/GLEW I get the following error:
0(12) : error C5052: gl_Position is not accessible in this profile
I followed a tutorial from learnopengl.com. The code runs and displays an empty white square, with the above error message printed to the command line. Any ideas what is happening and how I might fix it?
The fragment shader is:
#version 410
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
layout (location = 2) in vec2 aTexCoord;
out vec3 ourColor;
out vec2 TexCoord;
void main()
{
    gl_Position = vec4(aPos, 1.0);
    ourColor = aColor;
    TexCoord = aTexCoord;
}
And the vertex shader is:
#version 410
out vec4 FragColor;
in vec3 ourColor;
in vec2 TexCoord;
uniform sampler2D ourTexture;
void main()
{
    FragColor = texture(ourTexture, TexCoord);
}
If you would like to see the rest of the code please refer to the tutorial link above.
Looks like you tried to load the fragment shader as the vertex shader and vice versa. gl_Position can only be set from within the vertex shader, since it's a per-vertex output. Loading the shaders in the correct order should get rid of that problem.
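A quick sketch of what the corrected loading looks like (vertexSrc and fragmentSrc stand for the two GLSL strings above):
// The type passed to glCreateShader must match the source string:
// the shader that writes gl_Position is the vertex shader,
// the one that writes FragColor is the fragment shader.
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertexSrc, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragmentSrc, NULL);
glCompileShader(fs);
GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);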

SceneKit shader to overlay color on a node based on world coordinates

I have a simple shader which lets me recolor a node based on its local axes (x > 0 -> green), but how do I make it work based on world coordinates?
(Preferably purely shader-based, not by converting points on the scene side and passing them to the shader.)
Shader demo
vec4 pos = u_inverseModelTransform * u_inverseViewTransform * vec4(_surface.position, 1.0);
if (pos.x > 0.0) {
_output.color.rgb = vec3(0.0, 0.8, 0.0);
}
You don't want to multiply by u_inverseModelTransform, which moves you back from world space to object space:
vec4 pos = u_inverseViewTransform * vec4(_surface.position, 1.0);
As a corollary to mnuages' correct answer: I finally figured out that if you're in Metal Shading Language rather than GLSL, you should use scn_frame.inverseViewTransform, not u_inverseViewTransform.
This is tricky because SceneKit will automatically try to cross-compile GLSL shaderModifiers into MetalSL, so it's sometimes hard to know which one you're using. (E.g., you can have your SceneKit view be Metal-backed instead of OpenGL- or GLES-backed, still write your shader modifiers in GLSL, and SceneKit will work.)

glGetUniformLocation returns -1 for active uniform

I have this code calling glGetUniformLocation, but it's returning -1 even though I'm using the uniform in my vertex shader. I don't get any errors from glGetError, glGetProgramInfoLog, or glGetShaderInfoLog, and the shaders/program are all created successfully. I also only call this after the program is compiled and linked.
int projectionUniform = glGetUniformLocation( shaderProgram, "dfProjection" );
This is the vertex shader:
#version 410
uniform float dfRealTime;
uniform float dfGameTime;
uniform mat4 dfProjection;
uniform mat4 dfModelView;
layout(location = 0) in vec3 vertPosition;
layout(location = 1) in vec4 vertColor;
smooth out vec4 color;
out vec4 position;
void main() {
    color = vertColor;
    position = (dfModelView * dfProjection) * vec4(vertPosition, 1.0);
}
This is the fragment shader:
smooth in vec4 color;
out vec4 fragColor;
void main() {
    fragColor = color;
}
There are three possibilities:
You have misspelled dfProjection in the glGetUniformLocation call, but that doesn't seem to be the case.
You are not passing the correct program to glGetUniformLocation (the one you actually bind with glUseProgram).
Or you are not using position in your fragment shader, which means dfProjection is not really active.
Another thing: from the code, it seems you are passing the shader handle to glGetUniformLocation; you should pass the linked program handle instead.
After your edit: you are not using position in your fragment shader. Something like this would keep it active:
smooth in vec4 color;
out vec4 fragColor;
in vec4 position;
void main() {
    // do sth with position here
    fragColor = color*position;
}
Keep in mind that you still need to write gl_Position in order for the pipeline to know the final vertex position; here I was only answering the question of why a uniform variable is not being detected as active.
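If in doubt, you can also list exactly which uniforms survived linking; a short sketch using only standard calls (program is the linked program handle):
// Any uniform the linker optimized away (e.g. dfProjection when
// 'position' is unused) will simply not show up in this list.
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i) {
    char name[128];
    GLint size;
    GLenum type;
    glGetActiveUniform(program, (GLuint)i, sizeof(name), NULL, &size, &type, name);
    printf("active uniform %d: %s\n", i, name);
}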

How to correctly make a depth cubemap for shadow mapping?

I have written code to render my scene objects to a cubemap texture of format GL_DEPTH_COMPONENT and then use this texture in a shader to determine whether a fragment is being directly lit or not, for shadowing purposes. However, my cubemap appears to come out as black. I suppose I am not setting up my FBO or rendering context sufficiently, but fail to see what is missing.
Using GL 3.3 in compatibility profile.
This is my code for creating the FBO and cubemap texture:
glGenFramebuffers(1, &fboShadow);
glGenTextures(1, &texShadow);
glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
for (int sideId = 0; sideId < 6; sideId++) {
    // Make sure GL knows what this is going to be.
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
// Don't interpolate depth value sampling. Between occluder and occludee there will
// be an instant jump in depth value, not a linear transition.
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
My full rendering function then looks like so:
void render() {
    // --- MAKE DEPTH CUBEMAP ---
    // Set shader program for depth testing
    glUseProgram(progShadow);
    // Get the light for which we want to generate a depth cubemap
    PointLight p = pointLights.at(0);
    // Bind our framebuffer for drawing; clean it up
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboShadow);
    glClear(GL_DEPTH_BUFFER_BIT);
    // Make 1:1-ratio, 90-degree view frustum for a 512x512 texture.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.0, 1, 16.0, 16384.0);
    glViewport(0, 0, 512, 512);
    glMatrixMode(GL_MODELVIEW);
    // Set modelview and projection matrix uniforms
    setShadowUniforms();
    // Need 6 renderpasses to complete each side of the cubemap
    for (int sideId = 0; sideId < 6; sideId++) {
        // Attach depth attachment of current framebuffer to level 0 of currently relevant target of texShadow cubemap texture.
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, texShadow, 0);
        // All is fine.
        GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE) {
            std::cout << "Shadow FBO is broken with code " << status << std::endl;
        }
        // Push modelview matrix stack because we need to rotate and move camera every time
        glPushMatrix();
        // This does a switch-case with glRotatefs
        rotateCameraForSide(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId);
        // Render from light's position.
        glTranslatef(-p.getX(), -p.getY(), -p.getZ());
        // Render all objects.
        for (ObjectList::iterator it = objectList.begin(); it != objectList.end(); it++) {
            (*it)->render();
        }
        glPopMatrix();
    }
    // --- RENDER SCENE ---
    // Bind default framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // Setup proper projection matrix with 70 degree vertical FOV and ratio according to window frame dimensions.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(70.0, ((float)vpWidth) / ((float)vpHeight), 16.0, 16384.0);
    glViewport(0, 0, vpWidth, vpHeight);
    glUseProgram(prog);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    applyCameraPerspective();
    // My PointLight class has both a position (world space) and renderPosition (camera space) Vec3f variable;
    // The lights' renderPositions get transformed with the modelview matrix by this.
    updateLights();
    // And here, among other things, the lights' camera space coordinates go to the shader.
    setUniforms();
    // Render all objects
    for (ObjectList::iterator it = objectList.begin(); it != objectList.end(); it++) {
        // Object texture goes to texture unit 0
        GLuint usedTexture = glTextureList.find((*it)->getTextureName())->second;
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, usedTexture);
        glUniform1i(textureLoc, 0);
        // Cubemap goes to texture unit 1
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
        glUniform1i(shadowLoc, 1);
        (*it)->render();
    }
    glPopMatrix();
    frameCount++;
}
The shader program for rendering depth values ("progShadow") is simple.
Vertex shader:
#version 330
in vec3 position;
uniform mat4 modelViewMatrix, projectionMatrix;
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
}
Fragment shader:
#version 330
void main() {
    // OpenGL sets the depth anyway. Nothing to do here.
}
The shader program for final rendering ("prog") has a fragment shader which looks something like this:
#version 330
#define MAX_LIGHTS 8
in vec3 fragPosition;
in vec3 fragNormal;
in vec2 fragTexCoordinates;
out vec4 fragColor;
uniform sampler2D colorTexture;
uniform samplerCubeShadow shadowCube;
uniform uint activeLightCount;
struct Light {
    vec3 position;
    vec3 diffuse;
    float cAtt;
    float lAtt;
    float qAtt;
};
// Index 0 to (activeLightCount - 1) need to be the active lights.
uniform Light lights[MAX_LIGHTS];
void main() {
    vec3 lightColor = vec3(0, 0, 0);
    vec3 normalFragmentToLight[MAX_LIGHTS];
    float distFragmentToLight[MAX_LIGHTS];
    float distEyeToFragment = length(fragPosition);
    // Accumulate all light in "lightColor" variable
    for (uint i = uint(0); i < activeLightCount; i++) {
        normalFragmentToLight[i] = normalize(lights[i].position - fragPosition);
        distFragmentToLight[i] = distance(fragPosition, lights[i].position);
        float attenuation = (lights[i].cAtt
            + lights[i].lAtt * distFragmentToLight[i]
            + lights[i].qAtt * pow(distFragmentToLight[i], 2.0));
        float dotProduct = dot(fragNormal, normalFragmentToLight[i]);
        lightColor += lights[i].diffuse * max(dotProduct, 0.0) / attenuation;
    }
    // Shadow mapping only for light at index 0 for now.
    float distOccluderToLight = texture(shadowCube, vec4(normalFragmentToLight[0], 1));
    // My geometries use inches as units, hence a large bias of 1
    bool isLit = (distOccluderToLight + 1) < distFragmentToLight[0];
    fragColor = texture2D(colorTexture, fragTexCoordinates) * vec4(lightColor, 1.0f) * int(isLit);
}
I have verified that all uniform location variables are set to a proper value (i.e. not -1).
It might be worth noting that I make no call to glBindFragDataLocation() for "progShadow" prior to linking it, because no color value should be written by that shader.
See anything obviously wrong here?
For shadow maps, depth buffer internal format is pretty important (too small and things look awful, too large and you eat memory bandwidth). You should use a sized format (e.g. GL_DEPTH_COMPONENT24) to guarantee a certain size, otherwise the implementation will pick whatever it wants. As for debugging a cubemap shadow map, the easiest thing to do is actually to draw the scene into each cube face and output color instead of depth. Then, where you currently try to use the cubemap to sample depth, write the sampled color to fragColor instead. You can rule out view issues immediately this way.
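Applied to the allocation loop from the question, that would look roughly like this (a sketch keeping the 512x512 faces):
// Request an explicit 24-bit depth format instead of the unsized
// GL_DEPTH_COMPONENT, so the implementation cannot pick a smaller one.
for (int sideId = 0; sideId < 6; sideId++) {
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + sideId, 0, GL_DEPTH_COMPONENT24, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}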
There is another, much more serious issue, however. You are using samplerCubeShadow, but you have not set GL_TEXTURE_COMPARE_MODE for your cube map. Attempting to sample from a depth texture with this sampler type and without GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE will produce undefined results. Even if you did have this mode set properly, the 4th component of the texture coordinates is used as the depth comparison reference -- a constant value of 1.0 is NOT what you want.
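Setting that state on the cube map from the question would look something like this (a sketch; it belongs next to the other glTexParameteri calls):
// Required for samplerCubeShadow: compare the reference depth
// (the 4th texture coordinate) against the stored depth value.
glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);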
Likewise, the depth buffer does not store linear distance, so you cannot directly compare the distance you computed here:
distFragmentToLight[i] = distance(fragPosition, lights[i].position);
Instead, something like this will be necessary:
float VectorToDepth (vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    // Replace f and n with the far and near plane values you used when
    // you drew your cube map.
    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
float LightDepth = VectorToDepth (fragPosition - lights [i].position);
float depth_compare = texture(shadowCube,vec4(normalFragmentToLight[0],LightDepth));
* Code for VectorToDepth(vec3 Vec) borrowed from Omnidirectional shadow mapping with depth cubemap
Now depth_compare will be a value between 0.0 (completely in shadow) and 1.0 (completely out of shadow). If you have linear texture filtering enabled, the hardware will sample the depth at 4 points and may give you a form of 2x2 PCF filtering. If you have nearest texture filtering, then it will either be 1.0 or 0.0.
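To get that hardware 2x2 PCF, the cube map's filters would need to be GL_LINEAR instead of the GL_NEAREST used in the question (a sketch; with GL_TEXTURE_COMPARE_MODE enabled, filtering averages the comparison results rather than the raw depths):
glBindTexture(GL_TEXTURE_CUBE_MAP, texShadow);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);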
