Sampling unsigned integers from a 1D texture using integer texture coordinates

I want to pass a big array of unsigned short tuples (rect geometries) to my fragment shader and sample them as-is using integer texture coordinates. To do this I'm trying a 1D texture as follows, but I get only blank (0) values.
Texture creation and initialization:
GLushort data[128][2];
// omitted array initialization
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_1D, tex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage1D(
    GL_TEXTURE_1D,
    0,
    GL_RG16UI,
    128,
    0,
    GL_RG_INTEGER,
    GL_UNSIGNED_SHORT,
    data
);
Texture passing to shader:
int tex_unit = 0;
glActiveTexture(GL_TEXTURE0 + tex_unit);
glBindTexture(GL_TEXTURE_1D, tex);
glUniform1iv(loc, 1, &tex_unit);
Debug fragment shader:
#version 330 core
out vec4 out_color;
uniform usampler1D tex;
void main()
{
    uvec4 rect = texelFetch(tex, 70, 0);
    uint w = rect.r;
    uint h = rect.g;
    out_color = w > 0u ? vec4(1) : vec4(0);
}
Things that definitely work: the data array contains non-zero values, the texture image unit is set up, and the sampler uniform is initialized.
OpenGL 4.1, OSX 10.11.6 "El Capitan"

After some googling and trial and error, I discovered this post, which states that with integer texture formats (for example, the GL_RG16UI internal format) GL_LINEAR filtering cannot be used and only GL_NEAREST applies. With that, everything worked out with the following texture creation code:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_1D, tex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage1D(
    GL_TEXTURE_1D,
    0,
    GL_RG16UI,
    128,
    0,
    GL_RG_INTEGER,
    GL_UNSIGNED_SHORT,
    data
);
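As an aside (not part of the original fix): on OpenGL 3.3+ the same GL_NEAREST requirement can also be enforced with a sampler object, which overrides the filter state of whatever texture is bound to that unit. A minimal sketch:
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindSampler(tex_unit, sampler); // tex_unit = 0, as in the question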

Related

glTexImage2D sporadically takes 800 times longer to execute

I'm using OpenGL 2.0 with GLSL for some GPGPU purposes (I know, not the best fit for GPGPU).
I have a reduction phase for matrix multiplication in which I have to constantly reduce the texture size (I'm using glTexImage2D). The pseudocode is something like this:
// Start reduction
for (int i = 1; i <= it; i++)
{
    glViewport(0, 0, x, y);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    x = resize(it);
    if (i % 2 != 0)
    {
        glUniform1i(tex2_multiply_initialstep, 4);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer3);
        // Resize output texture
        glActiveTexture(GL_TEXTURE5);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, x, y, 0, GL_RGBA, GL_FLOAT, NULL);
    }
    else
    {
        glUniform1i(tex2_multiply_initialstep, 5);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer2);
        // Resize output texture
        glActiveTexture(GL_TEXTURE4);
        // A LOT OF TIME!!!!!!!
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, x, y, 0, GL_RGBA, GL_FLOAT, NULL);
        // A LOT OF TIME!!!!!!!
    }
}
In some iterations the glTexImage2D call in the else branch takes 800 times longer than in the others. As a test I hardcoded x and y, but surprisingly it still takes similarly long in the same iterations, so it has nothing to do with the value of x.
What's wrong here? Are there alternatives to resizing that avoid glTexImage2D?
Thanks.
EDIT:
I know that OpenGL 2.0 is a bad choice for GPGPU, but it's mandatory for my project, so I'm not able to use functions like glTexStorage2D; they are not part of the 2.0 feature set.
I'm not sure if I understand exactly what you're trying to achieve, but glTexImage2D is reallocating memory each time you call it. You may want to call glTexStorage2D, and then call glTexSubImage2D.
You can check Khronos's Common Mistakes page about that. The relevant part is:
Better code would be to use texture storage functions (if you have
OpenGL 4.2 or ARB_texture_storage) to allocate the texture's storage,
then upload with glTexSubImage2D:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
This creates a texture with a single mipmap level, and sets all of the parameters appropriately. If you wanted to have multiple mipmaps, then you should change the 1 to the number of mipmaps you want. You will also need separate glTexSubImage2D calls to upload each mipmap.
If that is unavailable, you can get a similar effect from this code:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Again, if you use more than one mipmap, you should change GL_TEXTURE_MAX_LEVEL to state how many you will use (minus 1; the base/max levels form a closed range), then perform a glTexImage2D call (note the lack of "Sub") for each mipmap.
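Since the question is restricted to OpenGL 2.0, another option in the same spirit is to allocate the two ping-pong render targets once, at the largest size the reduction will ever need, and never call glTexImage2D inside the loop; each pass then only changes the viewport. A rough sketch adapted from the question's pseudocode (max_x and max_y are assumed to be the initial, largest dimensions):
// One-time setup: allocate both render targets at full size
glActiveTexture(GL_TEXTURE4);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, max_x, max_y, 0, GL_RGBA, GL_FLOAT, NULL);
glActiveTexture(GL_TEXTURE5);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, max_x, max_y, 0, GL_RGBA, GL_FLOAT, NULL);
// Reduction loop: no reallocation, only the viewport shrinks
for (int i = 1; i <= it; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, (i % 2 != 0) ? framebuffer3 : framebuffer2);
    glUniform1i(tex2_multiply_initialstep, (i % 2 != 0) ? 4 : 5);
    glViewport(0, 0, x, y);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    x = resize(it); // pseudocode, as in the question
}
The shader then has to read and write only the active x-by-y sub-region (for example by scaling its texture coordinates), since the textures keep their full allocated size.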

OpenGL texture not drawing?

I'm trying to draw text using FreeType2 and OpenGL in C, but it's just rendering a square. I've been following tutorials here, here, and here. This code is running on Red Hat 5.6, which only has OpenGL 1.4.
Here's the code I have. First, I load the font with FreeType. I've printed out the buffer for the character I want to the terminal, and it appears to be loading correctly. I'm omitting some error checks for clarity. Also, I've had to manually transcribe this code, so please ignore any syntax errors if they look like an obvious mistype.
FT_Library ft;
FT_Init_FreeType(&ft);
FT_Face face;
FT_New_Face(ft, fontpath, 0, &face);
FT_Set_Pixel_Sizes(face, 0, 16);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
FT_Load_Char(face, c, FT_LOAD_RENDER);
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGBA,
             face->glyph->bitmap.width,
             face->glyph->bitmap.rows,
             0,
             GL_INTENSITY,
             GL_UNSIGNED_BYTE,
             face->glyph->bitmap.buffer);
FT_Done_Face(face);
FT_Done_FreeType(ft);
As previously stated, if I loop over the rows and width of the buffer and print the values, they look good. I think the call to glTexImage2D is what I want, considering each pixel is a single intensity value.
The code to draw the symbol is below, but it doesn't seem to work. Instead it just draws a rectangle:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glColor3f(1,1,1);
GLfloat x = 0;
GLfloat y = 0;
GLfloat w = 5;
GLfloat h = 5;
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex2f(x,y);
glTexCoord2f(1,0);
glVertex2f(x+w,y);
glTexCoord2f(1,1);
glVertex2f(x+w,y+h);
glTexCoord2f(0,1);
glVertex2f(x,y+h);
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
At this point I'm at a loss. I've read the OpenGL documentation and I don't see anything obviously wrong. Can someone tell me what I'm missing?
I found the solution after a lot of trial and error. OpenGL 1.4 requires texture dimensions to be powers of two, so I allocated a temporary power-of-two array and used that. Also, the GL_INTENSITY setting didn't seem to work, so I manually built an RGBA array.
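A minimal sketch of that conversion (assuming <stdlib.h>; next_pow2() and the variable names are illustrative, not taken from the original code):
int tex_w = next_pow2(face->glyph->bitmap.width);
int tex_h = next_pow2(face->glyph->bitmap.rows);
GLubyte *rgba = calloc((size_t)tex_w * tex_h * 4, 1);
for (unsigned int row = 0; row < face->glyph->bitmap.rows; row++) {
    for (unsigned int col = 0; col < face->glyph->bitmap.width; col++) {
        // FreeType stores one 8-bit coverage value per pixel; row stride is bitmap.pitch
        GLubyte v = face->glyph->bitmap.buffer[row * face->glyph->bitmap.pitch + col];
        GLubyte *px = &rgba[(row * tex_w + col) * 4];
        px[0] = px[1] = px[2] = 255; // white text
        px[3] = v;                   // coverage as alpha, to work with the GL_SRC_ALPHA blending above
    }
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_w, tex_h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba);
free(rgba);
With the padded power-of-two texture, the quad's texture coordinates also need to be scaled down to bitmap.width / tex_w and bitmap.rows / tex_h instead of using 1.0.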

Texturing GL_QUADS with (3x3) color array ignores the 4th color in each row

I'm trying to generate a 3x3 texture from a byte array.
My problem is that I need to add one extra color at the end of each row to get what I want, and I want to know why.
Is there a problem with the byte count that I'm overlooking, or something like that?
Here's my array:
GLubyte data[33] = {
    // bottom row
    0, 0, 150,
    150, 10, 220,
    150, 150, 56,
    0, 0, 0, // must be here to get what I want, but is never rendered
    // middle row
    150, 150, 150,
    150, 0, 21,
    0, 255, 0,
    0, 0, 0, // must be here to get what I want, but is never rendered
    // top row
    250, 150, 0,
    250, 0, 150,
    0, 150, 0 };
And my texture generation:
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// I think the problem is right here
glTexImage2D(GL_TEXTURE_2D, 0, 3, 3, 3, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
return texture;
And I'm applying the texture like this:
GLuint texture = genTexture(0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(1.0f, 0.0f); glVertex2f(0.5f, -0.5f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(0.5f, 0.5f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-0.5f, 0.5f);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
glEnd();
And the result ignores those extra 24 bits (3 bytes) in each row:
I would investigate your environment's value for the GL_UNPACK_ALIGNMENT pixel-store parameter. The manual says:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
That manual page says that the "initial value" is, indeed, 4.
Use a suitable glGetInteger() call to read out the value.
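For example (a minimal sketch, assuming <stdio.h> for the printout):
GLint unpack_alignment = 0;
glGetIntegerv(GL_UNPACK_ALIGNMENT, &unpack_alignment);
printf("GL_UNPACK_ALIGNMENT = %d\n", unpack_alignment); // 4 unless something changed it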
You need to set the GL_UNPACK_ALIGNMENT pixel storage parameter to make this work:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This controls the alignment of rows in the data passed to glTexImage2D(). The default value is 4. In your case, since each row consists of 9 bytes of data (3 colors of 3 bytes each), an alignment of 4 rounds the row size up to 12. This means that 12 bytes are read per row, which explains why it works when you add an extra 3 bytes at the end of each row.
With the alignment set to 1, you will not need extra padding.
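For illustration, a sketch of the upload with the alignment set and the padding removed, reusing the question's color values:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed, 9 bytes each
GLubyte data[27] = {
    // bottom row
    0, 0, 150,   150, 10, 220,   150, 150, 56,
    // middle row
    150, 150, 150,   150, 0, 21,   0, 255, 0,
    // top row
    250, 150, 0,   250, 0, 150,   0, 150, 0 };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 3, 3, 0, GL_RGB, GL_UNSIGNED_BYTE, data);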

OpenGL texture rendering

I'm having a problem rendering a texture in OpenGL. I'm getting a white color instead of the texture image, but my rectangle and the whole context work fine. I'm using the stb_image loading library and OpenGL 3.3.
Before rendering:
glViewport(0, 0, ScreenWidth, ScreenHeight);
glClearColor(0.2f, 0.0f, 0.0f, 1.0f);
int Buffer;
glGenBuffers(1, &Buffer);
glBindBuffer(GL_ARRAY_BUFFER, Buffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
Loading the Texture:
unsigned char *ImageData = stbi_load(file_name, 1024, 1024, 0, 4); //not sure about the "0" parameter
int Texture;
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, x, y, 0, GL_RGB, GL_UNSIGNED_BYTE, ImageData);
stbi_image_free(ImageData);
Rendering:
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Texture);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 6 * 3); //a rectangle which is rendered fine
glDisableVertexAttribArray(0);
I am looking for a very minimalistic and simple solution.
First of all, take a second to look at the function declaration for stbi_load (...):
unsigned char *stbi_load(char const *filename,
                         int *x,    // POINTER!!
                         int *y,    // POINTER!!
                         int *comp, // POINTER!!
                         int req_comp);
Now, consider the arguments you have passed it:
file_name (presumably a pointer to a C string)
1024 (definitely NOT a pointer)
1024 (definitely NOT a pointer)
0 (definitely NOT a pointer)
The whole point of passing x, y and comp as pointers is so that stbi can update their values after it loads the image. This is the C equivalent of passing by reference in a language like C++ or Java, but you have opted instead to pass integer constants that the compiler then treats as addresses.
Because of the way you call this function, stbi attempts to store the width of the image at address 1024, the height of the image at address 1024, and the number of components at address 0. You are actually lucky that this does not cause an access violation or other bizarre behavior at run-time; storing things at arbitrary addresses is dangerous.
Instead, consider this:
int ImageX,
    ImageY,
    ImageComponents; // stbi_load expects plain int pointers, so declare these as int
GLubyte *ImageData = stbi_load(file_name, &ImageX, &ImageY, &ImageComponents, 4);
[...]
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
             ImageX, ImageY, 0, GL_RGBA, GL_UNSIGNED_BYTE, ImageData);
          // ~~~~~~~ You are requesting 4 components per pixel from stbi, so this is RGBA!
It's been a while since I've done OpenGL, but aside from a few questionable calls, my main guess is that you're not calling glEnable(GL_TEXTURE_2D); before you issue your draw call. Afterwards it is wise to call glDisable(GL_TEXTURE_2D); as well, to avoid potential future conflicts.

OpenGL - Frame Buffer Depth Texture differs from Color Depth Texture

I'm doing shadow mapping in OpenGL, so I've created a framebuffer object in which I render the depth of the scene from the view of a light.
glBindRenderbuffer(GL_RENDERBUFFER, color_buffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buffer);
glGenTextures(1, &color_texture);
glBindTexture(GL_TEXTURE_2D, color_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_texture, 0);
glGenTextures(1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_texture, 0);
This then renders the scene from the light's perspective as normal, all as you would expect. The only addition is that I'm using a custom shader to also write the scene's depth to "color_texture".
varying vec4 screen_position;
void main()
{
    screen_position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = screen_position;
}
--------------
varying vec4 screen_position;
void main()
{
    float depth = screen_position.z / screen_position.w;
    gl_FragColor = vec4(depth, depth, depth, 1.0);
}
I can write these two textures "color_texture" and "depth_texture" to the screen using a full screen quad. Both are indeed depth maps and look correct. Now for the weird bit.
When it comes to actually rendering the shadows on objects, sampling the "color_texture" depth works fine, but when I switch to sampling "depth_texture", the depth differs by some scale and some constant.
When I added some fudge factor numbers to this sampled depth I could sort of get it to work, but it was really difficult and it just felt horrible.
I really can't tell what is wrong; technically the two textures should be identical when sampled. I can't carry on using "color_texture" because of the limited precision of RGB. I really need to switch, but I can't for the life of me work out why the depth texture gives a different value.
I've programmed shadow mapping several times before and it isn't a concept that is new to me.
Can anyone shed any light on this?
There are some issues with your shader. First, screen_position is not a screen position; that's the clip-space vertex position. Screen space would be relative to your monitor, and therefore would change if you moved the window around. You never get screen-space positions of anything; OpenGL only goes down to window space (relative to the window's location).
Second, this:
float depth = screen_position.z / screen_position.w;
does not compute depth. It computes the normalized-device coordinate (NDC) space Z position, which ranges from [-1, 1]. Depth is a window-space value, which comes after the NDC space value is transformed with the viewport transform (specified by glViewport and glDepthRange). This transform puts the depth on the [0, 1] range.
Third, and most importantly, you're doing this all manually; let GLSL do it for you:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
--------------
void main()
{
    gl_FragColor = vec4(gl_FragCoord.zzz, 1.0);
}
See? Much better.
Now:
When it comes to actually rendering the shadows on objects, sampling the "color_texture" depth works fine, but when I switch to sampling "depth_texture", the depth differs by some scale and some constant.
Of course it is. That's because you believed that the depth was the NDC Z value. OpenGL writes the actual window-space Z value as the depth, so you need to work in window space. OpenGL is nice enough to provide a built-in uniform, gl_DepthRange, to your fragment shader to make this easier: you can use it to transform NDC-space Z values into window-space Z values.
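For example, a minimal sketch of that transform in the fragment shader, using the built-in gl_DepthRange uniform:
// Maps an NDC-space Z in [-1, 1] to window-space depth;
// with the default glDepthRange(0, 1) this lands in [0, 1].
float ndcToWindowDepth(float ndcZ)
{
    return (gl_DepthRange.diff * ndcZ + gl_DepthRange.near + gl_DepthRange.far) * 0.5;
}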
