glGenTextures(1, &bigTex);
glBindTexture(GL_TEXTURE_2D, bigTex);
u32 mipmapLevel = max(0, (int)greaterOrEqualPowerOfTwoExponent(max(packing.bounds.x, packing.bounds.y)) - 4);
glTexImage2D(GL_TEXTURE_2D, mipmapLevel, GL_ALPHA, texSpan.x, texSpan.y, 0, GL_ALPHA, GL_UNSIGNED_BYTE, dat);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
This seems to be producing blank (and opaque) textures. Disabling mipmaps by setting mipmapLevel to zero causes it to render as normal, but of course, then there is no mipmapping.
The mipmap level argument of glTexImage2D doesn't mean what you think it means. It specifies which mipmap level dat fills, not the number of mipmap levels you want bigTex to have. So that call defines only the fifth mipmap level of bigTex; level 0 and the other levels are never defined, the texture is mipmap-incomplete, and sampling it gives you nothing, which is why everything looks blank.
Since dat is the most detailed level (0), that's the level you define. So get rid of mipmapLevel, and do
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, texSpan.x, texSpan.y, 0, GL_ALPHA, GL_UNSIGNED_BYTE, dat);
You've seen that this works, but you think it disables mipmapping because the downscaled results don't look good. The mipmap chain exists at that point, but your GL_TEXTURE_MIN_FILTER of GL_LINEAR never samples it. Set GL_TEXTURE_MIN_FILTER to GL_LINEAR_MIPMAP_NEAREST (or GL_LINEAR_MIPMAP_LINEAR for trilinear filtering) instead of GL_LINEAR, and throw in a glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST); for good measure, and you will see the difference.
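Putting that together, a minimal sketch of the corrected setup, reusing bigTex, texSpan, and dat from the question (GL_LINEAR_MIPMAP_LINEAR would work just as well if you prefer trilinear filtering):

glBindTexture(GL_TEXTURE_2D, bigTex);
// Level 0 is the full-resolution image; glGenerateMipmap derives the rest of the chain from it.
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, texSpan.x, texSpan.y, 0, GL_ALPHA, GL_UNSIGNED_BYTE, dat);
glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
// A *_MIPMAP_* minification filter is what actually makes sampling use the mip levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);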
I'm using GLSL 2.0 for some GPGPU purposes (I know, it's not the best fit for GPGPU).
I have a reduction phase for matrix multiplication in which I have to constantly reduce the texture size (I'm using glTexImage2D). The pseudocode is something like this:
// Start reduction
for (int i = 1; i <= it; i++)
{
    glViewport(0, 0, x, y);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    x = resize(it);
    if (i % 2 != 0)
    {
        glUniform1i(tex2_multiply_initialstep, 4);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer3);
        // Resize output texture
        glActiveTexture(GL_TEXTURE5);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, x, y, 0, GL_RGBA, GL_FLOAT, NULL);
    }
    else
    {
        glUniform1i(tex2_multiply_initialstep, 5);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer2);
        // Resize output texture
        glActiveTexture(GL_TEXTURE4);
        // A LOT OF TIME!!!!!!!
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, x, y, 0, GL_RGBA, GL_FLOAT, NULL);
        // A LOT OF TIME!!!!!!!
    }
}
In some iterations, the glTexImage2D call in the else branch takes about 800 times longer than in the others. I ran a test with x and y hardcoded, and surprisingly the same iterations still take similarly long, so it has nothing to do with the value of x.
What's wrong here? Alternatives to resizing without glTexImage2D?
Thanks.
EDIT:
I know that GLSL 2.0 is a bad choice for GPGPU, but it's mandatory for my project, so I'm not able to use functions like glTexStorage2D because they are not included in the 2.0 subset.
I'm not sure if I understand exactly what you're trying to achieve, but glTexImage2D is reallocating memory each time you call it. You may want to call glTexStorage2D, and then call glTexSubImage2D.
You can check Khronos's Common Mistakes page about that. The relevant part is:
Better code would be to use texture storage functions (if you have OpenGL 4.2 or ARB_texture_storage) to allocate the texture's storage, then upload with glTexSubImage2D:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
This creates a texture with a single mipmap level, and sets all of the parameters appropriately. If you wanted to have multiple mipmaps, then you should change the 1 to the number of mipmaps you want. You will also need separate glTexSubImage2D calls to upload each mipmap.
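For instance, a four-level version might look like the sketch below, where mip1, mip2, and mip3 are hypothetical pre-downscaled images, and width/height are assumed to be powers of two so the halving is exact:

glTexStorage2D(GL_TEXTURE_2D, 4, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width,     height,     GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glTexSubImage2D(GL_TEXTURE_2D, 1, 0, 0, width / 2, height / 2, GL_BGRA, GL_UNSIGNED_BYTE, mip1);
glTexSubImage2D(GL_TEXTURE_2D, 2, 0, 0, width / 4, height / 4, GL_BGRA, GL_UNSIGNED_BYTE, mip2);
glTexSubImage2D(GL_TEXTURE_2D, 3, 0, 0, width / 8, height / 8, GL_BGRA, GL_UNSIGNED_BYTE, mip3);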
If that is unavailable, you can get a similar effect from this code:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Again, if you use more than one mipmap, you should change GL_TEXTURE_MAX_LEVEL to the number of levels you will use minus 1 (the base/max level is a closed range), then perform a glTexImage2D (note the lack of "Sub") for each mipmap.
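Since glTexStorage2D is off the table for the asker, the same idea can still be applied to the reduction loop with the second snippet: allocate each render-target texture once, at the largest size it will ever need, and stop calling glTexImage2D inside the loop. A rough sketch, with hypothetical names texA/texB for the textures attached to framebuffer2/framebuffer3 and maxX/maxY for the initial matrix size:

// Once, before the reduction loop: allocate storage a single time per target.
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, texA);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, maxX, maxY, 0, GL_RGBA, GL_FLOAT, NULL);

glActiveTexture(GL_TEXTURE5);
glBindTexture(GL_TEXTURE_2D, texB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, maxX, maxY, 0, GL_RGBA, GL_FLOAT, NULL);

// Inside the loop: only the viewport shrinks; the storage is never reallocated.
glViewport(0, 0, x, y);

The trade-off is that each pass writes to (and reads from) only a sub-region of a full-sized texture, so the shader's texture coordinates have to be scaled to that sub-region instead of covering [0, 1].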
I'm trying to draw text using FreeType2 and OpenGL in C, but it's just rendering a square. I've been following tutorials here, here, and here. This code is running on Red Hat 5.6, which only has OpenGL 1.4.
Here's the code I have. First, I load the font with FreeType. I've printed out the buffer for the character I want to the terminal, and it appears to be loading correctly. I'm omitting some error checks for clarity. Also, I've had to manually transcribe this code, so please ignore any syntax errors if they look like an obvious mistype.
FT_Library ft;
FT_Init_FreeType(&ft);
FT_Face face;
FT_New_Face(ft, fontpath, 0, &face);
FT_Set_Pixel_Sizes(face, 0, 16);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
FT_Load_Char(face, c, FT_LOAD_RENDER);
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_RGBA,
face->glyph->bitmap.width,
face->glyph->bitmap.rows,
0,
GL_INTENSITY,
GL_UNSIGNED_BYTE,
face->glyph->bitmap.buffer);
FT_Done_Face(face);
FT_Done_FreeType(ft);
As previously stated, if I loop over the rows and width of the buffer and print the values, they look good. I think the call to glTexImage2D is what I want, considering each pixel is a single intensity value.
The code to draw the symbol is below, but it doesn't seem to work. Instead it just draws a rectangle:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glColor3f(1,1,1);
GLfloat x = 0;
GLfloat y = 0;
GLfloat w = 5;
GLfloat h = 5;
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex2f(x,y);
glTexCoord2f(1,0);
glVertex2f(x+w,y);
glTexCoord2f(1,1);
glVertex2f(x+w,y+h);
glTexCoord2f(0,1);
glVertex2f(x,y+h);
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
At this point I'm at a loss. I've read the OpenGL documentation and I don't see anything obviously wrong. Can someone tell me what I'm missing?
I found the solution after a lot of trial and error. OpenGL 1.4 requires texture dimensions to be powers of two. I allocated a temporary power-of-two array and used that. Also, the GL_INTENSITY setting didn't seem to work, so I manually built an RGBA array.
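For reference, here is roughly what that workaround can look like. This is a sketch rather than the exact code used; it needs <stdlib.h>, assumes a positive pitch and FreeType's default 8-bit grayscale rendering, and hardcodes texW/texH for a 16 px face (in general they must be powers of two at or above bitmap.width and bitmap.rows):

FT_Bitmap *bmp = &face->glyph->bitmap;
int texW = 16, texH = 16;                         /* next powers of two >= bmp->width and bmp->rows */
GLubyte *rgba = calloc((size_t)texW * texH * 4, 1);
for (unsigned int r = 0; r < bmp->rows; r++) {
    for (unsigned int c = 0; c < bmp->width; c++) {
        GLubyte v = bmp->buffer[r * bmp->pitch + c];  /* 8-bit coverage from FreeType */
        GLubyte *px = &rgba[(r * texW + c) * 4];
        px[0] = px[1] = px[2] = 255;                  /* white glyph... */
        px[3] = v;                                    /* ...with coverage as alpha */
    }
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
free(rgba);

The quad's texture coordinates then need to cover only bitmap.width / texW by bitmap.rows / texH of the texture, since the rest of the padded area is empty.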
I'm trying to generate a 3x3 texture from a byte array.
My problem is that I need to add one extra color at the end of each row to get what I want, and I want to know why.
Is there a problem with the byte count that I'm ignoring, or something like that?
Here's my array:
GLubyte data[33] = {
//bottom row
0, 0, 150,
150, 10, 220,
150, 150, 56,
0, 0, 0, // must be here to get what I want, but doesn't render
//middle row
150, 150, 150,
150, 0, 21,
0, 255, 0,
0, 0, 0, // must be here to get what I want, but doesn't render
//top row
250, 150, 0,
250, 0, 150,
0, 150, 0 };
And my texture generation:
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// I think the problem is right here
glTexImage2D(GL_TEXTURE_2D, 0, 3, 3, 3, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
return texture;
And I'm applying the texture like this:
GLuint texture = genTexture(0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
glTexCoord2f(1.0f, 0.0f); glVertex2f(0.5f, -0.5f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(0.5f, 0.5f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-0.5f, 0.5f);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
glEnd();
And the result ignores those extra 24 bits on each row.
I would investigate your environment's value for the GL_UNPACK_ALIGNMENT pixel-store parameter. The manual says:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
That manual page says that the "initial value" is, indeed, 4.
Use a suitable glGetIntegerv() call to read out the value.
You need to set the GL_UNPACK_ALIGNMENT pixel storage parameter to make this work:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This controls the row alignment of the data passed to glTexImage2D(). The default value is 4. In your case, since each row consists of 9 bytes of data (3 colors of 3 bytes each), an alignment of 4 rounds the row size up to 12. This means 12 bytes are read per row, which explains why it works when you add an extra 3 bytes at the end of each row.
With the alignment set to 1, you will not need extra padding.
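Putting both answers together, a sketch of the corrected upload with no padding (same pixel values as in the question):

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are 9 bytes, not a multiple of 4 */
GLubyte data[27] = {
    /* bottom row */   0,   0, 150,   150,  10, 220,   150, 150,  56,
    /* middle row */ 150, 150, 150,   150,   0,  21,     0, 255,   0,
    /* top row    */ 250, 150,   0,   250,   0, 150,     0, 150,   0
};
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 3, 3, 0, GL_RGB, GL_UNSIGNED_BYTE, data);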
I'm having a problem rendering a texture in OpenGL: I get white instead of the texture image, but my rectangle and the whole context work fine. I'm using the stb_image loading library and OpenGL 3.3.
Before rendering:
glViewport(0, 0, ScreenWidth, ScreenHeight);
glClearColor(0.2f, 0.0f, 0.0f, 1.0f);
int Buffer;
glGenBuffers(1, &Buffer);
glBindBuffer(GL_ARRAY_BUFFER, Buffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
Loading the Texture:
unsigned char *ImageData = stbi_load(file_name, 1024, 1024, 0, 4); //not sure about the "0" parameter
int Texture;
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, x, y, 0, GL_RGB, GL_UNSIGNED_BYTE, ImageData);
stbi_image_free(ImageData);
Rendering:
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Texture);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 6 * 3); //a rectangle which is rendered fine
glDisableVertexAttribArray(0);
I am looking for a very minimalistic and simple solution.
First of all, take a second to look at the function declaration for stbi_load (...):
unsigned char *stbi_load (char const *filename,
int *x, // POINTER!!
int *y, // POINTER!!
int *comp, // POINTER!!
int req_comp)
Now, consider the arguments you have passed it:
file_name (presumably a pointer to a C string)
1024 (definitely NOT a pointer)
1024 (definitely NOT a pointer)
0 (definitely NOT a pointer)
The whole point of passing x, y and comp as pointers is so that stbi can update their values after it loads the image. This is the C equivalent of passing by reference in a language like C++ or Java, but you have opted instead to pass integer constants that the compiler then treats as addresses.
Because of the way you call this function, stbi attempts to store the width of the image at the address 1024, the height of the image at the address 1024, and the number of components at the address 0. You are actually lucky that this does not cause an access violation or other bizarre behavior at run-time; storing things at arbitrary addresses is dangerous.
Instead, consider this:
GLuint ImageX,
ImageY,
ImageComponents;
GLubyte *ImageData = stbi_load(file_name, &ImageX, &ImageY, &ImageComponents, 4);
[...]
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8,
ImageX, ImageY, 0, GL_RGBA, GL_UNSIGNED_BYTE, ImageData);
// ~~~~ You are requesting 4 components per-pixel from stbi, so this is RGBA!
It's been a while since I've done OpenGL, but aside from a few arbitrary calls, my main guess is that you're not calling glEnable(GL_TEXTURE_2D); before you make your draw call. Afterwards it is wise to call glDisable(GL_TEXTURE_2D); as well, to avoid potential future conflicts.
I'm doing shadow mapping in OpenGL; as such, I've created a framebuffer object where I render the depth of the scene from the view of a light.
glBindRenderbuffer(GL_RENDERBUFFER, color_buffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buffer);
glGenTextures(1, &color_texture);
glBindTexture(GL_TEXTURE_2D, color_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color_texture, 0);
glGenTextures(1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_texture, 0);
This then renders the scene from the light's perspective as normal, all as you would expect. The only addition is that I'm using a custom shader to also render the depth of the scene to the "color_texture".
varying vec4 screen_position;
void main()
{
screen_position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_Position = screen_position;
}
--------------
varying vec4 screen_position;
void main()
{
float depth = screen_position.z / screen_position.w;
gl_FragColor = vec4(depth, depth, depth, 1.0);
}
I can write these two textures "color_texture" and "depth_texture" to the screen using a full screen quad. Both are indeed depth maps and look correct. Now for the weird bit.
When it comes to actually rendering the shadows on objects, sampling "color_texture" for depth works fine, but when I switch to sampling "depth_texture", the depth differs by some scale and some constant.
When I added some fudge factors to the sampled depth I could sort of get it to work, but it was really difficult and it just felt horrible.
I really can't tell what is wrong; technically the two textures should be identical when sampled. I can't carry on using "color_texture" due to the limited precision of RGB. I really need to switch, but I can't for the life of me work out why the depth texture gives a different value.
I've programmed shadow mapping several times before and it isn't a concept that is new to me.
Can anyone shed any light on this?
There are some issues with your shader. First, screen_position is not a screen position; that's the clip-space vertex position. Screen space would be relative to your monitor, and therefore would change if you moved the window around. You never get screen-space positions of anything; OpenGL only goes down to window space (relative to the window's location).
Second, this:
float depth = screen_position.z / screen_position.w;
does not compute depth. It computes the normalized-device coordinate (NDC) space Z position, which ranges from [-1, 1]. Depth is a window-space value, which comes after the NDC space value is transformed with the viewport transform (specified by glViewport and glDepthRange). This transform puts the depth on the [0, 1] range.
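Concretely, with glDepthRange(n, f) (the default is n = 0, f = 1), the value that actually lands in the depth buffer is:

z_window = z_ndc * (f - n) / 2 + (f + n) / 2

so with the defaults it is simply z_ndc * 0.5 + 0.5.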
Third, and most importantly, you're doing all of this manually; let GLSL do it for you:
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
--------------
void main()
{
gl_FragColor = vec4(gl_FragCoord.zzz, 1.0);
}
See? Much better.
Now:
When it comes to actually rendering the shadows on objects, sampling "color_texture" for depth works fine, but when I switch to sampling "depth_texture", the depth differs by some scale and some constant.
Of course it is. That's because you believed that the depth was the NDC Z value. OpenGL writes the actual window-space Z value as the depth. So you need to work in window space. OpenGL is nice enough to provide a built-in uniform, gl_DepthRange, to your fragment shader to make this easier. You can use it to transform NDC-space Z values into window-space Z values.
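For example, to compare an NDC-space Z value against what is stored in "depth_texture", the fragment shader could convert it with the built-in gl_DepthRange uniform. A minimal sketch (the helper name is mine, not part of any API):

float ndcToWindowDepth(float ndcZ)
{
    // gl_DepthRange.diff is (far - near); this is the standard viewport depth transform.
    return (gl_DepthRange.diff * ndcZ + gl_DepthRange.near + gl_DepthRange.far) * 0.5;
}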