I'm loading a cubemap to create a skybox. Everything works: the skybox renders properly with the correct texture applied.
However, when I check my program with valgrind, it reports this error: http://pastebin.com/seqmXjyx
Line 53 of sky.c is:
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, texture.width, texture.height, 0, GL_BGR, GL_UNSIGNED_BYTE, texture.pixels);
Prototype:
void glTexImage2D(GLenum target,
                  GLint level,
                  GLint internalformat,
                  GLsizei width,
                  GLsizei height,
                  GLint border,
                  GLenum format,
                  GLenum type,
                  const GLvoid *pixels);
The texture width and height are unsigned ints (1024x1024), and the pixels come from a BMP file.
The file is definitely parsed correctly (as I said, everything renders correctly and OpenGL returns no error); I only get this invalid write of size 4 from valgrind.
(This invalid write appears every time I load a texture.)
So I read the man page, which only confused me more. This is what it says:
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than 2 + GL_MAX_TEXTURE_SIZE, or if either cannot be represented as 2^k +2 (border) for some integer value of k.
glGetError() gives me GL_NO_ERROR even though I'm passing 1024x1024, which is obviously not of the form (2^k + 2).
I also read about the border parameter, which seems to be unused in the OpenGL version I target, but could it be linked to this overwrite?
Finally, as I said, everything works properly, but I would definitely like to know where these invalid writes are coming from.
The full project: https://github.com/toss-dev/minetoss
Which man page are you quoting? There are multiple man pages available, not all mapping to the same OpenGL version.
Anyway, the idea behind the "+ 2 (border)" is that the 2 is multiplied by the value of border, which in your case is 0: the constraint becomes width = 2^k + 2*0, and 1024 = 2^10 satisfies it. So your code is just fine. border is a feature that is not supported by the latest GL versions and is therefore absent from the more recent man pages.
Now, back to your problem. The valgrind error is coming from within the GeForce GL driver, so unless you get access to its source, it's unlikely you'll get anywhere investigating it (you can try to contact the driver maintainers if you want, but you may have a hard time getting any answer).
Related
Repeating the above: Can a GLSL fragment shader run without a framebuffer and any rasterization stage?
This excellent answer gives an insight into where to start with SSBOs. It links to the OpenGL ARB extension that contains boilerplate code. The code works for me after some changes to run it as an OpenGL compute program. But I really don't get how to do it with a fragment program, and without any buffers other than the SSBO.
The code clearly shows fragment shader source without any pixel operations, only SSBO ones.
// Declarations the snippet relies on (the layout/binding values here are illustrative guesses):
layout(binding = 0, offset = 0) uniform atomic_uint fragmentCounter;
struct FragmentData { ivec2 position; vec4 color; };
layout(std430, binding = 0) buffer FragmentBuffer { FragmentData fragments[]; };
uniform uint maxFragmentCount;

in vec4 color;

void main()
{
    uint fragmentNumber = atomicCounterIncrement(fragmentCounter);
    if (fragmentNumber < maxFragmentCount) {
        fragments[fragmentNumber].position = ivec2(gl_FragCoord.xy);
        fragments[fragmentNumber].color = color;
    }
}
And later in the C program file:
// Generate, bind, and specify the data store for the atomic counter.
glGenBuffers(1, &counterBuffer);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, counterBuffer);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);
// Reset the atomic counter to zero, then draw stuff. This will record
// values into the shader storage buffer as fragments are generated.
GLuint zero = 0;
glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &zero);
glUseProgram(program);
glDrawElements(GL_TRIANGLES, ...);
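For completeness, the shader storage buffer itself (which that snippet leaves out) would, as far as I understand, be set up along these lines (a sketch; maxFragmentCount and the binding index are assumptions that must match the shader):
GLuint fragmentBuffer;
glGenBuffers(1, &fragmentBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, fragmentBuffer);
// One record per fragment: under std430, an { ivec2; vec4; } struct occupies 32 bytes
// (the vec4 member is 16-byte aligned), so reserve maxFragmentCount * 32 bytes.
glBufferData(GL_SHADER_STORAGE_BUFFER, maxFragmentCount * 32, NULL, GL_DYNAMIC_COPY);
// Binding index 0 must match the layout(binding = ...) of the buffer block in the shader.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, fragmentBuffer);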
In my setup I do not produce any pixel output through OpenGL at all, and I would like it to stay that way. Is that possible, or am I missing something?
P.S. The above setup gives me an "invalid framebuffer operation" error after glDrawElements, which is immediately followed by glFinish.
Update 21.03.2021
There are framebuffers with no attachments. The only state you have to set on one is its width and height. That is roughly the direction to head in if you want to minimize setup.
The downside is that it still requires some geometry to be fed to the rasterization stage in order to start the shader stages. On the plus side, you get geometry rasterization whether you want it or not.
If I have time, I'll leave some code here as a reminder for myself.
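A minimal sketch of that setup, as far as I understand it (GL 4.3+ / ARB_framebuffer_no_attachments; the 1024x1024 size and variable names are illustrative):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// No color/depth attachments; only a default size so rasterization has a raster area.
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH, 1024);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, 1024);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle error: attachment-less framebuffers require GL 4.3 / the extension
}
glViewport(0, 0, 1024, 1024);
// Draw geometry covering the pixels you want fragment invocations for;
// all output goes through the SSBO/atomic counter, nothing is written to a color buffer.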
Can a GLSL fragment shader run without a framebuffer and similar inconveniences?
No. Fragment shaders need the step that invokes them, and the stage that produces fragments is called rasterization.
From the khronos wiki:
A Fragment Shader is the Shader stage that will process a Fragment generated by the Rasterization into a set of colors and a single depth value.
The fragment shader is the OpenGL pipeline stage that runs after a primitive is rasterized, and rasterization needs a render operation to produce fragments. That rendering has to go somewhere; in OpenGL it goes to a framebuffer. So without a framebuffer you cannot render, and hence OpenGL cannot produce fragments.
Setup of a framebuffer can be minimized by using framebuffers with no attachments, but you still need to supply geometry and render it in order to invoke the fragment shaders.
Fragment shaders can read from and write to arbitrary SSBOs, but the usage is not like compute shaders: fragment shaders are invoked once per generated fragment, whereas compute shaders are dispatched explicitly, with whatever invocation counts you choose.
Many thanks to all the commenters who pointed me to the, by now obvious, reason why fragment shaders need a render operation.
For Variance Shadow Mapping I need a low-res, averaged version of my shadowMap. In DirectX9 I do this by rendering distances to a renderTexture and then generating mipmaps. The mipmap generation should yield results similar to a separable box-filter, but be faster since there's special hardware for it. However, I'm running into two problems:
1. According to PIX, each successive mipmap level contains the top-left quadrant of the previous level (at full resolution), instead of the full image at half resolution.
2. Framerate is absolutely obliterated when I do this: it is 50x lower than without the mipmap generation.
This is how I initialise the RenderTexture:
D3DXCreateTexture(device, 2048, 2048, 0, D3DUSAGE_RENDERTARGET, D3DFMT_G32R32F, D3DPOOL_DEFAULT, &textureID);
D3DXCreateRenderToSurface(device, 2048, 2048, D3DFMT_G32R32F, TRUE, D3DFMT_D24S8, &renderToSurface);
textureID->GetSurfaceLevel(0, &topSurface);
This is how I generate the mipmaps (this is done every frame after rendering to the shadowMap has finished):
textureID->SetAutoGenFilterType(D3DTEXF_LINEAR);
textureID->GenerateMipSubLevels();
I think setting the filterType shouldn't be done every time, but removing that doesn't make a difference.
Since this is for Variance Shadow Mapping I really need colour format D3DFMT_G32R32F, but I've also tried D3DFMT_A8R8G8B8 to see whether that made a difference, and it didn't: the mipmap is still broken in the same way and the framerate is still 1 fps.
I've also tried using D3DUSAGE_AUTOGENMIPMAP instead, but that didn't generate a mipmap at all according to PIX. That looks like this (and then I don't call GenerateMipSubLevels at all anymore):
D3DXCreateTexture(device, 2048, 2048, 0, D3DUSAGE_RENDERTARGET | D3DUSAGE_AUTOGENMIPMAP, D3DFMT_G32R32F, D3DPOOL_DEFAULT, &textureID);
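For what it's worth, I believe support for autogen on this format can be checked along these lines (a sketch; d3d9 stands for the IDirect3D9 interface the device was created from, and the display format is assumed to be D3DFMT_X8R8G8B8):
// Ask the runtime whether D3DUSAGE_AUTOGENMIPMAP works for a G32R32F render-target texture.
// D3DOK_NOAUTOGEN means the texture is usable, but the autogen flag will be silently ignored.
HRESULT hr = d3d9->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                     D3DFMT_X8R8G8B8,   // assumed display/adapter format
                                     D3DUSAGE_RENDERTARGET | D3DUSAGE_AUTOGENMIPMAP,
                                     D3DRTYPE_TEXTURE,
                                     D3DFMT_G32R32F);
if (hr == D3DOK_NOAUTOGEN) {
    // Automatic mipmap generation is not available for this format.
}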
Note that this is on Windows 10 with a Geforce 550 GPU. I've already tried reinstalling my videocard drivers entirely, just in case, but that didn't make a difference.
How can I get a proper mipmap for a renderTexture in DirectX9, and how can I get proper framerate while doing so?
In this question I'm mostly seeking advice and guidance on the overall understanding of some concepts of drawing with GTK+ and Cairo in C (IMO the information on this topic is rather scarce, and my experience is really modest).
I'm coding some pet application which captures frames from webcam and displays them on a GTK window.
My app is working, but there are some points I don't feel I've fully grasped.
Overall process:
I've got a webcam frame as an array of bytes mmapped from the webcam device into my app's process memory. So when another frame is captured, what I have is a 640*480*3-byte array which is denoted as being in RGB24 format.
After some searching, it looks like, for the purpose of displaying it in a GTK window, I need to create an object called a drawing area using gtk_drawing_area_new(), add a "draw" callback, and do the "drawing" in that designated callback (a sketch of this wiring is included after the painting calls below).
According to Cairo, "drawing" is a process of applying a "source" to a "destination". I assume that I already have a source, my webcam's mmapped pixels, but it looks like I need to use some "source" that Cairo is able to understand. I found a candidate:
cairo_surface_t* surface = cairo_image_surface_create(CAIRO_FORMAT_RGB24, 640, 480);
As I understand it, this call creates a Cairo-acceptable object and, along the way, allocates a pixel buffer in my app's memory, which I can get using:
unsigned char* surface_data = cairo_image_surface_get_data(surface);
According to the docs, this is a 640x480x4-byte buffer which, on little-endian architectures, should be filled with BGRA-ordered pixel data (the fourth byte being unused for CAIRO_FORMAT_RGB24).
Then, for EVERY captured frame, I should rearrange my original webcam pixels with something like this:
cairo_surface_flush(surface);  /* finish any pending Cairo access before touching the pixels */
for (size_t idx_src = 0, idx_dst = 0; idx_src < 640*480*3; idx_dst += 4, idx_src += 3) {
    surface_data[idx_dst]     = image[idx_src + 2]; /* B (3rd pos -> 1st pos) */
    surface_data[idx_dst + 1] = image[idx_src + 1]; /* G (no change) */
    surface_data[idx_dst + 2] = image[idx_src];     /* R (1st pos -> 3rd pos) */
    /* the 4th byte is unused padding for CAIRO_FORMAT_RGB24 */
}
cairo_surface_mark_dirty(surface); /* tell Cairo the pixel data was modified directly */
After this I should do "drawing" with:
cairo_set_source_surface(cr, surface, 0, 0);
cairo_paint(cr);
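For context, the GTK side of this wiring looks roughly like the following, as I understand it (a GTK 3 sketch; on_draw and setup_drawing_area are illustrative names, not my actual code):
#include <gtk/gtk.h>

/* "draw" callback: GTK hands us a cairo_t already clipped to the widget. */
static gboolean on_draw(GtkWidget *widget, cairo_t *cr, gpointer user_data)
{
    cairo_surface_t *frame_surface = (cairo_surface_t *)user_data; /* holds the latest frame */
    cairo_set_source_surface(cr, frame_surface, 0, 0);
    cairo_paint(cr);
    return FALSE;
}

static GtkWidget *setup_drawing_area(GtkWidget *window, cairo_surface_t *frame_surface)
{
    GtkWidget *area = gtk_drawing_area_new();
    gtk_container_add(GTK_CONTAINER(window), area);
    g_signal_connect(area, "draw", G_CALLBACK(on_draw), frame_surface);
    return area; /* call gtk_widget_queue_draw(area) whenever a new frame is ready */
}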
So, questions:
1. Is this what is supposed to be done for the task at hand, or am I missing something completely here?
2. What confuses me is that I have to rearrange my original webcam pixels for EVERY captured frame (this presumably consumes some CPU time and could be a limiting factor for capturing in HD resolutions at high frame rates). Is there some other way?
3. Let's suppose I somehow acquire pixels from the webcam in a Cairo-conforming format, e.g. 640x480x4 BGRA-formatted bytes. Is there a way to "wrap" this data in some Cairo-acceptable object to avoid the pixel-rearranging part?
4. Any other thoughts I should consider?
Thanks for your attention.
For most of your questions: Cairo only supports certain image formats. Since your data arrives in a different format, you will have to convert it. All this copying around will likely be too slow; to make this work at an acceptable speed you would need some other approach, and no, I do not have any helpful suggestions here.
An unhelpful one would be: is there some existing example for this webcam that you could look at?
Let's suppose I somehow acquire pixels from the webcam in a Cairo-conforming format, e.g. 640x480x4 BGRA-formatted bytes. Is there a way to "wrap" this data in some Cairo-acceptable object to avoid the pixel-rearranging part?
Yup. cairo_image_surface_create_for_data.
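A minimal sketch of that (assuming frame points to BGRA/BGRX pixel data that stays valid for as long as the surface is used):
int stride = cairo_format_stride_for_width(CAIRO_FORMAT_RGB24, 640);
/* Wrap the existing pixel buffer without copying it. */
cairo_surface_t *surface = cairo_image_surface_create_for_data(
        frame,               /* your pixel data, not copied by Cairo */
        CAIRO_FORMAT_RGB24,  /* 32-bit pixels, top byte unused (use CAIRO_FORMAT_ARGB32 for real alpha) */
        640, 480,
        stride);             /* bytes per row; verify it matches your buffer's layout */
cairo_set_source_surface(cr, surface, 0, 0);
cairo_paint(cr);
cairo_surface_destroy(surface);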
I am trying to create image-processing software.
I get some weird results when trying to create an unsharp mask effect.
I will attach my code here and explain what it does and where the problems are (or at least, where I think they are):
void unsharpMask(SDL_Surface* inputSurface, SDL_Surface* outputSurface)
{
    Uint32* pixels = (Uint32*)inputSurface->pixels;
    Uint32* outputPixels = (Uint32*)outputSurface->pixels;
    Uint32* blurredPixels = (Uint32*)blurredSurface->pixels;  // blurredSurface/infoSurface live elsewhere in the program
    meanBlur(infoSurface, blurredSurface);
    for (int i = 0; i < inputSurface->h; i++)
    {
        for (int j = 0; j < inputSurface->w; j++)
        {
            Uint8 rOriginal, gOriginal, bOriginal;
            Uint8 rBlurred, gBlurred, bBlurred;
            Uint32 rMask, gMask, bMask;
            Uint32 rFinal, gFinal, bFinal;
            SDL_GetRGB(blurredPixels[i * blurredSurface->w + j], blurredSurface->format, &rBlurred, &gBlurred, &bBlurred);
            SDL_GetRGB(pixels[i * inputSurface->w + j], inputSurface->format, &rOriginal, &gOriginal, &bOriginal);
            // build the mask, add it back, then clamp each channel
            rMask = rOriginal - rBlurred;
            rFinal = rOriginal + rMask;
            if (rFinal > 255) rFinal = 255;
            if (rFinal <= 0) rFinal = 0;
            gMask = gOriginal - gBlurred;
            gFinal = gOriginal + gMask;
            if (gFinal > 255) gFinal = 255;
            if (gFinal < 0) gFinal = 0;
            bMask = bOriginal - bBlurred;
            bFinal = bOriginal + bMask;
            if (bFinal > 255) bFinal = 255;
            if (bFinal < 0) bFinal = 0;
            Uint32 pixel = SDL_MapRGB(outputSurface->format, rFinal, gFinal, bFinal);
            outputPixels[i * outputSurface->w + j] = pixel;
        }
    }
}
So, as you can see, my function takes two arguments: the image source (from which pixel data is extracted) and a target (onto which the result is written). I blur the original image, then subtract the RGB values of the blurred image from the source image to get "the mask", and then add the mask back to the original image. That's it. I added some clamping to make sure everything stays in the correct range, and then write every resulting pixel to the output surface. All these surfaces have been converted to SDL_PIXELFORMAT_ARGB8888. The output surface is loaded into a texture (also SDL_PIXELFORMAT_ARGB8888) and rendered on the screen.
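In scalar terms, the per-channel arithmetic I'm aiming for is basically this (just a sketch with illustrative names, using plain signed ints so the intermediate mask can be negative):
int mask = (int)rOriginal - (int)rBlurred;   /* negative where the blurred image is brighter */
int final = (int)rOriginal + mask;           /* sharpened value before clamping */
if (final > 255) final = 255;
if (final < 0) final = 0;
Uint8 result = (Uint8)final;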
The results are pretty good in about 90% of the image and I get the effect I want; however, there are some pixels that look weird in places.
Original:
Result:
I tried to fix this in every way I could think of. I thought it was a format problem and played with the pixel bit depth, but I couldn't get a good result. What I found is that all the values > 255 are actually negative values, and I tried making those pixels completely white. That works for the sky, for example, but as you can see in my examples the dark values on the grass are also affected, so it's not a good solution.
I also get this kind of wrong pixel when I add contrast or sharpen using kernel convolution and the values are really bright or really dark.
In my opinion there may be a problem with the pixel format, but I'm not sure that's true.
Has anyone had this kind of problem before, or does anyone know a potential solution?
I want to create a texture system where I add to a texture rather than overwrite it. My texture has 32-bit integer values. What I want, for example: I have an integer pixel with bits 100, and I want to add 10 to it so it becomes 110.
My current implementation uses two textures: one holding the previous contents and one to write to. The previous texture's values are read and then rewritten with the new data. Is there a better method, because using two textures feels very inefficient?
Depending on what you mean by "appending", you could use additive blending:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
Then the output of your fragment shader will be added to the current contents of the color buffer. If you use an FBO to render into the texture, you can add directly to that texture.
You should just be careful not to create any feedback loops: your fragment shader's result must not depend on any sample of the very same texture you render to.
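A sketch of that setup (assuming tex is a texture with a normalized or floating-point format that already holds the running total; see the update below for why this does not apply to unnormalized integer formats):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);

// Each draw call now adds the fragment shader output to what is already stored in tex.
// (Do not sample tex itself in this shader, to avoid the feedback loop mentioned above.)
drawMyStuff();   // hypothetical draw routine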
UPDATE
As noted in the comment, the texture in question has a GL_RED_INTEGER format. Unfortunately, blending is only applied to floating-point color buffers (including normalized integers), never to unnormalized integer ones.
However, there is another potential approach. The rules for the "feedback loops" I mentioned before have been relaxed in recent OpenGL. The GL_ARB_texture_barrier extension explicitly allows a fragment shader to read pixels from the same texture it is writing to:
Specifically, the values of rendered fragments are undefined if any
shader stage fetches texels and the same texels are written via fragment
shader outputs, even if the reads and writes are not in the same Draw
call, unless any of the following exceptions apply:
The reads and writes are from/to disjoint sets of texels (after
accounting for texture filtering rules).
There is only a single read and write of each texel, and the read is in
the fragment shader invocation that writes the same texel (e.g. using
"texelFetch2D(sampler, ivec2(gl_FragCoord.xy), 0);").
[...]
This extension has been promoted to a core feature of OpenGL 4.5. This is quite new and not available on a lot of platforms, so it is unclear if you can use it...
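For illustration, usage would look roughly like this (a sketch; accumTex, fbo, numPasses, and drawAccumulationPass are placeholders, and the fragment shader is assumed to follow the single-read/single-write texelFetch pattern quoted above):
// The GL_R32I texture is both the FBO color attachment and bound for sampling.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, accumTex);

for (int pass = 0; pass < numPasses; ++pass) {
    drawAccumulationPass();  // each fragment writes texelFetch(old value) + increment
    glTextureBarrier();      // make the writes visible to texel fetches in the next pass
}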