I was wondering what the best way to invert the color pixels in the frame buffer is. I know it's possible to do with glReadPixels() and glDrawPixels(), but the performance hit of those calls is pretty big.
Basically, what I'm trying to do is have an inverted color cross-hair which is always visible no matter what's behind it. For instance, I'd have an arbitrary alpha mask bitmap or texture, have it render without depth test after the scene is drawn, and all the frame buffer pixels under the masked (full alpha) pixels of the texture would be inverted.
I've been trying to do this with a texture, but I'm getting some strange results, and I still find all the blending options confusing.
Give something like this a try:
glEnable(GL_COLOR_LOGIC_OP);
glLogicOp(GL_XOR);
// render geometry
glDisable(GL_COLOR_LOGIC_OP);
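Since XOR operates on the raw bit patterns, drawing the crosshair in pure white (all bits set) inverts whatever is underneath. A minimal sketch of the crosshair pass (the geometry itself is a placeholder):

glEnable(GL_COLOR_LOGIC_OP);
glLogicOp(GL_XOR);
glDisable(GL_DEPTH_TEST);      // crosshair ignores scene depth
glColor3f(1.0f, 1.0f, 1.0f);   // all bits set: XOR inverts the destination
// draw the crosshair geometry here, e.g. two thin quads
glEnable(GL_DEPTH_TEST);
glDisable(GL_COLOR_LOGIC_OP);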
how about:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
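With this blend function the written color is src × (1 − dst), so drawing the mask in pure white outputs 1 − dst, i.e. the inverted framebuffer color. Unlike the XOR approach, this operates on color values rather than bit patterns.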
In my 2D map application, I have 16-bit heightmap textures containing altitudes in meters, each associated with a point on the map.
When I draw these textures on the screen, I would like to display an analysis such that the pixel referring to the highest altitude on the screen is white, the pixel referring to the lowest altitude on the screen is black, and the values in between are interpolated between those two.
I'm using an older OpenGL version and thus do not have access to modern pipeline functionality like GLSL or PBOs (which, as I've heard, can somehow make getting color buffer contents to the CPU side much more efficient than glReadPixels).
I have access to the ATI_fragment_shader extension, which makes it possible to use a basic fragment shader to merge the R and G channels in these textures and get a single float grayscale luminance value.
Then I could re-color these pixels inside the shader (mapping them to the 0-1 range) based on the maximum and minimum pixel luminance values, but I don't know what those are.
My question is: among the pixels currently on the screen, how do I find the ones with the maximum and minimum luminance values? Or, as an alternative, how do I find these values inside a texture? (I could make a glCopyTexImage2D call after drawing the grayscale luminance values to the screen and retrieve the data as a texture.)
Stuff I've tried or read about so far:
-If I could somehow get the current pixel RGB values in the color buffer to the CPU side, I could find what I need manually and then use it (see the sketch after this list). However, reading color buffer contents with glReadPixels is unacceptably slow. It's no use even if I set it up so that a single read operation is spread over multiple frames.
-Downsampling the texture to 1x1 size until the last remaining pixel holds either the minimum or the maximum value, and then using this 1x1 texture inside the shader. I have no idea how to achieve this without GLSL and texel fetch support, since I would have to look up the pixels to the right, above, and up-right of the current one and take the min/max among them.
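For what it's worth, the manual scan itself is trivial; a sketch assuming an 8-bit grayscale buffer pixels of width × height were somehow available on the CPU (all names are placeholders):

unsigned char minL = 255, maxL = 0;
for (int i = 0; i < width * height; ++i) {
    unsigned char l = pixels[i];   // grayscale luminance of one pixel
    if (l < minL) minL = l;
    if (l > maxL) maxL = l;
}
// minL and maxL could then be fed back to the shader to remap to the 0-1 range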
From my own trial-and-error experience, it seems that DirectX pixel shaders only run for pixels/fragments that are within the bounds of some geometric primitive rendered by DirectX, and are not run for pixels of the frame that are simply the clear-color.
MSDN says:
Pixel shaders work in concert with vertex shaders; the output of a vertex shader provides the inputs for a pixel shader.
This stands in contrast to WPF pixel shaders, which are run for every pixel of the frame, because WPF doesn't render 3D primitives and therefore doesn't know or care what it means to be a geometric primitive pixel or clear-color pixel.
So for the following image, a DirectX pixel shader would only be run for the area in white, because it corresponds to a geometric primitive output by the vertex shader, but not for the black area, because that's the clear-color. A WPF pixel shader, on the other hand, would be run for every pixel of the frame, both white and black.
Is this understanding correct?
Your understanding is mostly correct - pixel shader invocations are triggered by drawing primitives (e.g. triangles). In fact, a pixel in the window may end up getting more than one pixel shader invocation if, for example, a second triangle is drawn on top of the first. This is referred to as overdraw and is generally something to avoid, the most common method of avoidance being z-culling.
If you want to trigger a pixel shader for every pixel in the window, simply draw two triangles that make up a "full screen quad", i.e. coordinates (-1,-1) to (1,1). Behind the scenes, this is what WPF essentially does.
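For illustration, the clip-space geometry of such a quad can be as simple as this (a sketch; vertex buffer setup and shaders are omitted):

// Four clip-space corners covering the whole viewport, drawn as a triangle strip
float fullScreenQuad[4][2] = {
    { -1.0f, -1.0f },  // bottom-left
    {  1.0f, -1.0f },  // bottom-right
    { -1.0f,  1.0f },  // top-left
    {  1.0f,  1.0f },  // top-right
};
// Every pixel of the render target is now covered by a primitive,
// so the pixel shader runs at least once per pixel.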
I am trying to find the contours using
cvFindContours( gray, storage, &contour, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
and Canny, but I am not able to detect it.
Is there any way I can detect it? (All the processing is done in C.)
First, equalize the image for better contrast. This will amplify noise and other textures as well, but the object will become quite visible.
Next, compute the gradient; it will suppress most of the gradual color changes and noise, and some of the texture.
Now you can experiment with thresholding; it will give good edges and regions around both the shadow and the object.
To remove noise and texture, the whole image can be median-blurred with a large kernel and then thresholded to get a mask of the interesting region.
You can also try multiple iterations of the above. If high accuracy is not required, try blurring out the texture. A rough sketch of these steps follows.
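A minimal sketch of this pipeline in the old C API, reusing the gray image from the question (the kernel size and threshold are guesses you will have to tune):

IplImage* eq    = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);
IplImage* grad  = cvCreateImage(cvGetSize(gray), IPL_DEPTH_16S, 1);
IplImage* grad8 = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);
IplImage* mask  = cvCreateImage(cvGetSize(gray), IPL_DEPTH_8U, 1);

cvEqualizeHist(gray, eq);                           // boost contrast
cvSobel(eq, grad, 1, 0, 3);                         // gradient in x
cvConvertScaleAbs(grad, grad8, 1, 0);               // back to 8-bit
cvSmooth(grad8, mask, CV_MEDIAN, 7, 0, 0, 0);       // large-kernel median blur
cvThreshold(mask, mask, 50, 255, CV_THRESH_BINARY); // mask of the interesting region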
I need to move surfaces around the screen based on certain horizontal and vertical velocities, and I need those velocities to be completely random. My idea was to generate random float numbers (which I've managed to do) and use them as the velocities. This way I could have many different velocities, never being too fast or too slow. The problem is that SDL_BlitSurface will only accept an SDL_Rect as the parameter that determines where the surface will be drawn, and SDL_Rect is a struct made of four ints: two for coordinates and two for dimensions.
To sum up: how do I work with sub-pixel precision when blitting surfaces in SDL?
There is a solution to actually display a surface with half-pixel precision. There is a performance cost, but it renders quite nicely. This is basically how old-school anti-aliasing works: rendering at a higher resolution and then downscaling.
win = SDL_CreateWindow("", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, 0); //create a new rendering window
renderer = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED); //create a new renderer
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear"); // make the scaled rendering look smoother.
SDL_RenderSetLogicalSize(renderer, width*2, height*2); //tell the renderer to work internally with this size, it will be scaled back when displayed on your screen
You can find more explanation of these functions on the SDL wiki API pages for SDL_SetHint and SDL_RenderSetLogicalSize.
Now you can use your window as if it were twice as big, while it is still displayed at the same size. When you do your output, you pass the same SDL_Rect to the rendering functions, except everything is doubled. That way you keep half-pixel precision. You can get even cleaner output if your sprites are also authored at the increased resolution.
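For example, with float positions x and y and a sprite texture (names are placeholders), the doubled destination rectangle would look like this:

SDL_Rect dst;
dst.x = (int)(x * 2.0f);   // the doubling preserves .5 steps of precision
dst.y = (int)(y * 2.0f);
dst.w = sprite_w * 2;      // scale the sprite up to the logical resolution
dst.h = sprite_h * 2;
SDL_RenderCopy(renderer, sprite_texture, NULL, &dst);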
You could also do it with quarter-pixel precision, but the performance cost will be even bigger, so it might not be feasible in your application. There is also a memory cost, because the surfaces are bigger (four times for half-pixel precision, sixteen times for quarter-pixel).
SDL_BlitSurface is working with pixels, and unfortunately you cannot have "half pixels". You should still represent your objects' coordinates as float, but convert them to int when passing them to your SDL_Rect. Since your SDL_Surfaces will always land perfectly on screen pixels, your graphics should always remain crisp.
That being said, to have more precision, I guess you could switch to rendering quads in OpenGL. The hardware will be responsible for calculating the pixel color of your textures when they are not properly aligned with screen pixels, resulting in "fuzzy" textures, but at least you will have total control of their position.
It does not make sense to deal with sub-pixel precision when rendering, as pixels by definition are the smallest addressable element in raster graphics.
You could instead round the position to the nearest integer when rendering, e.g. (Sint16)(x+0.5). The rest of your application can still use coordinates with higher precision if you need it.
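For instance, a sketch assuming float positions x and y and existing sprite and screen surfaces:

SDL_Rect dst;
dst.x = (Sint16)(x + 0.5f);  // round to the nearest whole pixel
dst.y = (Sint16)(y + 0.5f);
SDL_BlitSurface(sprite, NULL, screen, &dst);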
Similar to calibrating a single camera 2D image with a chessboard, I wish to determine the width/height of the chessboard (or of a single square) in pixels.
I have a camera aimed vertically at the ground, ensured to be perfectly level with the surface below. I am using the camera to determine the translation between consecutive frames (successfully achieved using Fourier phase correlation). At the moment my result returns the translation in pixels, but I would like to use techniques similar to calibration, where I move the camera over the chessboard lying flat on the ground, to automatically determine the size of the chessboard in pixels, relative to my image height and width.
Knowing the size of the chessboard in millimetres, I can then convert a pixel unit to a real-world unit in millimetres, i.e., one pixel will represent a distance proportional to the height of the camera above the ground. This will allow me to convert a translation in pixels to a translation in millimetres, recalibrating every time I change the height of the camera.
What would be the recommended way of achieving this? Surely it must be simpler than single camera 2D calibration.
OpenCV can give you the position of the chessboard's corners with cv::findChessboardCorners().
I'm not sure if the perspective distortion will affect your calculations, but if the chessboard is perfectly aligned beneath the camera, it should work.
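A minimal sketch of that approach (the pattern size, image name, and known square size are placeholders):

cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
cv::Size patternSize(9, 6);            // inner corners per row and column
std::vector<cv::Point2f> corners;
if (cv::findChessboardCorners(img, patternSize, corners)) {
    // Pixel distance between two horizontally adjacent inner corners
    cv::Point2f d = corners[1] - corners[0];
    double squarePx = std::sqrt(d.x * d.x + d.y * d.y);
    double mmPerPixel = 25.0 / squarePx;  // 25.0 = example square size in mm
    // use mmPerPixel to convert your phase-correlation translation
}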
This is just an idea, so don't hit me.. but maybe you could use the natural contrast of the chessboard?
Scanning along a row of pixels, "at some point it will switch from bright to dark pixels, and that should happen (can't remember the number of columns on a chessboard) times" should be a doable algorithm.