How do we detect edges of an image with the same background? - C

I am trying to find the contours using
cvFindContours( gray, storage, &contour, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
and Canny, but I am not able to detect them.
Is there any way I can detect them? (All the processing is done in C.)

First, equalize the image for better contrast. This will amplify noise and other textures as well, but the object will be quite visible.
Now find the gradient; it evades most of the gradual color changes, the noise, and some of the texture.
Now you can experiment with thresholding; it will give a good edge and region around both the shadow and the object.
To remove noise and texture, the whole image can be median-blurred with a large kernel and then thresholded to get a mask of the interesting region.
You can also try multiple iterations of the above method. If lots of accuracy is not required, try blurring out the texture.
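
Here is a minimal sketch of that pipeline in the legacy C API, since all the processing is done in C; every numeric parameter below (Sobel aperture 3, threshold 40, median kernel 15) is an illustrative guess that needs tuning per image:

#include <opencv/cv.h>

/* Sketch: equalize -> gradient -> threshold -> median blur -> region mask. */
IplImage *object_region_mask(IplImage *gray)
{
    CvSize sz = cvGetSize(gray);
    IplImage *eq   = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    IplImage *gx   = cvCreateImage(sz, IPL_DEPTH_16S, 1);
    IplImage *gy   = cvCreateImage(sz, IPL_DEPTH_16S, 1);
    IplImage *grad = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    IplImage *tmp  = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    IplImage *mask = cvCreateImage(sz, IPL_DEPTH_8U, 1);

    cvEqualizeHist(gray, eq);                 /* 1. boost contrast         */
    cvSobel(eq, gx, 1, 0, 3);                 /* 2. gradient in x and y    */
    cvSobel(eq, gy, 0, 1, 3);
    cvConvertScaleAbs(gx, grad, 0.5, 0);
    cvConvertScaleAbs(gy, tmp, 0.5, 0);
    cvAdd(grad, tmp, grad, NULL);             /* |gx|/2 + |gy|/2           */

    cvThreshold(grad, tmp, 40, 255, CV_THRESH_BINARY);    /* 3. edges      */
    cvSmooth(tmp, mask, CV_MEDIAN, 15, 0, 0, 0);          /* 4. kill texture */
    cvThreshold(mask, mask, 127, 255, CV_THRESH_BINARY);  /* 5. region mask */

    cvReleaseImage(&eq);   cvReleaseImage(&gx);  cvReleaseImage(&gy);
    cvReleaseImage(&grad); cvReleaseImage(&tmp);
    return mask;   /* caller releases; feed this to cvFindContours */
}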

Related

CN1: Gradient with alpha channel

I would like to have a gradient which goes from black to transparent (not white). How can I achieve this?
From my attempt below, I assume the alpha value of the gradient style's color is not considered:
gui_Footer.allStyles.apply {
    backgroundType = Style.BACKGROUND_GRADIENT_LINEAR_VERTICAL
    border = RoundRectBorder.create().topOnlyMode(true).cornerRadius(1f)
    backgroundGradientEndColor = ColorUtil.BLACK
    backgroundGradientStartColor = ColorUtil.argb(0, 255, 255, 255)
}
Gradients in Codename One ignore the alpha byte. While we could technically add support for alpha gradients, it's not something that's planned at this time. You can probably generate such an image by manipulating the RGB data, but it would be more efficient to just generate an RGB image of a gradient and draw it scaled.
Notice that this is generally the most efficient approach, since the GPU works by drawing textures very efficiently. If an image is a power of 2 (e.g. 256x128 pixels) it can fit perfectly in a texture and is drawn very fast, much faster than our built-in gradients.
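
For the manipulate-the-pixel-data route, the fill is a single per-pixel loop. A minimal language-agnostic sketch in C (fill_alpha_gradient is an illustrative name; in Codename One the same loop would fill an int[] handed to something like Image.createImage(int[], width, height)), mirroring the snippet above, transparent at the top to opaque black at the bottom:

#include <stdint.h>

/* Fill a w*h packed-ARGB buffer with a vertical transparent-to-black
   gradient: alpha ramps 0 -> 255 down the image, RGB stays black. */
void fill_alpha_gradient(uint32_t *argb, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        uint32_t a = (h > 1) ? (255u * y / (h - 1)) : 255u;
        uint32_t px = a << 24;            /* 0xAA000000: black + alpha */
        for (int x = 0; x < w; ++x)
            argb[y * w + x] = px;
    }
}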

Frame by frame animation using OpenGL and SDL

I am working on a game project that features a large amount of assets. The character animations are very detailed, and that requires a lot of frames.
At first, I created large spritesheets containing all the animations for a specific character. It was working well on my PC, but when I tested it on an Android tablet, I noticed it exceeded the maximum texture dimension of its GPU. My solution was to break down the big spritesheet into individual frames (the worst case is 180 frames) and upload them individually to the GPU. Things now seem to be working everywhere I need them to work.
Right now, the largest animation I have been working with is a character with 180 frames of 407x725 pixels each. However, as I couldn't find any guidance on the web regarding how to properly render 2D animations using OpenGL, I would like to ask if there is a problem with this approach. Is there a maximum number of textures that can be uploaded to the GPU? Can I exceed the amount of RAM of the GPU?
The most efficient method for the GPU is to pass the entire sprite sheet to OpenGL as a single texture and select which frame you want by adjusting the texture coordinates when you draw. You should also pack the sprites into, ideally, a square texture. Reducing the overall amount of memory used by the GPU is very good for performance, especially on phones and tablets.
If possible, you want to avoid frequently changing which texture is bound. Ideally, you want to bind a single texture and then render bits and pieces of it to the screen until you don't need it anymore, then bind a different texture and continue.
The reason for this is that the GPU will try hard to optimize the operation of the pipeline it creates to handle the geometry you feed it and the shaders you select. But when you make big changes to the configuration, like changing which texture or which shader is bound, that's necessarily going to be somewhat opaque to optimization. Feeding it more vertices and texture coordinates at a time is better, because they can basically all get done in one batch without unloading and reloading resources.
However, depending on what cards you are targeting, you should keep in mind that there may be a maximum texture size of 8192 x 8192 or something like this. So depending on what assets you have, you may be forced to split them up across several textures.
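
A minimal sketch of the texture-coordinate selection, using fixed-function GL for brevity (atlas_frame_uv, draw_frame, and the cols x rows grid layout are illustrative assumptions, not from the question):

#include <GL/gl.h>

/* Pick frame i out of a cols x rows grid atlas via texture coordinates. */
typedef struct { float u0, v0, u1, v1; } UVRect;

static UVRect atlas_frame_uv(int i, int cols, int rows)
{
    UVRect r;
    int cx = i % cols, cy = i / cols;
    r.u0 = (float)cx / cols;        r.v0 = (float)cy / rows;
    r.u1 = (float)(cx + 1) / cols;  r.v1 = (float)(cy + 1) / rows;
    return r;
}

/* Bind the atlas once, then draw any number of frames from it. */
void draw_frame(GLuint atlas, int i, int cols, int rows,
                float x, float y, float w, float h)
{
    UVRect r = atlas_frame_uv(i, cols, rows);
    glBindTexture(GL_TEXTURE_2D, atlas);
    glBegin(GL_QUADS);
    glTexCoord2f(r.u0, r.v0); glVertex2f(x,     y);
    glTexCoord2f(r.u1, r.v0); glVertex2f(x + w, y);
    glTexCoord2f(r.u1, r.v1); glVertex2f(x + w, y + h);
    glTexCoord2f(r.u0, r.v1); glVertex2f(x,     y + h);
    glEnd();
}

The size limit mentioned above can be queried at runtime with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...) to decide whether an atlas has to be split across several textures.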

Efficient image translation by (x,y) pixels?

Looking to see if anyone can recommend a computationally efficient method for translating/shifting an image by (x,y) pixels.
The reason being, I have been partly successful in implementing the Fourier-Mellin transform to determine the rotation and translation between image frames. Once the image is unrotated, I would like to untranslate it by the calculated pixel offset (x,y), allowing me to test the image correlation after rotation and translation.
I would think that an efficient method would be to:
Make a border with cv::copyMakeBorder().
Use an ROI, e.g. make a new matrix header without copying data (see the sketch below).
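
A minimal sketch of those two steps, in the legacy C API for consistency with the rest of this page (cvCopyMakeBorder and cvSetImageROI are the C counterparts of cv::copyMakeBorder and the no-copy matrix header; translate_image is an illustrative name):

#include <stdlib.h>
#include <opencv/cv.h>

/* Shift an image by (dx, dy) pixels: pad with a border, then read back
   a window offset in the opposite direction. */
IplImage *translate_image(IplImage *src, int dx, int dy)
{
    int ax = abs(dx), ay = abs(dy);
    IplImage *padded = cvCreateImage(
        cvSize(src->width + 2 * ax, src->height + 2 * ay),
        src->depth, src->nChannels);
    IplImage *dst = cvCreateImage(cvGetSize(src), src->depth, src->nChannels);

    /* Pad all sides so the shifted window always stays in bounds. */
    cvCopyMakeBorder(src, padded, cvPoint(ax, ay),
                     IPL_BORDER_CONSTANT, cvScalarAll(0));

    /* Moving content by (+dx, +dy) == reading the window at (-dx, -dy). */
    cvSetImageROI(padded, cvRect(ax - dx, ay - dy, src->width, src->height));
    cvCopy(padded, dst, NULL);   /* setting the ROI costs nothing; only
                                    this final copy touches pixel data  */

    cvReleaseImage(&padded);
    return dst;   /* caller releases */
}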
Good luck

Generate texture in Silverlight to imitate leather

I would like to display textures in different colors, pretty much like this texture.
How do I do this in Silverlight?
Thanks!
(Image: http://a.imageshack.us/img535/5255/leathertexture.png)
Turn your texture into an alpha texture. The exact steps will depend on your image manipulation software. After that, simply place your texture on top of a colored rectangle.
You could write a pixel shader for an even better result, but that would be overkill in your case.
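
If you would rather script the conversion than do it in an editor, it is a single pass over the pixels. A sketch in C on packed ARGB data (the darker-means-more-opaque mapping and the function name are illustrative choices, not the only option):

#include <stdint.h>

/* Derive per-pixel alpha from the grayscale leather shading, so the
   texture darkens whatever color sits underneath it. */
void to_alpha_texture(const uint32_t *src, uint32_t *dst, int n)
{
    for (int i = 0; i < n; ++i) {
        uint32_t px = src[i];
        uint32_t r = (px >> 16) & 0xFF, g = (px >> 8) & 0xFF, b = px & 0xFF;
        uint32_t lum = (r * 77 + g * 151 + b * 28) >> 8;  /* ~luminance */
        dst[i] = (255u - lum) << 24;   /* black, more opaque where darker */
    }
}

Drawn over a colored rectangle, the converted texture preserves the leather shading while the rectangle supplies the hue, so one grayscale asset serves every color.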

opengl invert framebuffer pixels

I was wondering what the best way to invert the color pixels in the framebuffer is. I know it's possible with glReadPixels() and glDrawPixels(), but the performance hit of those calls is pretty big.
Basically, what I'm trying to do is have an inverted color cross-hair which is always visible no matter what's behind it. For instance, I'd have an arbitrary alpha mask bitmap or texture, have it render without depth test after the scene is drawn, and all the frame buffer pixels under the masked (full alpha) pixels of the texture would be inverted.
I've been trying to do this with a texture, but I'm getting some strange results, and I still find all the blending options confusing.
Give something like this a try:
glEnable(GL_COLOR_LOGIC_OP);
glLogicOp(GL_XOR);
// render geometry
glDisable(GL_COLOR_LOGIC_OP);
How about:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);  // draw geometry in white: result = 1 - dst
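
Combining the XOR answer with the cross-hair use case, a minimal fixed-function sketch (draw_inverting_crosshair and its parameters are illustrative; this also assumes a typical fixed-point RGBA framebuffer, since logic ops don't apply to floating-point buffers):

#include <GL/gl.h>

/* Draw a screen-space crosshair whose pixels invert whatever is
   underneath. Drawing in white makes the XOR a full inversion
   (0xFF ^ dst == ~dst). Assumes an ortho projection is set up. */
void draw_inverting_crosshair(float cx, float cy, float len)
{
    glDisable(GL_DEPTH_TEST);        /* always on top; re-enable after */
    glEnable(GL_COLOR_LOGIC_OP);
    glLogicOp(GL_XOR);
    glColor3f(1.0f, 1.0f, 1.0f);
    glBegin(GL_LINES);
    glVertex2f(cx - len, cy); glVertex2f(cx + len, cy);
    glVertex2f(cx, cy - len); glVertex2f(cx, cy + len);
    glEnd();
    glDisable(GL_COLOR_LOGIC_OP);
}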
