Dividing an image into irregular regions in C

I have an 800×600 face image. I want to divide it into different non-rectangular regions, like one region for the left eye, one for the right eye, and so on.
I basically want to write code that answers: given an (x, y) coordinate, which region does it lie in?
How can I do this?

You can use OpenCV. It has simple functions for eye detection, and it's free.
Read this material:
http://opencv-code.com/tutorials/eye-detection-and-tracking/
http://www.codeproject.com/Articles/23191/Face-and-Eyes-Detection-Using-OpenCV
Read this pdf
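Once you have outlines for the regions (from detectors like the ones above, or marked by hand), the "which region does (x, y) lie in" query itself is simple. Below is a minimal C sketch, assuming each region is stored as a polygon of vertices; the Region and find_region names are illustrative, and the test is the standard ray-casting point-in-polygon algorithm.

#include <stddef.h>

typedef struct { double x, y; } Point;

typedef struct {
    const char  *name;   /* e.g. "left_eye" */
    const Point *verts;  /* polygon outline of the region */
    int          nverts;
} Region;

/* Ray-casting test: count how many polygon edges a horizontal ray from p
 * crosses; an odd count means p is inside the polygon. */
static int point_in_polygon(Point p, const Point *v, int n)
{
    int i, j, inside = 0;
    for (i = 0, j = n - 1; i < n; j = i++) {
        if (((v[i].y > p.y) != (v[j].y > p.y)) &&
            (p.x < (v[j].x - v[i].x) * (p.y - v[i].y) / (v[j].y - v[i].y) + v[i].x))
            inside = !inside;
    }
    return inside;
}

/* Returns the first region containing (x, y), or NULL if none does. */
static const Region *find_region(const Region *regions, int nregions,
                                 double x, double y)
{
    Point p = { x, y };
    for (int i = 0; i < nregions; i++)
        if (point_in_polygon(p, regions[i].verts, regions[i].nverts))
            return &regions[i];
    return NULL;
}

You would fill the Region array once (from detector output or manual annotation of the 800×600 image) and then answer each lookup with find_region.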

Related

SDL Relative Position

I have a theoretical question about SDL surface positioning.
If I want to display surface_A on my screen, I'll create a position rectangle with SDL_Rect cursor; and pass it to SDL_BlitSurface().
The cursor will contain a position relative to the top-left corner of my window.
But if I want to display surface_B inside surface_A, do I have to give a cursor relative to the top-left corner of my window, or to the top-left corner of surface_A?
You may be making some wrong assumptions about the relative positions of your cursors. There is a very good and detailed set of tutorials at the linked location that may clear things up for you.
From HERE...
Using the first tutorial as our base, we'll delve more into the world
of SDL surfaces. As I attempted to explain in the last lesson, SDL
Surfaces are basically images stored in memory. Imagine we have a
blank 320x240 pixel surface. Illustrating the SDL coordinate system,
we have something like this:
This coordinate system is quite different than the normal one you are
familiar with. Notice how the Y coordinate increases going down, and
the X coordinate increases going right. Understanding the SDL
coordinate system is important in order to properly draw images on the
screen.
Some additional terms that may help clarify:
SDL Window : You can think of this as physical pixels, or your monitor.
SDL Renderer : Controls the properties/settings of what is created in that window.
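To answer the concrete question: the destination rectangle you pass to SDL_BlitSurface() is always interpreted relative to the top-left corner of the destination surface, not the window. A minimal C sketch of the two-step composition (the surface names and offsets are placeholders):

#include <SDL.h>

/* screen, surface_A and surface_B are assumed to be valid SDL_Surface*
 * pointers created elsewhere. */
void compose(SDL_Surface *screen, SDL_Surface *surface_A, SDL_Surface *surface_B)
{
    /* Position of B inside A: relative to A's top-left corner. */
    SDL_Rect b_in_a = { 10, 20, 0, 0 };          /* w/h are ignored on input */
    SDL_BlitSurface(surface_B, NULL, surface_A, &b_in_a);

    /* Position of A on the screen: relative to the window's top-left corner. */
    SDL_Rect a_on_screen = { 100, 50, 0, 0 };
    SDL_BlitSurface(surface_A, NULL, screen, &a_on_screen);
}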

Simple Flat Plane Tessellation Shader

Part 1:
So I want to create a basic tessellation program that takes a plane of quads and transforms it into a more detailed, tessellated plane of quads, such as in the picture below. How much it gets tessellated would depend on user controls, passed in by a uniform (initially). However, I am so new to tessellation shaders that I can't even figure out how to do this.
How is this typically done? Surely you shouldn't actually draw the plane of quads prior to the shader program, since from my understanding quads won't get tessellated this way; instead they get tessellated in a way like the picture below:
I believe the answer could be to draw a plane of points, and these points are then tessellated into more points, and these points are transformed into quads of the appropriate size in the geometry shader, I think? Alternatively, instead of converting points into quads, could I just draw quads between each four closest points (that would be much better)? Examples very much appreciated!
NOTE: Using GLSL > 4.0 & C only (No C++/Python)
Part 2:
After I get part 1 working, how would I make it so that certain quads are more tessellated than others, such as this?:
I want the parts closer to the camera to be more tessellated.
Part 3:
If I were able to get that far, the next part would be to alter the z-axis of points to make the plane into an interesting environment. This would be done by reading in a sampler2D; I know how to do that and all. However, if I am correct in Part 1 about using a plane of points, then I need to do more than just alter the points that are converted into quads, because the quads need to share vertices in order for there to be no gaps between them. How would that be done? Alternatively, if we draw quads between points, with each point being the appropriate height, then this wouldn't be an issue.
Part 1
Yes, you're correct: generate a 'patch' as a simple grid of points, specify the tessellation levels as uniforms into the TCS (tessellation control shader), and generate the vertex data in the TES (tessellation evaluation shader).
Sounds complicated? Here's a nice tutorial I based my work on: http://antongerdelan.net/opengl/tessellation.html
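On the C side, issuing the patch data is short. A rough sketch, under the assumption that each grid cell is one patch of four corner points and that the TCS reads a single hypothetical tessLevel uniform; the shader program, VAO and vertex count are assumed to be created elsewhere:

#include <GL/glew.h>   /* or any loader exposing OpenGL 4.x */

void draw_tessellated_grid(GLuint program, GLuint gridVao,
                           GLsizei patchVertexCount, float tessLevel)
{
    glUseProgram(program);

    /* Each patch is one quad cell of the grid (its four corner points);
     * the TES interpolates across it using gl_TessCoord. */
    glPatchParameteri(GL_PATCH_VERTICES, 4);

    /* User-controlled subdivision factor read by the TCS. */
    glUniform1f(glGetUniformLocation(program, "tessLevel"), tessLevel);

    glBindVertexArray(gridVao);
    glDrawArrays(GL_PATCHES, 0, patchVertexCount);
}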
Part 2
What you are talking about here is LOD (level of detail). You would need to tessellate and render the higher-polygon-count bottom-left corner of your mesh as a separate object.
Your suggested approach is correct: break the overall scene into 'chunks' and determine the LOD (i.e. the tessellation parameters) for each chunk separately, usually by some distance-to-camera algorithm.
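As a sketch of that per-chunk calculation (all names are illustrative), assuming each chunk has a known center point:

#include <math.h>

/* Pick a tessellation level for one chunk from its distance to the camera:
 * maxLevel right at the camera, falling off inversely with distance and
 * never dropping below 1. */
float tess_level_for_chunk(const float chunkCenter[3], const float cameraPos[3],
                           float maxLevel)
{
    float dx = chunkCenter[0] - cameraPos[0];
    float dy = chunkCenter[1] - cameraPos[1];
    float dz = chunkCenter[2] - cameraPos[2];
    float dist = sqrtf(dx * dx + dy * dy + dz * dz);

    float level = maxLevel / (1.0f + dist);
    return level < 1.0f ? 1.0f : level;
}

The result would then be fed to the draw call for that chunk, for example as the tessLevel uniform from the Part 1 sketch.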
Part 3
Another excellent tutorial which does exactly what you are after I believe: http://codeflow.org/entries/2010/nov/07/opengl-4-tessellation/
I used this approach to get a very highly detailed but memory- and frame-rate-efficient terrain.
Hope this helps.

OpenGL: How to drag image and move it to the line by using the mouse

I want to drag an image to a line by using the mouse, and when the image is close to the line, the image should automatically move onto the line, like in some "floor planner" programs: you create a wall, drag the door to this wall, and when the door is close to the wall, the door automatically shows up on the wall.
Can OpenGL do this?
If it can, can anyone tell me how? If it cannot, can anyone tell me how I can do it?
Please show me an example.
OpenGL is a rendering API; its purpose is to generate rasterized images based on descriptions provided to it by an application.
It knows nothing about user input, and even less about the application's "domain objects" such as doors, walls, and so on. All it deals with is abstract coordinates and matrices that describe the transforms and projections to take those 3D coordinates into 2D for rasterization, as well as shading for surfaces and so on.
So, it's up to you to implement that, so that the coordinates you eventually pass to OpenGL end up being what you want them to be.
Snapping is typically a combination of measuring the distance to some guiding object and then quantizing the input coordinates to correspond to the guide.
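As an illustration of that idea, here is a minimal C sketch of snapping a dragged point (the door) onto a line segment (the wall) once it comes within a threshold; all names are hypothetical:

#include <math.h>

typedef struct { float x, y; } Vec2;

/* If p is within `threshold` of the segment a-b, replace it with its
 * projection onto the segment (the "snap"); otherwise leave it alone.
 * Returns 1 if a snap happened. */
int snap_to_segment(Vec2 *p, Vec2 a, Vec2 b, float threshold)
{
    float abx = b.x - a.x, aby = b.y - a.y;
    float len2 = abx * abx + aby * aby;
    if (len2 == 0.0f)
        return 0;                              /* degenerate segment */

    /* Parameter of the closest point on the segment, clamped to [0, 1]. */
    float t = ((p->x - a.x) * abx + (p->y - a.y) * aby) / len2;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;

    Vec2 q = { a.x + t * abx, a.y + t * aby };  /* closest point on the wall */
    float dx = p->x - q.x, dy = p->y - q.y;

    if (dx * dx + dy * dy <= threshold * threshold) {
        *p = q;                                 /* snap the door onto the wall */
        return 1;
    }
    return 0;
}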

Blob detection in C (not with OPENCV)

I am trying to write my own blob detection that will receive real-time video and try to detect a white sheet of paper, even if there is something written on the paper.
I need to detect the paper and its corners, because what I really want is to draw an OpenGL polygon over the paper, with each corner of the paper being a corner of the polygon. Then I need the coordinates of the paper to do other stuff.
So I need to:
- detect a square white blob,
- get the coordinates of the corners,
- draw a polygon over the white sheet.
Any ideas how I can do that?
Much depends on context. For example, suppose that you:
know that the paper is always roughly centered (i.e. W/2, H/2 is always inside the blob), and is rotated no more than 45 degrees (30 would be better)
have a suitable border around the sheet so that the corners never touch the edges of the FOV
are able (through analysis of local variance, or if you're lucky, check of background color or luminance) to say whether a point is inside or outside the blob
the inside/outside function never fails (except possibly in the close vicinity of a border)
then you could walk a line from a point on the border (surely outside) to the center (surely inside), even by bisection, and find a point - an areal - on the edge.
Two edge points give a rect (two areals give a beam), two rects give an intersection (two beams give a larger areal) - and there's your corner. You should carry along the detection uncertainty (areal radius) in order to validate corners (another less elegant approach is to roughly calculate where the corner is, and pinpoint it with a spiral search or drunkard's walk).
This algorithm is amenable to parallelization and, as long as the hypotheses hold, should be really fast.
All that said, it remains a hack -- I agree with unwind, why reinvent the wheel? If you have memory or CPU constraints (embedded systems, etc.), I believe there ought to be OpenCV and e-Vision "lite" ports also for ARM and embedded platforms.
(Sorry for my terminology - I'm monkey-translating from Italian. "Areal" likely corresponds to your "blob"; a beam is the family of lines joining all pairs of points in two different blobs, line intensity being the product of each point's distance from its areal's center.)
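A minimal C sketch of the bisection step described above, assuming you can supply the inside/outside predicate; the names are illustrative:

typedef struct { double x, y; } Pt;

/* User-supplied test: nonzero if the pixel at (x, y) is inside the blob. */
extern int inside(double x, double y);

/* Bisect between an outside point and an inside point until the interval
 * is smaller than `tol`; the midpoint is then an edge point (within tol). */
Pt find_edge_point(Pt outside_pt, Pt inside_pt, double tol)
{
    for (;;) {
        Pt mid = { (outside_pt.x + inside_pt.x) / 2.0,
                   (outside_pt.y + inside_pt.y) / 2.0 };
        double dx = inside_pt.x - outside_pt.x;
        double dy = inside_pt.y - outside_pt.y;
        if (dx * dx + dy * dy <= tol * tol)
            return mid;
        if (inside(mid.x, mid.y))
            inside_pt = mid;      /* keep shrinking toward the border */
        else
            outside_pt = mid;
    }
}

Repeating this along several border-to-center lines gives the edge points from which the corners are then intersected, as described.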
I am trying to write my own blob detection that will receive real-time video and try to detect a white sheet of paper.
Your first shot could be a simple flood-fill. That is, select a good threshold to binarize the image and apply the algorithm. The threshold can be fixed if you know the paper is always brighter than X and the background is always darker than this. Or this can be an adaptive threshold, for example Otsu's method. OpenCV offers this for free.
If you'd need to speed it up you could use a union-find data structure.
Finally, you'd need to come up with some heuristic for how to identify the corners (e.g. the four extreme values in the x/y directions).
Then I need [...] the coordinates of the corners [...]
Then you don't need blob detection, but corner detection or contour detection in the first place. OpenCV has some nice functionality for exactly this.
If you can't use it, I would suggest binarizing the image as above and using a Harris detector to find the corners of the object.
OpenCV's TBB support could also come in quite handy if you use it and have problems meeting your real-time requirements.
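As a rough C sketch of the extreme-value corner heuristic, assuming the frame has already been binarized (paper pixels = 255) and stored row-major; one common variant takes the extremes of x+y and x-y, which pick out the four corners of a roughly axis-aligned sheet. All names are illustrative:

#include <stdint.h>

typedef struct { int x, y; } Corner;

/* img: width*height bytes, 255 = paper, 0 = background.
 * corners[0..3]: top-left, bottom-right, top-right, bottom-left.
 * The result is undefined if the image contains no paper pixels. */
void find_corners(const uint8_t *img, int width, int height, Corner corners[4])
{
    int best[4] = { 0, 0, 0, 0 };
    int init = 0;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (img[y * width + x] != 255)
                continue;
            int sum = x + y, diff = x - y;
            if (!init || sum  < best[0]) { best[0] = sum;  corners[0].x = x; corners[0].y = y; }
            if (!init || sum  > best[1]) { best[1] = sum;  corners[1].x = x; corners[1].y = y; }
            if (!init || diff > best[2]) { best[2] = diff; corners[2].x = x; corners[2].y = y; }
            if (!init || diff < best[3]) { best[3] = diff; corners[3].x = x; corners[3].y = y; }
            init = 1;
        }
    }
}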

Mixing layers in OpenCV

I need to make a program where I have to detect the edge of a sub-image (like a face in a portrait) using the Canny detector. Then I need to filter that portion out and paste it onto another background. It is like mixing two layers. Can anybody give me an algorithm for this, or any idea about the process?
You are probably aware that the task of selecting a sub-image is most commonly known as selecting a Region of Interest (ROI).
Edge detection with canny shouldn't be a problem since OpenCV implements it as cvCanny().
From what I understand, you want to overlap two images. I suppose you want to add one image on top of the other? Take a look at step 2 of the first link I suggest: Adding Two Images with Different Size
If you want to BLEND them, then check these instructions. I have used them before to draw over the webcam window.
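A minimal sketch with the old OpenCV C API (matching the cvCanny() mentioned above, OpenCV 2.x). The file names and the hard-coded ROI rectangle are placeholders; in practice the ROI would come from your Canny/contour step rather than being fixed:

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(void)
{
    IplImage *portrait   = cvLoadImage("portrait.jpg",   CV_LOAD_IMAGE_COLOR);
    IplImage *background = cvLoadImage("background.jpg", CV_LOAD_IMAGE_COLOR);

    /* Edge detection on a grayscale copy (the Canny step). */
    IplImage *gray  = cvCreateImage(cvGetSize(portrait), IPL_DEPTH_8U, 1);
    IplImage *edges = cvCreateImage(cvGetSize(portrait), IPL_DEPTH_8U, 1);
    cvCvtColor(portrait, gray, CV_BGR2GRAY);
    cvCanny(gray, edges, 50, 150, 3);

    /* Placeholder ROI: the sub-image (e.g. the face) located from the edges. */
    CvRect roi = cvRect(100, 80, 200, 240);

    /* Paste the ROI of the portrait into the same place in the background. */
    cvSetImageROI(portrait, roi);
    cvSetImageROI(background, roi);
    cvCopy(portrait, background, NULL);        /* or cvAddWeighted() to blend */
    cvResetImageROI(portrait);
    cvResetImageROI(background);

    cvSaveImage("composited.jpg", background, 0);
    return 0;
}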
