SDL Relative Position (C)

I have a theoretical question about SDL surfaces.
If I want to display surface_A on my screen, I'll create a position rectangle with SDL_Rect cursor; and pass it to SDL_BlitSurface().
The cursor will contain a position relative to the top-left corner of my window.
But if I want to display surface_B inside surface_A, do I have to give a cursor relative to the top-left corner of my window, or to the top-left corner of surface_A?

You may be making some wrong assumptions about the relative positions of your cursors. There is a very good and detailed set of tutorials at the linked location that may clear things up for you...
From HERE...
Using the first tutorial as our base, we'll delve more into the world
of SDL surfaces. As I attempted to explain in the last lesson, SDL
Surfaces are basically images stored in memory. Imagine we have a
blank 320x240 pixel surface. Illustrating the SDL coordinate system,
we have something like this: [diagram omitted: a 320x240 surface with the origin (0,0) at the top-left corner]
This coordinate system is quite different from the normal one you are
familiar with. Notice how the Y coordinate increases going down, and
the X coordinate increases going right. Understanding the SDL
coordinate system is important in order to properly draw images on the
screen.
Some additional terms that may help clarify:
SDL Window: you can think of this as the physical pixels, i.e. your monitor.
SDL Renderer: controls the properties/settings of what is created in that window.
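To make the two coordinate spaces concrete, here is a minimal sketch, assuming the classic SDL 1.2 blitting API (the surface names match the question): the destination rectangle passed to SDL_BlitSurface() is always interpreted relative to the top-left corner of the destination surface, not the window.
#include <SDL/SDL.h>

/* Sketch: SDL_BlitSurface() interprets the destination rect relative to
   the destination surface's own top-left corner. */
void compose(SDL_Surface *screen, SDL_Surface *surface_A, SDL_Surface *surface_B)
{
    /* surface_B lands 10 px right and 20 px down from surface_A's corner. */
    SDL_Rect inner = { 10, 20, 0, 0 };   /* w and h are ignored on blit */
    SDL_BlitSurface(surface_B, NULL, surface_A, &inner);

    /* surface_A lands relative to the window's top-left corner. */
    SDL_Rect outer = { 100, 50, 0, 0 };
    SDL_BlitSurface(surface_A, NULL, screen, &outer);
}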

Related

Dividing an image into Irregular Regions

I have an 800x600 face image. I want to divide it into different non-rectangular regions, like one region for the left eye, one for the right eye, and so on.
Basically, I want to write code that answers: "given an (x, y) coordinate, which region does it lie in?"
How can I do this?
You can use OpenCV. It has simple functions for eye detection, and it's free.
Read this material:
http://opencv-code.com/tutorials/eye-detection-and-tracking/
http://www.codeproject.com/Articles/23191/Face-and-Eyes-Detection-Using-OpenCV
Read this pdf
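Once each region is outlined as a polygon (for example, around the rectangles the eye detector returns), the "which region does (x, y) lie in" part is just a point-in-polygon test. Here is a minimal C sketch of the classic ray-casting (even-odd) method; the function name is illustrative:
/* Returns 1 if (x, y) lies inside the polygon given by the nvert
   vertices (vx[i], vy[i]), 0 otherwise.  Classic even-odd rule:
   count how many polygon edges a horizontal ray from the point crosses. */
int point_in_polygon(int nvert, const float *vx, const float *vy,
                     float x, float y)
{
    int i, j, inside = 0;
    for (i = 0, j = nvert - 1; i < nvert; j = i++) {
        if (((vy[i] > y) != (vy[j] > y)) &&
            (x < (vx[j] - vx[i]) * (y - vy[i]) / (vy[j] - vy[i]) + vx[i]))
            inside = !inside;
    }
    return inside;
}
Run this test against each region's polygon in turn and return the first region that contains the point.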

How to display the tiny triangles or recognize them quickly?

What I am doing is a pick program. There are many triangles, and I want to select the front, visible ones with a rectangular region. The main method is described below:
1. There are a lot of triangles, and each triangle has its own color.
2. Draw all the triangles to a frame buffer.
3. Read back the color of each pixel in the frame buffer; based on the color, we know which triangles are selected.
The problem is that some tiny triangles cannot be seen in the final frame buffer, just like the green triangle in the picture. I think those triangles are too tiny and get ignored by the graphics card.
My question is: how can I make the tiny triangles show up in the final frame buffer? Or how can I know which triangles were ignored by the graphics card?
Triangles are not skipped based on their size; rather, if no pixel center falls inside a triangle or lies on its top or left edge (this is referred to as coverage testing), the triangle does not generate any fragments during rasterization.
That does mean that certain really small triangles are never rasterized, but it is not strictly because of their size; it is that their position is such that they do not satisfy pixel coverage.
Take a moment to examine the following diagram from the DirectX API documentation. Because of the size and position of the triangle I have circled in red, it does not satisfy coverage for any pixel (I have illustrated the left edge of the triangle in green) and thus never shows up on screen despite having a tangible surface area.
If the triangle highlighted were moved about a half-pixel in any direction it would cover at least one pixel. You still would not know it was a triangle, because it would show up as a single pixel, but it would at least be pickable.
Solving this problem will require you to ditch color picking altogether. Multisample rasterization can fix the coverage issue for small triangles, but it will compute pixel colors as the average of all samples and that will break color picking.
Your only viable solution is to do point-in-triangle testing instead of relying on rasterization. In fact, the typical alternative to color picking is to cast a ray from your eye position through the far clipping plane and test it for intersection against all objects in the scene.
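As a sketch of that ray-testing approach, here is the well-known Möller-Trumbore ray/triangle intersection test in C (the Vec3 type and helper functions are illustrative, not from any particular library); you would build the ray from the eye through the clicked pixel, e.g. with gluUnProject, and run this test against every triangle:
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  vsub(Vec3 a, Vec3 b)  { return (Vec3){ a.x-b.x, a.y-b.y, a.z-b.z }; }
static float vdot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  vcross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

/* Returns 1 and writes the distance along the ray to *t if the ray
   (orig, dir) hits the triangle (v0, v1, v2), 0 otherwise. */
int ray_hits_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float *t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = vsub(v1, v0), e2 = vsub(v2, v0);
    Vec3 p = vcross(dir, e2);
    float det = vdot(e1, p);
    if (fabsf(det) < EPS) return 0;        /* ray parallel to triangle */
    float inv = 1.0f / det;
    Vec3 s = vsub(orig, v0);
    float u = vdot(s, p) * inv;            /* first barycentric coordinate */
    if (u < 0.0f || u > 1.0f) return 0;
    Vec3 q = vcross(s, e1);
    float v = vdot(dir, q) * inv;          /* second barycentric coordinate */
    if (v < 0.0f || u + v > 1.0f) return 0;
    *t = vdot(e2, q) * inv;
    return *t >= 0.0f;
}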
The usability aspect of what you seem to be doing seems somewhat questionable to me. I doubt that most users would expect a triangle to be pickable if it's so small that they can't even see it. The most obvious solution is that you let the user zoom in if they really need to selectively pick such small details.
On the part that can actually be answered on a technical level: To find out if triangles produced any visible pixels/fragments/samples, you can use queries. If you want to count the pixels for n "objects" (which can be triangles), you would first generate the necessary query object names:
GLuint queryIds[n]; // probably dynamically allocated in real code
glGenQueries(n, queryIds);
Then bracket the rendering of each object with glBeginQuery()/glEndQuery():
for (int i = 0; i < n; ++i) {
    glBeginQuery(GL_SAMPLES_PASSED, queryIds[i]);
    // draw object i here
    glEndQuery(GL_SAMPLES_PASSED);
}
Then at the end, you can get all the results:
for (int i = 0; i < n; ++i) {
    GLint pixelCount = 0;
    // blocks until the result for query i is available
    glGetQueryObjectiv(queryIds[i], GL_QUERY_RESULT, &pixelCount);
    if (pixelCount > 0) {
        // object i produced visible pixels
    }
}
A couple more points to be aware of:
If you only want to know if any pixels were rendered, but don't care how many, you can use GL_ANY_SAMPLES_PASSED instead of GL_SAMPLES_PASSED.
The query counts samples that pass the depth test as the rendering happens, so there is an order dependency. A triangle could have visible samples when it is rendered, but they could later be hidden by another triangle that is drawn in front of it. If you only want to count the pixels that are actually visible at the end of the rendering, you'll need a two-pass approach, as sketched below.
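A sketch of that two-pass idea, reusing queryIds and n from above: fill the depth buffer with the whole scene first, then re-draw each object inside a query with depth writes disabled, so only samples that survive to the final image are counted.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
// pass 1: draw the entire scene normally to fill the depth buffer

glDepthMask(GL_FALSE);     // keep the final depth buffer intact
glDepthFunc(GL_LEQUAL);    // fragments at exactly the stored depth still pass
for (int i = 0; i < n; ++i) {
    glBeginQuery(GL_SAMPLES_PASSED, queryIds[i]);
    // pass 2: draw object i again
    glEndQuery(GL_SAMPLES_PASSED);
}
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);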

OpenGL: How to drag image and move it to the line by using the mouse

I want to drag an image to a line using the mouse, and when the image is close to the line, the image should automatically snap onto the line, like in a "floor planner" program: you create a wall and drag a door toward it, and when the door is close to the wall, the door automatically shows up on the wall.
Can OpenGL do it? If it can, can anyone tell me how? If it cannot, can anyone tell me how else I could do it?
Please show me an example.
OpenGL is a rendering API; its purpose is to generate rasterized images based on descriptions provided to it by an application.
It knows nothing about user input, and even less about the application's "domain objects" such as doors, walls, and so on. All it deals with is abstract coordinates and matrices that describe the transforms and projections to take those 3D coordinates into 2D for rasterization, as well as shading for surfaces and so on.
So it's up to you to implement the snapping yourself, so that the coordinates you eventually pass to OpenGL end up being what you want them to be.
Snapping is typically a combination of measuring the distance to some guiding object and, when that distance is small enough, quantizing the input coordinates to correspond to the guide.
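As a sketch of that idea in C (all names and the threshold here are illustrative): measure the distance from the dragged point to the wall segment, and if it is within a snap threshold, replace the point with its projection onto the segment.
#include <math.h>

/* Snap the dragged point (*px, *py) onto the segment (ax,ay)-(bx,by)
   if it comes within `threshold` units of it. */
void snap_to_segment(float ax, float ay, float bx, float by,
                     float threshold, float *px, float *py)
{
    float dx = bx - ax, dy = by - ay;
    float len2 = dx * dx + dy * dy;
    if (len2 == 0.0f) return;                     /* degenerate segment */

    /* Parameter of the closest point on the segment, clamped to [0, 1]. */
    float t = ((*px - ax) * dx + (*py - ay) * dy) / len2;
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);

    float cx = ax + t * dx, cy = ay + t * dy;     /* closest point */
    float ex = *px - cx, ey = *py - cy;
    if (ex * ex + ey * ey <= threshold * threshold) {
        *px = cx;                                 /* close enough: snap */
        *py = cy;
    }
}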

Zoom far in on an image with Xlib

I have an XImage which I want to zoom in on and display. I'm currently taking the naive approach:
1. Allocate a bigger image.
2. Use nearest-neighbor interpolation to fill it in.
3. Put the whole image on a pixmap.
This works, but slowly, and it crawls once I approach higher zoom levels, like 800%. GIMP, however, can zoom in to 3200% and still feel snappy. What's the approach taken there? Should I only fill one screen at a time? But then what about scrolling: wouldn't performing an interpolation, an XPutImage, and an XCopyArea on each expose kill performance?
I'm no expert in Xlib, but in my opinion a good approach would be to draw only the visible zoomed part, instead of computing the interpolation of the entire image.
For scrolling, if you are looking for performances, you may copy the part of the old zoom which is still visible in the new position, and compute the interpolation of the "discovered" pixels. For example, when scrolling down, you may copy the bottom of the previous image and paste it higher, and then compute/draw the new visible stuff at the bottom.
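A sketch of that scrolling idea with Xlib (the helper at the end is hypothetical): XCopyArea() reuses the pixels already on the window, so only the newly exposed strip needs interpolation and an XPutImage.
#include <X11/Xlib.h>

/* When the view scrolls down by dy pixels, shift the still-visible part
   of the old frame up and re-render only the strip exposed at the bottom. */
void scroll_down(Display *dpy, Window win, GC gc,
                 int view_w, int view_h, int dy)
{
    /* Reuse what is already on screen: copy rows dy..view_h-1 up to row 0. */
    XCopyArea(dpy, win, win, gc, 0, dy, view_w, view_h - dy, 0, 0);

    /* Hypothetical helper: nearest-neighbor fill plus XPutImage for just
       the dy-pixel strip at the bottom of the view. */
    /* render_zoomed_strip(dpy, win, gc, 0, view_h - dy, view_w, dy); */
}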
Most modern X11 applications don't use Xlib directly much, if at all. My guess would be that Gimp is rendering the zoomed image into a buffer itself and drawing that to the window, rather than working with the image in an XImage.

Converting mouse position to world position OpenGL

Hey, I'm working on a map editor for my game, and I'm trying to convert the mouse position to a position in the game world. The view is set up using gluPerspective.
A good place to start would be the function gluUnProject, which takes mouse coordinates and calculates object space coordinates. Take a look at http://nehe.gamedev.net/data/articles/article.asp?article=13 for a basic tutorial.
UPDATE:
You must enable depth buffering for the code in that article to work. The Z value for mouse coordinates is determined based on the value in the depth buffer at that point.
In your initialization code, make sure you do the following:
glEnable(GL_DEPTH_TEST);
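Putting it together, here is a minimal C sketch of the technique from that article, reading the depth under the cursor and unprojecting (the function name is illustrative):
#include <GL/glu.h>

/* Convert a mouse click (winx, winy in window coordinates, origin at the
   top-left) to world coordinates using the depth buffer. */
void mouse_to_world(int winx, int winy, double *wx, double *wy, double *wz)
{
    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    GLfloat depth;

    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    /* OpenGL's window origin is at the bottom-left, so flip Y. */
    GLint y = viewport[3] - winy;
    glReadPixels(winx, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    gluUnProject(winx, y, depth, modelview, projection, viewport,
                 wx, wy, wz);
}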
A point on the screen represents an entire line (an infinite set of points) in 3D space.
Most people with questions similar to yours are really trying to select an object by clicking on it. If that's what you're after, OpenGL offers a selection mode that's generally more effective than trying to convert the screen coordinate into real-world coordinates.
Using selection mode is (usually) pretty simple: you start with gluPickMatrix, which you use to specify a small box around the click point. You then draw your scene in selection mode. When you're done, instead of actually drawing anything, it gives you back records of what would have been drawn in the box you specified. Each hit record includes minimum and maximum depth values, so you can sort them to find the record that would have displayed front-most (i.e., the one you usually want).
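A sketch of the selection-mode flow in C (draw_scene() and the 5x5 pick-box size are illustrative; note that selection mode belongs to the fixed-function pipeline and was removed from core profiles in OpenGL 3 and later):
#include <GL/glu.h>

/* Returns the number of hit records for whatever would be drawn in a
   small box around the click at window coordinates (x, y). */
GLint pick(int x, int y)
{
    GLuint buffer[256];
    GLint viewport[4];

    glGetIntegerv(GL_VIEWPORT, viewport);
    glSelectBuffer(256, buffer);
    glRenderMode(GL_SELECT);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    /* 5x5-pixel pick region; flip y because GL's origin is bottom-left. */
    gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3] - y), 5.0, 5.0, viewport);
    gluPerspective(45.0, (GLdouble)viewport[2] / viewport[3], 0.1, 100.0);

    glInitNames();
    glPushName(0);
    /* draw_scene();  tag each pickable object with glLoadName(id) */

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    return glRenderMode(GL_RENDER);   /* hit records are now in buffer */
}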
