I am working on an OpenGL-based UI written in C. I have a hierarchy of controls -- small square 2D boxes inside a scrollable region of a bigger control. Inside the small boxes 3D scenes are drawn. The 3D scene must be scissored within the small 2D boxes and the small 2D boxes must be scissored within the larger region.
glScissor() is an obvious candidate for making sure drawing takes place within the desired region. The problem is that I cannot call glScissor() for the large region and then call glScissor() again for subregions inside it, because the second call simply replaces the first.
Example code to demonstrate what I want:
void drawBigBox() {
    glScissor( ... ); // set the bounds of the larger region
    for each little box {
        // these might be anywhere, but they should only be visible if inside the glScissor bounds
        drawLittleBoxes();
    }
}

void drawLittleBoxes() {
    // draw only inside the little box -- Whoops! This replaces the big box bounds!
    // How can I meet this constraint while retaining the outer scissor region?
    glScissor( ... );
    draw3dScene();
}
Is there a simple way of doing what I want? Obviously I could calculate the intersection of the small and large scissor boxes myself, or more carefully clip/project my 3D scene, but both are more complicated than an OpenGL call. Perhaps there is another approach to this that I am unaware of.
Thanks!
For this it is probably easier to use the stencil buffer. You can use INCR to increase the stencil values when writing a quad for each clipping region, and when drawing you set the stencil test to EQUAL (or GEQUAL) so that fragments are only written where the stencil buffer holds the highest value -- that is, inside the intersection of all current clipping regions. When walking back up the hierarchy, draw the same quads with DECR to remove each region again.
This way you can also have non rectangular clipping regions.
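To make this concrete, here is a minimal sketch, assuming the context was created with a stencil buffer that is cleared to zero each frame; pushClipRect/popClipRect and the inline quad drawing are only illustrative, not an existing API:

static GLuint clipLevel = 0;

static void drawClipQuad(float x, float y, float w, float h) {
    glBegin(GL_QUADS);
    glVertex2f(x, y);
    glVertex2f(x + w, y);
    glVertex2f(x + w, y + h);
    glVertex2f(x, y + h);
    glEnd();
}

void pushClipRect(float x, float y, float w, float h) {
    if (clipLevel == 0)
        glEnable(GL_STENCIL_TEST);

    /* Increment the stencil value under the clip quad, touching nothing else. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawClipQuad(x, y, w, h);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    /* Normal drawing now only passes where every enclosing clip quad was written. */
    clipLevel++;
    glStencilFunc(GL_EQUAL, clipLevel, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
}

void popClipRect(float x, float y, float w, float h) {
    /* Decrement the same quad again to undo this level. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    drawClipQuad(x, y, w, h);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    clipLevel--;
    glStencilFunc(GL_EQUAL, clipLevel, 0xFF);
    if (clipLevel == 0)
        glDisable(GL_STENCIL_TEST);
}

drawBigBox() would push the outer rectangle once, each drawLittleBoxes() call would push and pop its own rectangle around draw3dScene(), and everything drawn in between is automatically confined to the intersection.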
You could try to rely on glPushAttrib/glPopAttrib (with GL_SCISSOR_BIT) to save and restore the scissor state. However, that was removed in OpenGL 3.1, and the attribute stack may not be as deep as you need. So you should use an explicit stack.
Just make some object that you use to set the current scissor state. Each layer, as you go down the call stack, pushes its scissor rectangle onto an actual stack of scissor rectangles. I have a generalized version of this that also pushes transform matrices, blend parameters, and so forth.
The object either has to be global GUI state, or bundled with some object that you pass to each of the functions.
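A minimal sketch of such a stack in C; all the names here (pushScissor, popScissor, MAX_SCISSOR_DEPTH) are made up for the example:

#define MAX_SCISSOR_DEPTH 32

typedef struct { int x, y, w, h; } ScissorRect;

static ScissorRect scissorStack[MAX_SCISSOR_DEPTH];
static int scissorTop = -1;

static void applyScissor(const ScissorRect *r) {
    glScissor(r->x, r->y, r->w > 0 ? r->w : 0, r->h > 0 ? r->h : 0);
}

void pushScissor(int x, int y, int w, int h) {
    ScissorRect r = { x, y, w, h };
    if (scissorTop >= 0) {
        /* Intersect the new rectangle with the current top of the stack. */
        const ScissorRect *t = &scissorStack[scissorTop];
        int x0 = r.x > t->x ? r.x : t->x;
        int y0 = r.y > t->y ? r.y : t->y;
        int x1 = (r.x + r.w) < (t->x + t->w) ? (r.x + r.w) : (t->x + t->w);
        int y1 = (r.y + r.h) < (t->y + t->h) ? (r.y + r.h) : (t->y + t->h);
        r.x = x0; r.y = y0; r.w = x1 - x0; r.h = y1 - y0;
    } else {
        glEnable(GL_SCISSOR_TEST);
    }
    scissorStack[++scissorTop] = r;
    applyScissor(&r);
}

void popScissor(void) {
    if (--scissorTop >= 0)
        applyScissor(&scissorStack[scissorTop]);
    else
        glDisable(GL_SCISSOR_TEST);
}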
You can get the coordinates of the current scissor box (either to save them, or to calculate their intersection with another region) using glGet() with the GL_SCISSOR_BOX parameter.
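For example:

GLint box[4];
glGetIntegerv(GL_SCISSOR_BOX, box);
/* box[0], box[1] = lower-left corner; box[2], box[3] = width, height.
   Save these, intersect your inner rectangle with them before calling
   glScissor() again, and restore the saved box afterwards. */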
I'm performing some geographical computations in a grid with squares (i.e. regions). I'm using Delphi, but the logic could probably be applied to C++ too. Let me first explain what I want to do.
The following image is a portion of my grid, which is represented by a two-dimensional array Square that stores the centre point of each square; it also shows the "movement through the layers":
The green square has an X and Y coordinate of 2, so that is Square[2,2]. The actual coordinates are stored in Square[2,2].Latitude and Square[2,2].Longitude, as well as extra information in e.g. Square[2,2].Info that I use for computations.
Now comes the purpose: I need to do some computations on the surrounding areas. How many of the surrounding areas can be called "neighbours", depends on how many "layers" I have defined. In the image above, I used two of these "layers". That means that when starting from the green cell, I go around it once (blue arrows) and then again in the second layer (red arrows).
Now comes the problem: if I had started in Square[1,1] (green square) instead of Square[2,2], the second layer (in red) would try to access data to the left and at the bottom that does not exist (i.e. in the "-1" column and row), as the image below shows. This problem occurs at all borders, of course.
I could probably handle every scenario with IF-statements, but I was wondering if there are common programming "tricks" for situations where you try to access data that does not exist.
For example, I imagine it would be very handy if I can follow the pattern of the arrows depicted in the first image to access all the neighbouring squares every single time, even if there are non-existing squares. So, looking at the first image, after Square[3,0] you'd go to something like Square[3,-1] etc. and then eventually come back into the "feasible" zone in Square[0,3].
To visit the neighbourhood you can use some kind of BFS (breadth-first search).
But for a sparse structure (like the last picture shows) it is worth using a data structure that organizes the cells well. Perhaps a kd-tree is suitable: add all existing cells to the tree and do a range search around the given cell to get the other cells in its vicinity.
Also look at other spatial data structures (see the list at the bottom of the kd-tree page).
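If you prefer to keep the plain grid, the simplest common trick (the one the question hints at) is a single bounds check inside the traversal loop rather than a special case for every border. A rough sketch in C rather than Delphi, with GRID_W, GRID_H and visit() as placeholder names:

void visitNeighbours(int cx, int cy, int layers) {
    int dx, dy, x, y;
    for (dy = -layers; dy <= layers; ++dy) {
        for (dx = -layers; dx <= layers; ++dx) {
            if (dx == 0 && dy == 0)
                continue;                     /* skip the centre square itself */
            x = cx + dx;
            y = cy + dy;
            if (x < 0 || y < 0 || x >= GRID_W || y >= GRID_H)
                continue;                     /* the square does not exist */
            visit(x, y);                      /* e.g. read Square[x][y].Info */
        }
    }
}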
I am writing an OpenGL program in C that implements alpha-transparent billboarding particles that use a PNG (with transparency) as their texture via pnglib. However, I have discovered that a particle's transparent zones will still replace particles called before it that are also behind it. In other words, newly called particles, though transparent in some areas, completely overlap some particles called before them when, instead, those previously called particles should be showing through the transparency.
In order to visualize the effect this is having, I am attaching a few images to display the problem:
Initially I am calling the particles from oldest-to-newest:
However when the view is changed, the overlapping effect is apparent:
When I decide to reverse the call order I get the opposite:
I believe that a solution to this would involve calling the particles in order from farthest from the camera to nearest. However, it is pretty computationally heavy to go through each active particle, order them from furthest-to-nearest, and then call each one every display frame. I am hoping to find an easier, more efficient solution. I've already tried my hand with glBlendFunc() but no sfactor or dfactor seems to work.
Draw all non-transparent geometry first. Then, before drawing the particles, disable depth-buffer writes by calling glDepthMask(GL_FALSE).
That will fix most of the rendering problems.
Sorting the particles by distance from the camera is still a good idea. With today's CPU power that shouldn't be much of a problem.
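A sketch of that frame order, with the draw helpers standing in for your own code:

void renderFrame(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);

    drawOpaqueGeometry();              /* everything non-transparent first */

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);             /* particles are depth-tested but do not write depth */

    drawParticles();                   /* ideally still sorted back-to-front */

    glDepthMask(GL_TRUE);              /* restore state for the next frame */
    glDisable(GL_BLEND);
}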
I want to draw a tilemap in a game (ANSI C; C99 cannot be used due to Windows compatibility) that uses GL for accelerated graphics, although the game uses a top-down 2D perspective drawn with textured quads.
The popular opinion on handling a tilemap seems to be to use a GL vertex buffer object, which I am about to write. However, I realized I want some tiles to extend a little beyond their vertical bounds, faking a slanted aerial view. That will make whatever is directly above the block partially covered by the tile.
If I use a VBO here, I will need to draw the entire tilemap in one sitting. Meaning that any object I draw afterwards will be directly on top of the tilemap.
What would be the sanest approach to this problem? Should I draw the tilemap first, then the entities (players/enemies) and then the excess vertical space so they cover the entities, and finally the effects that display over both? (such as shots, explosions, etcetera). But this would give me the issue of shots not being covered by terrain, and if I change the order, terrain covering large explosions awkwardly.
Alternatively I can sort all visual objects and draw them in a top-down fashion, but that would mean I need to change textures often, as sorting by texture wouldn't help too much in this specific case.
As well, I want to be able to modify the colors of every individual vertex in the grid in a dynamic way, so that entities can cast colors into the map. From what I am understanding, the way to achieve this would be with a vertex shader. Is this correct?
EDIT: One last thing. If I draw a VBO like that tilemap that is larger than the screen and move around it by translating, does GL automatically cull out-of-view faces, or do I need to rebuild the VBO every time I move the "camera"?
A VBO is just a piece of abstract memory reserved in graphics memory. You can place data in it in any layout and arrangement you like, and you can use a single VBO to store several independent meshes. The gl{Vertex,Normal,TexCoord,Color,Attrib}Pointer functions set the offset into memory -- either into your process address space, or into the currently bound VBO.
Furthermore, one can easily draw only subsets of the bound data with either glDrawArrays or glDrawElements, by choosing an appropriate first element or appropriate indices in the index buffer.
So, no, you don't have to draw entire VBOs.
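For example, the tilemap could live in a single VBO split into a "ground" range and an "overhang" range, drawn in two passes with the entities in between. A sketch assuming interleaved 2D position + texcoord data; tilemapVbo, baseTileVertexCount and topTileVertexCount are made-up names:

glBindBuffer(GL_ARRAY_BUFFER, tilemapVbo);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

/* Interleaved layout: 2 floats position, 2 floats texcoord per vertex. */
glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (const GLvoid *)0);
glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat),
                  (const GLvoid *)(2 * sizeof(GLfloat)));

glDrawArrays(GL_QUADS, 0, baseTileVertexCount);      /* pass 1: ground tiles */

/* ... draw entities (players/enemies) here ... */

glDrawArrays(GL_QUADS, baseTileVertexCount,          /* pass 2: overhanging tile tops */
             topTileVertexCount);

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);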
I actually answered my own question. I needed to separate the map in two: blocks that have empty space directly on top, and the rest. Effects will be drawn in two passes, a "regular" layer and a "top" layer.
I feel pretty bad about having a useless question lying around, though, so if some admin needs to purge it, please go ahead.
Hey, I'm working on a map editor for my game, and I'm trying to convert the mouse position to a position in the game world. The view is set up using gluPerspective.
A good place to start would be the function gluUnProject, which takes mouse coordinates and calculates object space coordinates. Take a look at http://nehe.gamedev.net/data/articles/article.asp?article=13 for a basic tutorial.
UPDATE:
You must enable depth buffering for the code in that article to work. The Z value for mouse coordinates is determined based on the value in the depth buffer at that point.
In your initialization code, make sure you do the following:
glEnable(GL_DEPTH_TEST);
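With depth buffering enabled, a minimal sketch of the unprojection itself; mouseX and mouseY stand in for your window-space mouse position:

GLint viewport[4];
GLdouble modelview[16], projection[16];
GLfloat depth;
GLdouble winX, winY, worldX, worldY, worldZ;

glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);

winX = (GLdouble)mouseX;
winY = (GLdouble)(viewport[3] - mouseY);   /* GL's window origin is bottom-left */

/* Read the depth under the cursor, then unproject to object/world space. */
glReadPixels((GLint)winX, (GLint)winY, 1, 1,
             GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
gluUnProject(winX, winY, (GLdouble)depth,
             modelview, projection, viewport,
             &worldX, &worldY, &worldZ);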
A point on the screen represents an entire line (an infinite set of points) in 3D space.
Most people with questions similar to yours are really trying to select an object by clicking on it. If that's what you're after, OpenGL offers a selection mode that's generally more effective than trying to convert the screen coordinate into real-world coordinates.
Using selection mode is (usually) pretty simple: you start with gluPickMatrix, which you use to specify a small box around the click point. You then draw your scene in selection mode. When you're done, instead of actually drawing anything, it gives you back hit records for everything that would have been drawn inside the box you specified. Each record includes the minimum and maximum depth of the hit, so you can pick the one that would have displayed front-most (i.e., the one you usually want).
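A rough sketch of a pick function along those lines; the buffer size, the gluPerspective parameters and drawScene() are placeholders, and your drawing code must assign a name to each pickable object with glLoadName():

#define PICK_BUFFER_SIZE 512

GLuint pick(int mouseX, int mouseY) {
    GLuint buffer[PICK_BUFFER_SIZE];
    GLint viewport[4];
    GLint hits, i;
    GLuint nearest = 0, minDepth = 0xFFFFFFFFu;
    GLuint *p;

    glGetIntegerv(GL_VIEWPORT, viewport);
    glSelectBuffer(PICK_BUFFER_SIZE, buffer);
    glRenderMode(GL_SELECT);

    glInitNames();
    glPushName(0);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    /* 5x5 pixel pick region around the cursor (GL's window Y axis points up). */
    gluPickMatrix((GLdouble)mouseX, (GLdouble)(viewport[3] - mouseY),
                  5.0, 5.0, viewport);
    gluPerspective(45.0, (GLdouble)viewport[2] / viewport[3], 0.1, 100.0);

    drawScene();                       /* the same drawing code as the normal pass */

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    hits = glRenderMode(GL_RENDER);

    /* Each hit record holds: name count, min depth, max depth, then the names.
       Keep the record with the smallest minimum depth -- the front-most hit. */
    p = buffer;
    for (i = 0; i < hits; ++i) {
        GLuint count = p[0], zMin = p[1];
        if (count > 0 && zMin < minDepth) {
            minDepth = zMin;
            nearest = p[3];
        }
        p += 3 + count;
    }
    return nearest;                    /* 0 if nothing was under the cursor */
}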
In a 3D scene we often need to attach labels (small text elements or icons) next to a 3D object that is moving around (rotating, translating) in the scene. These labels should always face the camera but still move with the object. I believe this technique is called billboarding.
An additional cool feature would be if the label always stayed the same size, no matter how far away the associated object is. The label would then seem to live in 2D screen space and not in the 3D scene graph.
Has anyone figured out a clever way to do this in WPF?
For billboarding you need to make sure that the face normal is pointing towards the camera. The condition is that the dot product between the face normal and the view direction should be -1 (minus one).
I have some old C code that does this, but it's probably not particularly useful.
For keeping the object the same size you'd need to work out the screen size and then apply a transform to keep it the constant size you desired.
However, if you want the object to appear as though it's in 2D space, why not draw it in a 2D overlay? This will solve both the billboarding and scaling problem at the same time. You work out the screen location of your label and then use the 2D drawing functions.
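In OpenGL/C terms rather than WPF (the answer mentions having old C code for this), a minimal sketch of the overlay idea: project the object's anchor point to window coordinates, then draw the label in a screen-space orthographic pass so it neither rotates nor scales with distance. drawLabelQuad() is a placeholder for whatever draws the actual label:

void drawLabelAt(double objX, double objY, double objZ) {
    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    GLdouble winX, winY, winZ;

    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    if (gluProject(objX, objY, objZ, modelview, projection, viewport,
                   &winX, &winY, &winZ) != GL_TRUE || winZ > 1.0)
        return;                        /* not visible, skip the label */

    /* Switch to a pixel-aligned orthographic projection for the overlay. */
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluOrtho2D(0.0, (GLdouble)viewport[2], 0.0, (GLdouble)viewport[3]);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);

    drawLabelQuad(winX, winY);         /* constant pixel size on screen */

    glEnable(GL_DEPTH_TEST);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}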