VBO for tilemap (draw order and slanted aerial 2D) - c

I want to draw a tilemap in a game (ANSI C; C99 cannot be used due to Windows compatibility) that uses GL for accelerated graphics, although the game uses a top-down 2D perspective with textured quads.
The popular opinion for handling a tilemap seems to be to use a GL vertex buffer object, which I am about to write. However, I realized I want some tiles to extend a little beyond their vertical bounds, faking a slanted aerial view. That means whatever is directly above such a block will be partially covered by the tile.
If I use a VBO here, I will need to draw the entire tilemap in one go, meaning that any object I draw afterwards will end up directly on top of the tilemap.
What would be the sanest approach to this problem? Should I draw the tilemap first, then the entities (players/enemies), then the excess vertical space so it covers the entities, and finally the effects that display over both (such as shots, explosions, etcetera)? But then shots would not be covered by terrain, and if I change the order, terrain would awkwardly cover large explosions.
Alternatively I can sort all visual objects and draw them in a top-down fashion, but that would mean I need to change textures often, as sorting by texture wouldn't help too much in this specific case.
I also want to be able to modify the color of every individual vertex in the grid dynamically, so that entities can cast colors onto the map. From what I understand, the way to achieve this would be with a vertex shader. Is this correct?
EDIT: One last thing. If I draw a VBO such as this tilemap that is larger than the screen, positioning it by translation, does GL automatically cull out-of-view faces, or do I need to rebuild the VBO every time I move the "camera"?

A VBO is just a piece of abstract memory reserved in graphics memory. You can place data in it in any layout and arrangement you like, and you can use a single VBO to store several independent meshes. The gl{Vertex,Normal,TexCoord,Color,Attrib}Pointer functions set the offset into memory, which means either an address in the process address space or an offset into the currently bound VBO.
Furthermore, one can easily draw only subsets of the bound data with either glDrawArrays or glDrawElements, by choosing an appropriate first element or appropriate indices in the index buffer.
So, no, you don't have to draw entire VBOs.
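For example, with the classic gl*Pointer setup, drawing only a subrange of tiles out of one big VBO could look roughly like this (a minimal sketch; the interleaved layout, the function name and the use of GL_QUADS are assumptions, and it requires GL 1.5 entry points or the ARB suffix variants):

    #include <GL/gl.h>

    /* Hypothetical sketch: one VBO holding many tile quads, drawn in subranges.
       Assumed layout: interleaved x,y (2 floats) + u,v (2 floats) per vertex,
       4 vertices per tile, stored as GL_QUADS. */
    void draw_tile_range(GLuint vbo, int first_tile, int tile_count)
    {
        const GLsizei stride = 4 * sizeof(GLfloat); /* 2 pos + 2 texcoord */

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        /* With a VBO bound, the "pointer" argument is a byte offset into the buffer. */
        glVertexPointer(2, GL_FLOAT, stride, (const GLvoid *)0);
        glTexCoordPointer(2, GL_FLOAT, stride, (const GLvoid *)(2 * sizeof(GLfloat)));

        /* Draw only the requested subrange: 4 vertices per tile. */
        glDrawArrays(GL_QUADS, first_tile * 4, tile_count * 4);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }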

I actually answered my own question. I needed to separate the map in two: blocks that have empty space directly on top of them, and then the rest. Effects will be drawn in two passes, "regular" and "top layer".
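A rough sketch of what the split could look like per frame (all function names are hypothetical placeholders, and the exact ordering is just one possibility):

    /* One plausible per-frame ordering for the two-part map split. */
    void render_frame(void)
    {
        draw_map_open_tiles();      /* blocks with empty space directly above them */
        draw_entities();            /* players, enemies                             */
        draw_effects_regular();     /* first ("regular") effects pass               */
        draw_map_covering_tiles();  /* the rest: tiles whose tops overhang          */
        draw_effects_top();         /* second ("top layer") effects pass            */
    }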
I feel pretty bad about having a useless question lying around though, so if some admin needs to purge it, please go ahead.

Related

Is there a specific drawing order in PCL?

I am trying to write a PCL document, which has several drawing objects (lines, rectangles, texts...)
I found that if I draw the rectangles before anything else, they appear in the right position and size. However, if I draw them among the rest of objects, they are drawn smaller and in the wrong place.
The PCL seems to be OK (although this is yet to be proven), but it has made me think that perhaps graphic objects must be drawn in a particular order (I am using HPGL/2, by the way).
Does someone know if this is so? I have not been able to find anything in the PCL Manual nor on the internet (which leads me to believe that there is no such drawing order).
Perhaps you have written position or scale commands that unintentionally affect your rectangles.

Determining if a polygon is inside the viewing frustum

Here are my questions. I heard that OpenGL ignores vertices which are outside the viewing frustum and doesn't consider them in the rendering pipeline. Recently I ran into a post that said you should check this yourself, and if a point is not inside, it is your duty to find that out, not OpenGL's! Now,
Is this true about OpenGL? Does it figure out when a point is not inside, and avoid rendering it?
I am developing a grass scene which has about 4000 grass patches drawn on rectangles. I get awful FPS, and the only solution I came up with was to decide which grass patches are inside the viewport and render only those! My question here is: what is the best way for me to find out which rectangles are inside the frustum and which are not?
Please consider that my question is not mainly about points but about rectangles. Also, I need to sort the grass patches based on their distance, so it would be better if this could be done natively in client-side memory.
Please let me know if there are any effective and real-time ways to find out if any given mesh is inside or outside the frustum. Thanks.
Even if it is true that OpenGL does not show polygons outside the frustum (like any other 3D engine), it still has to consider them to check whether they are inside or not, and that slows the FPS down. Usually some smart optimization algorithm is needed to avoid flooding the scene with invisible objects. Check, for example, BSP trees + PVS or portals as a starting point.
To check whether there is some bottleneck in the application, you can try gDebugger. If nothing else is unreasonably wrong, optimizing so that you draw just the PVS (potentially visible set) is the way to go.
OpenGL won't render pixels ("fragments") outside your screen, so it has to clip somehow...
More precisely:
You submit your geometry
You make a Draw Call (glDrawArrays or glDrawElements)
Each vertex goes through the vertex shader, which computes the final position of the vertex in camera space. If you didn't write a vertex shader (=old opengl), the driver will create one for you.
The perspective division transforms these coordinates into Normalized Device Coordinates. Roughly, this means that the frustum of your camera is deformed to fit into a [-1,1]x[-1,1]x[-1,1] box
Everything outside this box is clipped. This can mean completely discarding a triangle, or subdividing it if it crosses a clipping plane
Each remaining triangle is rasterized into fragments
Each fragment goes through the fragment shader
So basically, OpenGL knows how to clip, but each vertex still has to go through the vertex shader. So submitting your entire world will work, of course, but if you can find a way not to submit everything, your GPU will be happier.
This is a tradeoff, of course. If you spend 10ms checking each and every patch of grass on the CPU so that the GPU has only the minimal amount of data to draw, it's not a good solution either.
If you want to optimize grass, I suggest culling big patches (5m x 5m or so). It's standard AABB-frustum testing.
If you want to optimize a more generic model, you can investigate quadtrees for "flat" models, and octrees and BSP trees for more complex objects.
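For reference, the standard AABB-frustum test can be done per plane with the "farthest corner" trick. A minimal C sketch, assuming you have already extracted the six frustum planes (normals pointing inward) from your projection*modelview matrix; the type names are just assumptions:

    typedef struct { float a, b, c, d; } plane_t;
    typedef struct { float min[3], max[3]; } aabb_t;

    /* Returns 0 if the box is completely outside the frustum,
       1 if it is inside or intersecting. */
    int aabb_in_frustum(const plane_t planes[6], const aabb_t *box)
    {
        int i;
        for (i = 0; i < 6; ++i) {
            /* Pick the corner of the box farthest along the plane normal. */
            float px = planes[i].a >= 0.0f ? box->max[0] : box->min[0];
            float py = planes[i].b >= 0.0f ? box->max[1] : box->min[1];
            float pz = planes[i].c >= 0.0f ? box->max[2] : box->min[2];
            if (planes[i].a * px + planes[i].b * py + planes[i].c * pz + planes[i].d < 0.0f)
                return 0; /* entirely on the outside of this plane */
        }
        return 1;
    }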
Yes, OpenGL does not rasterize triangles outside the viewing frustum. But this doesn't mean that relying on it is optimal for applications: the OpenGL implementation still has to transform the vertex coordinates (using the fixed pipeline or vertex shaders); only once it has the normalized coordinates does it finally know whether the triangle lies inside the viewing frustum.
This means that no pixels are rasterized in those cases, but the vertex data is processed all the same; it simply doesn't produce fragments derived from a non-visible triangle!
The OpenGL extension ARB_occlusion_query may help you, but its discussion section makes it clear:
Do occlusion queries make other visibility algorithms obsolete?
No.
Occlusion queries are helpful, but they are not a cure-all. They
should be only one of many items in your bag of tricks to decide
whether objects are visible or invisible. They are not an excuse
to skip frustum culling, or precomputing visibility using portals
for static environments, or other standard visibility techniques.
For the question regarding sorting meshes by depth, you should use the depth buffer: essentially, a mesh fragment is rendered only if its distance from the viewpoint is less than that of the previous fragment at the same position. This saves you from having to sort the meshes yourself. The buffer is essentially free, and it improves performance since it discards the more distant fragments.
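Enabling it is just a few calls (a sketch, assuming the context was created with a depth buffer):

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);   /* keep the nearest fragment; this is the default */
    /* once per frame: */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);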
Yes. Like others have pointed out, OpenGL has to perform a lot of per-vertex operations to determine if it is in the frustum. It must do this for every vertex you send it. In addition to the processing overhead that must take place, keep in mind that there is also additional overhead in the transmission of those vertices from the CPU to the GPU. You want to avoid sending information to the GPU that it isn't going to use. Though the bandwidth between the CPU and GPU is quite good on modern hardware, there's still a limit.
What you want is a Scene Graph. Scene graphs are frequently implemented with some kind of spatial partitioning scheme, e.g., Quadtrees, Octrees, BSPTrees, etc etc. Spatial partitioning allows you to intelligently determine what geometries are visible. Instead of doing this on a per-vertex basis (like OpenGL is forced to do) it can eliminate huge spatial subsets of geometry at a time. When rendering a complex scene, the performance savings can be enormous.

How do I use OpenGL 3.x VBOs to render a dynamic world?

Although there seem to be very few up-to-date references for OpenGL 3.x itself, the actual low-level manipulation of OpenGL is relatively straightforward. However, I am having serious trouble trying to even conceptualise how one would manipulate VBOs in order to render a dynamic world.
Obviously the immediate-mode ways of old are not applicable, but where do I go from there? Do I write some kind of scene structure and then convert that to a set of vertices and stream it to the VBO? How would I store translation data? If so, how would that look code-wise?
Basically, I am really unsure how to continue.
If your entire world is truly dynamic, you can use the GL_STREAM_DRAW_ARB usage flag and reset the data on each frame. Don't bother manipulating it, just try to stream as efficiently as possible.
However, I assume that you have a scene that consists of multiple rigid objects that move relative to each other. In this case, use one VBO for each object and specify the GL_STATIC_DRAW_ARB usage flag. You can then set the modelview transform for each instance of an object and render them using one draw call per instance.
A rule of thumb (on the PC) is to issue not more than one draw call per MHz of your CPU. This is a crude estimate, but there's some truth to it. Don't worry about putting multiple independent objects into a single VBO or other performance tricks if you stay below this limit.
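A rough sketch of that "one static VBO per object, one draw call per instance" idea, assuming a GL 3.x setup where each object already has a VAO bound to its VBO and your shader exposes a modelview uniform (all names and struct fields here are assumptions):

    typedef struct {
        GLuint  vao;            /* VAO referencing the object's static VBO */
        GLsizei vertex_count;
        float   modelview[16];  /* per-instance transform, column-major    */
    } instance_t;

    void draw_instances(GLuint program, const instance_t *inst, int count)
    {
        GLint loc = glGetUniformLocation(program, "u_modelview");
        int i;

        glUseProgram(program);
        for (i = 0; i < count; ++i) {
            glUniformMatrix4fv(loc, 1, GL_FALSE, inst[i].modelview);
            glBindVertexArray(inst[i].vao);
            glDrawArrays(GL_TRIANGLES, 0, inst[i].vertex_count);
        }
        glBindVertexArray(0);
    }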
Short answer:
Use glMapBufferRange and only update the subrange that needs modification.
Long answer:
The trick is to map the already existing buffer with glMapBufferRange, and then only map the range you need. Given these assumptions:
Your geometry uses per-vertex animation morphing
The vertex count for models is constant during animation.
Then you can use glMapBufferRange to update only the changing parts and leave the rest of the data alone. Full uploads using glBufferData are slow as a turtle, because they delete the old memory store and allocate a new one, in addition to uploading the new data. glMapBufferRange only lets you read/write the existing data; it does no allocation or deallocation.
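A minimal sketch of that partial update, assuming an extension loader (GLEW or similar) exposes the GL 3.0 entry points; offsets and sizes are in bytes:

    #include <string.h>  /* memcpy */

    void update_vertex_subrange(GLuint vbo, GLintptr offset_bytes,
                                GLsizeiptr size_bytes, const void *new_data)
    {
        void *dst;

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        dst = glMapBufferRange(GL_ARRAY_BUFFER, offset_bytes, size_bytes,
                               GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
        if (dst != NULL) {
            memcpy(dst, new_data, (size_t)size_bytes);
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }
    }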
However, if you use skeletal animation, rather pass the vertex transformations as 4x4 matrices to the vertex shader and do the calculations there. Per-vertex data is of course specified with glVertexAttribPointer.
Also, remember that you can read texture data in the vertex shader, and that OpenGL 3.1 introduced some new instanced draw calls: glDrawArraysInstanced and glDrawElementsInstanced. Combined, these can be used for instance-specific lookups. I.e. you can do instanced draw calls with the same geometry data bound, but send positions or whatever per-instance data you need as textures or texture arrays. This can save you from mixing and matching different vertex array data sets.
Imagine if you want to render 100 instances of the same model, but with different positions or color schemes. Or even texture maps.
Using VBOs doesn't mean you have to render your entire scene with a single draw call. You can still issue multiple draw calls, and set up different transformation matrices along the way.
For example, if you're using a scenegraph, each model in the scenegraph can correspond to a single draw call. In such a case, the easiest way to use VBOs is creating a separate VBO for each model.
As an optimization, you might be able to combine several models into a single VBO, then pass in non-zero offsets when making your draw calls; this plucks out the correct model from the VBO. It's also desirable to combine multiple draw calls into a single draw call, but that's not possible if they need to have independent transforms. (Actually it is possible in certain situations if you use instancing or vertex blending, but I suggest getting the basics out of the way first.)

How can I manage a cache texture in OpenGL?

I am writing a text renderer for an OpenGL application. Size, colour, font face, and anti-aliasing can be twiddled at run time (and so multiple font faces can appear on the screen at once). There are too many combinations to allocate one texture to each combination of string and attributes. However, only a small subset of the entire database of strings will be on the screen at any given time.
This leads me into the opportunity to create a cache for the strings that are being printed frame after frame. It has been mandated that I use only one texture for the entire operation, as creating a cache of many textures would incur a texture swapping penalty for every different string printed from the cache.
So I have before me a 2048x2048 texture, into which I can place whatever strings I can fit as they are being requested by the application for caching purposes. I have quickly realized that tracking the free space available in a two dimensional space is not trivial.
I have been looking at things like Best Fit and Next fit, but those seem to be suitable for 1d spaces.
How can I manage this cache texture in OpenGL?
Edit: I have since learned that this is an instance of a "2d packing problem".
What you have is the bin-packing problem.
Bad news first: it's NP-hard, so it's not worth trying to find the optimal solution.
I've done such texture caching for fonts as well. I didn't cache entire words, just the glyph images. That makes things a lot easier because all your images are roughly square-shaped. A simple grid-based approach to keep track of the texture memory worked pretty well.
When I got glyphs larger than one of my grid boxes, I just allocated two or more boxes using a brute-force search (it didn't happen that often). When I couldn't find any suitable block, I just randomly removed some glyphs from the cache to make free space.
That was much easier than keeping a least-recently-used cache, and it performed nearly as well.
Btw, you will always waste some texture memory with such a cache. Unless you're very tight on memory, that shouldn't be a problem. You should use a small texture format (8-bit alpha works well for fonts).
Also: if you make your grid blocks a multiple of 8 pixels, and you can drop your antialiasing to 4 bits, you can compress the glyphs into one of the compressed DXT or S3TC formats on the fly. The wasted texture space becomes a non-issue that way.
If you are short on texture memory, you could take a look at the "distance field" or "signed distance field" font rendering technique. You could use a 512x512 texture per font family and render perfectly antialiased text of any size.
For that algorithm you need to generate a special texture which, for each texel, stores the distance to the nearest glyph edge. Take a look at the original paper by the Valve guys: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf . There are some frameworks which utilize this; for instance, the latest version of Qt uses signed distance fields for text rendering.
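The cheapest variant from that paper ("alpha-tested magnification") doesn't even need a shader; a rough sketch, assuming distance_field_texture is the GLuint handle of your generated distance-field texture:

    glBindTexture(GL_TEXTURE_2D, distance_field_texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GEQUAL, 0.5f);  /* keep texels at or inside the 0.5 iso-value */
    /* ... draw the text quads ... */
    glDisable(GL_ALPHA_TEST);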
I have opted to use a simple approach. Divide the texture into variable-height rows. The first texture to be placed in a row decides the height of the row. If a texture can fit into an existing row by height, check whether there is enough width remaining and place it there. Otherwise start a new row. If a new row cannot be started, do not cache the string.
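A sketch of that row allocator, in case it is useful to anyone (all names and the fixed limits are assumptions; coordinates are texels within the 2048x2048 cache texture):

    #define CACHE_SIZE 2048
    #define MAX_ROWS   64

    typedef struct { int y, height, used_width; } row_t;

    static row_t rows[MAX_ROWS];
    static int   row_count  = 0;
    static int   next_row_y = 0;

    /* Returns 1 and writes the placement on success, 0 if the string can't be cached. */
    int cache_alloc(int w, int h, int *out_x, int *out_y)
    {
        int i;
        for (i = 0; i < row_count; ++i) {
            if (h <= rows[i].height && rows[i].used_width + w <= CACHE_SIZE) {
                *out_x = rows[i].used_width;
                *out_y = rows[i].y;
                rows[i].used_width += w;
                return 1;
            }
        }
        if (row_count < MAX_ROWS && next_row_y + h <= CACHE_SIZE) {
            rows[row_count].y = next_row_y;
            rows[row_count].height = h;
            rows[row_count].used_width = w;
            *out_x = 0;
            *out_y = next_row_y;
            next_row_y += h;
            row_count++;
            return 1;
        }
        return 0; /* no room: do not cache this string */
    }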

WPF 3D Billboards

In a 3D scene we often need to apply labels (little text elements or icons) next to a 3D object that is moving around (rotation, translation) in the scene. These labels should always face the camera but still move with the object. This technique is, I believe, called billboarding.
An additional cool feature would be if the label always stayed the same size, no matter how far away the associated object is. The label would then seem to live in 2D screen space and not in the 3D scene graph.
Has anyone figured out a clever way to do this in WPF?
For billboarding you need to make sure that the face normal is pointing towards the camera. The algorithm is that the dot product between the face normal and the view direction should be -1 (minus one).
I have some old C code that does this, but it's probably not particularly useful.
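For what it's worth, the usual trick in plain C (not WPF-specific, and just a sketch) is to build the quad from the camera's right and up vectors, which are the first two rows of the view matrix's upper 3x3 (assuming no scaling); with OpenGL's column-major modelview matrix that looks roughly like:

    /* Build a camera-facing quad around "center" with half-width/height "half_size".
       "modelview" is the current modelview matrix in column-major order. */
    void billboard_quad(const float modelview[16], const float center[3],
                        float half_size, float corners[4][3])
    {
        float right[3], up[3];
        int i;

        right[0] = modelview[0]; right[1] = modelview[4]; right[2] = modelview[8];
        up[0]    = modelview[1]; up[1]    = modelview[5]; up[2]    = modelview[9];

        for (i = 0; i < 3; ++i) {
            corners[0][i] = center[i] + (-right[i] - up[i]) * half_size;
            corners[1][i] = center[i] + ( right[i] - up[i]) * half_size;
            corners[2][i] = center[i] + ( right[i] + up[i]) * half_size;
            corners[3][i] = center[i] + (-right[i] + up[i]) * half_size;
        }
    }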
For keeping the object the same size you'd need to work out the screen size and then apply a transform to keep it the constant size you desired.
However, if you want the object to appear as though it's in 2D space, why not draw it in a 2D overlay? This will solve both the billboarding and scaling problem at the same time. You work out the screen location of your label and then use the 2D drawing functions.
