Optimizing drawing a cube with multiple textures - Silverlight

I am trying to optimize drawing a cube with 3 different textures: one for the front face, one for the back face, and one for the sides.
What I am doing now is drawing the cube using three Draw() calls:
graphicsDevice.Textures[0] = cube.frontTexture;
graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 36, 0, 2);
graphicsDevice.Textures[0] = cube.backTexture;
graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 6, 0, 30, 0, 2);
graphicsDevice.Textures[0] = cube.sideTexture;
graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 12, 0, 24, 0, 8);
Then the texture is processed in the pixel shader, where I sample it:
texture Texture;
sampler textureSampler : register(s0) = sampler_state {
    Texture = (Texture);
    Filter = MIN_MAG_MIP_POINT;
    AddressU = Wrap;
    AddressV = Wrap;
};
and produce output:
return tex2D(textureSampler, texCoord); // I have my texCoords from vertex shader output
Unfortunately, my scene contains hundreds of similar cubes with different textures, as well as other objects, and this has a bad influence on the FPS rate. What I noticed is that I can sample more than one texture in my pixel shader:
graphicsDevice.Textures[0] = cube.frontTexture;
graphicsDevice.Textures[1] = cube.backTexture;
graphicsDevice.Textures[2] = cube.sideTexture;
Can I somehow assign each texture to the proper face of the cuboid in my pixel shader, so that the cube can be drawn with one Draw() call? I use Silverlight 5.0, but answers concerning XNA or MonoGame will also be appreciated :)

You could take all 3 bitmaps that make up the 3 textures and add them to one big bitmap. The result would be one big texture or texture atlas. Then you just set the UV for each face to the appropriate parts of the large texture.
In this way, there is only ever one bound texture and no need to perform expensive texture context switches.
This is a common practice; sprite sheets are the same idea applied to 2D sprites.
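For illustration only (none of this is from the question): suppose the three bitmaps are packed side by side in one atlas, so the front face occupies the left third of it. The front face's vertices then carry atlas UVs, and the whole cube can be drawn with a single call. The names frontFaceVertices and atlasTexture are placeholders:
// Hypothetical atlas layout: [ front | back | side ] packed horizontally,
// so the front face uses U in [0, 1/3] instead of [0, 1].
VertexPositionTexture[] frontFaceVertices =
{
    new VertexPositionTexture(new Vector3(-1,  1, 1), new Vector2(0f,      0f)),
    new VertexPositionTexture(new Vector3( 1,  1, 1), new Vector2(1f / 3f, 0f)),
    new VertexPositionTexture(new Vector3( 1, -1, 1), new Vector2(1f / 3f, 1f)),
    new VertexPositionTexture(new Vector3(-1, -1, 1), new Vector2(0f,      1f)),
};
// One bound texture, one draw call for all 12 triangles of the cube:
graphicsDevice.Textures[0] = atlasTexture;
graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 24, 0, 12);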

As @Micky says, it's good practice to use sprite sheets instead of thousands of separate textures. In my case I want to keep separate texture files in the file system but only a few textures in the game, so I wrote a special class that composes the small textures into big sprite sheets and recalculates the texture coordinates.
There is a lot of code, so I'd better provide a link to the sources.
The textures are packed when the game starts. If you don't mind a few seconds of processing, you can use it for your sprites.

Related

Generating an orthographic MVP matrix for a global shadow map

I am making a voxel-based game engine, and I am in the process of implementing shadow maps. The shadows will be directional, and affect the whole scene. The map here is 40x40 units big (from a top-down perspective, since the map is generated from a heightmap).
My plan is to put the directional light in the middle of the scene, and then change the direction of the light as needed. I do not want to recompute the shadow map according to the player's position because that requires rerendering it each frame, and I want to run this engine on very low-power hardware.
Here is my problem: I am trying to calculate the model-view-projection matrix for this world. Here is my current work, using the cglm math library:
const int light_height = 25, map_width = map_size[0], map_height = map_size[1];
const int half_map_width = map_width >> 1, half_map_height = map_height >> 1;
const float near_clip_dist = 0.01f;
// Matrix calculations
mat4 view, projection, model_view_projection;
glm_look_anyup((vec3) {half_map_width, light_height, half_map_height}, light_dir, view);
glm_ortho(-half_map_width, half_map_width, half_map_height, -half_map_height, near_clip_dist, light_height, projection);
glm_mul(projection, view, model_view_projection);
Depending on the light direction, some parts of the scene are sometimes not in shadow when they should be. I think the problem is with my call to glm_ortho, since the documentation says that the orthographic projection corners should be given in terms of the viewport. In that case, how do I transform world-space coordinates to view-space coordinates to calculate this matrix correctly?

Absolute positioning in OpenGL with C

(Using OpenGL, GLUT, GLU, and C)
I am trying to create a 3D game in C, and I have the camera movement, collision detection and all of the main stuff ready. However, I have failed at the first hurdle. To create my rectangles I am using
glutSolidCube (2.0);
And I know about transformations, scale and rotations; however, I am looking for how to place it in a precise location. Say I had a 3D space, with XYZ. Say I had the camera at 5,5,20, looking towards 0,0,0 (so at an angle) and wanted to place a Cube at 5,2,10, and then another at -5,-2,20. How would I use these absolute positions? Also, how would I use absolute sizes, so say I wanted the one at -5,-2,20 to be 20,5,10 in size. How would I do this in OpenGL?
You'll have to use the functions:
glTranslatef()
glRotatef()
glScalef()
Additionally, also learn these:
glPushMatrix()
glPopMatrix()
Read the OpenGL reference for details.
First forget about glutSolidCube. GLUT is not a part of OpenGL, it's just a small convenience library for it.
You must understand that OpenGL only deals with points, lines and triangles. It doesn't maintain a scene; it merely draws points, lines and triangles, each on its own, without any notion of topology. Also, OpenGL should not be confused with a math library. The functions glTranslate, glRotate, glScale and so on are pure legacy and have been removed from contemporary OpenGL versions.
That being said...
Say I had the camera at 5,5,20, looking towards 0,0,0 (So at an angle) and wanted to place a Cube at 5,2,10, and then another at -5,-2,20. How would I use these absolute positions? Also, how would I use absolute sizes, so say I wanted the one at -5,-2,20 to be 20,5,10 in size. How would I do this in OpenGL?
I'll go along with what you already know (which means old OpenGL 1.1 and GLUT):
void draw()
{
    /* Viewport and projection really should be set in the
       drawing handler. They don't belong in the reshape handler. */
    glViewport(...);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    your_projection();

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(5, 5, 20, 0, 0, 0, 0, 1, 0);

    glPushMatrix();
    glTranslatef(5, 2, 10);
    draw_cube();
    glPopMatrix();

    glPushMatrix();
    glTranslatef(-5, -2, 20);
    draw_cube();
    glPopMatrix();

    glPushMatrix();
    glTranslatef(-5, -2, 20);
    glScalef(20, 5, 10);
    draw_cube();
    glPopMatrix();
}

What vertex shader code should be used for a pixel shader used for simple 2D SpriteBatch drawing in XNA?

Preface
First of all, why is a vertex shader required for a SilverlightEffect (.slfx file) in Silverlight 5? I'm trying to port a simple 2D XNA game to Silverlight 5 RC, and I would like to use a basic pixel shader. This shader works great in XNA for Windows and Xbox, but I can't get it to compile with Silverlight as a SilverlightEffect.
The MS blog for the Silverlight Toolkit says that "there is no difference between .slfx and .fx", but apparently this isn't quite true -- or at least SpriteBatch is working some magic for us in "regular XNA", and it isn't in "Silverlight XNA".
If I try to directly copy my pixel shader file into a Silverlight project (and change it to the supported "Effect - Silverlight" importer/processor), when I try to compile I see the following error message:
Invalid effect file. Unable to find vertex shader in pass "P0"
Indeed, there isn't a vertex shader in my pixel shader file. I haven't needed one with my other 2D XNA apps since I'm just doing basic SpriteBatch drawing.
I tried adding a vertex shader to my shader file, using Remi Gillig's comment on this Shawn Hargreaves blog post for guidance, but it doesn't quite work. The shader file successfully compiles, and I see some semblance of my game on screen, but it's tiny, twisted, repeated, and all jumbled up. So clearly something's not quite right.
The Real Question
So that brings me to my real question: Since a vertex shader is required, is there a basic vertex shader function that works for simple 2D SpriteBatch drawing?
And if the vertex shader requires world/view/projection matrices as parameters, what values am I supposed to use for a 2D game?
Can any shader pros help? Thanks!
In XNA, SpriteBatch is providing its own vertex shader. Because there is no SpriteBatch in Silverlight 5, you have to provide your own.
You may want to look at the source code for the shader that SpriteBatch uses (it's in SpriteEffect.fx). Here is its vertex shader, it's pretty simple:
void SpriteVertexShader(inout float4 color    : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : SV_Position)
{
    position = mul(position, MatrixTransform);
}
So now you just need the input position and the transformation matrix (and the texture co-ordinates, but those are fairly simple).
The same blog post you linked, towards the end, tells you how to set up the matrix (also here). Here it is again:
Matrix projection = Matrix.CreateOrthographicOffCenter(0, viewport.Width, viewport.Height, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
Matrix transformation = halfPixelOffset * projection;
(Note: I'm not sure if Silverlight actually requires the half-pixel offset to maintain texel alignment (ref). It probably does.)
The tricky bit is that the positioning of the sprite is done outside of the shader, on the CPU. Here's the order of what SpriteBatch does:
Start with four vertices in a rectangle, with (0,0) being the top-left and the texture's (width,height) as the bottom right
Translate backwards by origin
Scale
Rotate
Translate by position
This places the sprite vertices in "client space", and then the transformation matrix transforms those vertices from client space into projection space (which is used for drawing).
I've got a highly-inlined version of this transformation in ExEn (SpriteBatch.InternalDraw in SpriteBatchOpenGL.cs) that you could easily adapt for your use.
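For reference, reproducing that ordering on the CPU with XNA's Matrix helpers could look like the sketch below. The origin, scale, rotation and position variables stand in for your own sprite parameters; this is not SpriteBatch's actual code:
// Order matters with XNA's row-vector convention: the leftmost matrix is applied first.
Matrix world =
    Matrix.CreateTranslation(-origin.X, -origin.Y, 0) *  // translate backwards by origin
    Matrix.CreateScale(scale.X, scale.Y, 1) *            // scale
    Matrix.CreateRotationZ(rotation) *                    // rotate
    Matrix.CreateTranslation(position.X, position.Y, 0);  // translate by position
// Multiplying by the projection/half-pixel matrix from above then takes the
// rectangle's corners from client space into projection space.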
I came across this question today and I want to add that you can do all of the calculations in the vertex shader itself:
float2 Viewport; // Set to viewport size from application code

void SpriteVertexShader(inout float4 color    : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : POSITION0)
{
    // Half pixel offset for correct texel centering.
    position.xy -= 0.5;

    // Viewport adjustment.
    position.xy = position.xy / Viewport;
    position.xy *= float2(2, -2);
    position.xy -= float2(1, -1);
}
(I didn't write this; credit goes to the comments in the article mentioned above.)
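If you go with this variant, remember to pass the viewport size from application code each frame. With the regular XNA Effect API that would look roughly like the line below, where spriteEffect is whatever effect instance you loaded and "Viewport" matches the parameter declared in the shader; Silverlight 5's SilverlightEffect exposes parameters in a similar way, though the exact call may differ:
spriteEffect.Parameters["Viewport"].SetValue(
    new Vector2(graphicsDevice.Viewport.Width, graphicsDevice.Viewport.Height));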

glVertexAttribPointer, interleaved elements and performance / cache friendliness

So, in the course of writing a model loader for a 3D scene I'm working on, I've decided to pack the vertex, texture and normal data like so:
VVVVTTTNNN
for each vertex, where V = vertex coordinate, T = UV coordinate, and N = normal coordinate. When I pass this data on to the vertex shader for my scene, I make three glVertexAttribPointer calls, like so:
glVertexAttribPointer(ATTRIB_VERTEX, 4, GL_FLOAT, 0, 10, group->vertices.data);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_NORMAL, 3, GL_FLOAT, 0, 10, group->normals.data);
glEnableVertexAttribArray(ATTRIB_NORMAL);
glVertexAttribPointer(ATTRIB_UV_COORDINATES, 3, GL_FLOAT, 0, 10, group->uvcoordinates.data);
glEnableVertexAttribArray(ATTRIB_UV_COORDINATES);
Each of the group pointers being passed refer to the beginning position in the shared vertex data block where that vertex type starts:
group->vertices.data == data
group->uvcoordinates.data == &data[4]
group->normals.data == &data[7]
Part of the reason for me interleaving this data was to program for cache friendliness and minimize data being sent to the card. ( NOTE: This is not for a realistic performance bottleneck. I'm investigating the optimization because I want to learn more about programming to address these sort of concerns. ) However, for the life of me, I can't imagine how GL would be able to infer that the 3 different pointers refer to offset positions within the same larger data block, and thereby make the necessary optimization to avoid copying the data once it has already been copied. Furthermore, since I'm only ensuring data locality in system memory ( and don't really have any guarantees on how that data is going to be organized on the GPU ), I'm only really optimizing for the case where I access any of these vertices outside of GL. Is that right? Are these optimizations mostly useless, or will providing data in this manner help minimize the data transfer to the GPU / prevent cache misses when iterating over vertex data in the vertex shader?
OpenGL is just an API; the intelligence lies in the driver. Anyway, the problem is actually rather simple to handle: for every vertex attribute you have a starting memory address, and when glDrawArrays or glDrawElements is called, the driver looks for the largest index used, which defines the upper bound of the range.
Then you sort the vertex attribute starting addresses and, for each address, check whether its range overlaps with any other vertex attribute's range. You find the contiguous regions and copy those.
In the case of vertex buffer objects it's even simpler, since you have already copied the data to OpenGL, ready for processing.
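For what it's worth, the usual way to make the interleaving explicit is to upload the whole block into a single vertex buffer object and describe each attribute with a byte stride and a byte offset into that buffer. A sketch, reusing the attribute names from the question (vertex_count is assumed to be the number of vertices in data):
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertex_count * 10 * sizeof(GLfloat), data, GL_STATIC_DRAW);

GLsizei stride = 10 * sizeof(GLfloat); /* VVVV TTT NNN = 10 floats per vertex */
glVertexAttribPointer(ATTRIB_VERTEX,         4, GL_FLOAT, GL_FALSE, stride, (void *)0);
glVertexAttribPointer(ATTRIB_UV_COORDINATES, 3, GL_FLOAT, GL_FALSE, stride, (void *)(4 * sizeof(GLfloat)));
glVertexAttribPointer(ATTRIB_NORMAL,         3, GL_FLOAT, GL_FALSE, stride, (void *)(7 * sizeof(GLfloat)));
glEnableVertexAttribArray(ATTRIB_VERTEX);
glEnableVertexAttribArray(ATTRIB_UV_COORDINATES);
glEnableVertexAttribArray(ATTRIB_NORMAL);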

OpenGL tex-mapping: Whole object is a single texel's color

I've been trying to get a HUD texture to display for a simulator for a while now, without success.
First I bind the texture like this:
glGenTextures(1,&hudTexObj);
gHud = getPPM("textures/purplenebula/hud.ppm",&n,&m,&s);
glBindTexture(GL_TEXTURE_2D,hudTexObj);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
//glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,n,m,0,GL_RGB,GL_UNSIGNED_INT, gHud);
I then attempt to map it onto a QUAD, which results in the whole quad being a single brown color, when I want it to use all the texels. Here's how I map it:
glBindTexture(GL_TEXTURE_2D, hudTexObj);
glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(0, 0);
    glTexCoord2f(0.0, 1.0); glVertex2f(0, m);
    glTexCoord2f(1.0, 1.0); glVertex2f(n, m);
    glTexCoord2f(1.0, 0.0); glVertex2f(n, 0);
glEnd();
The weird thing is that I've been able to get the exact above code to display the texture in a program of its own, yet when I put it into my main program it fails. Could it have to do with the texture matrix? I'm dumbfounded at this point.
Stupidly, I had enabled automatic tex coord generation far away in another part of the code. So if you see one texel's color covering a whole image, that is the likely cause.
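In case someone else hits this: when GL_TEXTURE_GEN_S/GL_TEXTURE_GEN_T are enabled, the generated coordinates replace the explicit glTexCoord2f values, so disable the generation before drawing the quad:
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
/* the glTexCoord2f values in the quad above are now used again */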
