Generating an orthographic MVP matrix for a global shadow map - c

I am making a voxel-based game engine, and I am in the process of implementing shadow maps. The shadows will be directional, and affect the whole scene. The map here is 40x40 units big (from a top-down perspective, since the map is generated from a heightmap).
My plan is to put the directional light in the middle of the scene, and then change the direction of the light as needed. I do not want to recompute the shadow map according to the player's position because that requires rerendering it each frame, and I want to run this engine on very low-power hardware.
Here is my problem: I am trying to calculate the model-view-projection matrix for this world. This is my current attempt, using the cglm math library:
const int light_height = 25, map_width = map_size[0], map_height = map_size[1];
const int half_map_width = map_width >> 1, half_map_height = map_height >> 1;
const float near_clip_dist = 0.01f;
// Matrix calculations
mat4 view, projection, model_view_projection;
glm_look_anyup((vec3) {half_map_width, light_height, half_map_height}, light_dir, view);
glm_ortho(-half_map_width, half_map_width, half_map_height, -half_map_height, near_clip_dist, light_height, projection);
glm_mul(projection, view, model_view_projection);
Depending on the light direction, some parts of the scene are sometimes not in shadow when they should be. I think the problem is with my call to glm_ortho, since the documentation says that the orthographic projection corners should be given in terms of the viewport. In that case, how do I transform world-space coordinates to view-space coordinates to calculate this matrix correctly?
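Here is a sketch of what I think the fix might look like: transform the world-space bounding-box corners of the scene into the light's view space and fit the orthographic volume to their min/max extents (max_terrain_height is a placeholder for the terrain's highest point; needs <cglm/cglm.h> and <float.h>):
mat4 view, projection, model_view_projection;
glm_look_anyup((vec3) {half_map_width, light_height, half_map_height}, light_dir, view);
vec3 lo = { FLT_MAX,  FLT_MAX,  FLT_MAX};
vec3 hi = {-FLT_MAX, -FLT_MAX, -FLT_MAX};
for (int i = 0; i < 8; i++) {
    // World-space corner of the scene's bounding box
    vec3 corner = {
        (i & 1) ? (float) map_width  : 0.0f,
        (i & 2) ? max_terrain_height : 0.0f,
        (i & 4) ? (float) map_height : 0.0f
    };
    vec3 vs;
    glm_mat4_mulv3(view, corner, 1.0f, vs); // corner in light view space
    glm_vec3_minv(lo, vs, lo);
    glm_vec3_maxv(hi, vs, hi);
}
// View space looks down -Z, so near/far come from the negated z extents.
glm_ortho(lo[0], hi[0], lo[1], hi[1], -hi[2], -lo[2], projection);
glm_mul(projection, view, model_view_projection);
Is this the right direction?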

Related

Leaflet JS: Custom 2D projection that uses meters instead of lat,long

I am working on a custom game map. This map is basically a raster image, overlaid with some paths and markers, and I want to use Leaflet to display it.
What I am struggling with is that Leaflet uses latitude and longitude to specify positions, while it uses meters for distances (path lengths, radii of circles, etc.).
This is very understandable when dealing with a spherical world like our Earth, but it complicates things a lot for a custom map, which is flat.
I would like to be able to specify the positions in the same unit as the distances.
Now, by default Leaflet uses a Spherical Mercator projection. According to the Docs, it is possible to define your own projections and coordinate reference systems, but I have been unable to do this thus far.
How would this be possible? Or is there a simpler way?
You should take a look at the simple coordinate reference system (L.CRS.Simple) included with Leaflet:
A simple CRS that maps longitude and latitude into x and y directly. May be used for maps of flat surfaces (e.g. game maps).
You can define the CRS of your L.Map instance upon initialization like so:
new L.Map('myDiv', {
    crs: L.CRS.Simple
});
Some further elaboration: as @ghybs pointed out in the comment below and in the comment to your question, the default spherical Mercator projection (L.CRS.EPSG3857) already works in meters. When you calculate the distance between two coordinates, Leaflet returns meters. Example:
var startCoordinate = new L.LatLng(0, -1);
var endCoordinate = new L.LatLng(0, 1);
var distance = startCoordinate.distanceTo(endCoordinate);
console.log(distance);
The above will print 222638.98158654713 to your console, which is the distance between those two coordinates in meters. The problem is that with a spherical projection, the distance between two coordinates becomes smaller the farther you get from the equator, which is problematic when creating a flat game world. That's why you should use L.CRS.Simple; with it you won't have that problem.
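In other words, with L.CRS.Simple your coordinates are plain Cartesian x and y values, so positions can be specified in the same unit as your distances, which is exactly what a flat game map needs.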

Source engine styled rope rendering

I am creating a 3D graphics engine and one of the requirements is ropes that behave like the ropes in Valve's Source engine.
In the Source engine, a section of rope is a quad that rotates along its direction axis to face the camera; if the section of rope points in the +Z direction, it will rotate along the Z axis so its face points toward the camera's centre position.
At the moment, I have the sections of rope defined, so I can have a nice curved rope, but now I'm trying to construct the matrix that will rotate it along its direction vector.
I already have a matrix for rendering billboard sprites based on this billboarding technique:
Constructing a Billboard Matrix
And at the moment I've been trying to retool it so that the Right, Up, and Forward vectors match the rope segment's direction vector.
My rope is made up of multiple sections, each a rectangle made of two triangles. As I said above, I can get the positions and sections perfect; it's the rotating to face the camera that's causing me a lot of problems.
This is in OpenGL ES2 and written in C.
I have studied Doom 3's beam rendering code in Model_beam.cpp. The method used there is to calculate the vertex offsets from normals rather than using matrices, so I have created a similar technique in my C code, and it sort of works; at least it works as well as I need it to right now.
So for those who are also trying to figure this one out: take the cross product of the rope's direction vector and the vector from the rope's mid-point to the camera position, normalise it, then scale it by half of how wide you want the rope to be. When constructing the vertices, offset each vertex in either the + or - direction of the resulting vector.
Further help would be great though as this is not perfect!
Thank you
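For those who want code, here is a minimal C sketch of the technique I described (an illustration, not the actual engine code):
#include <math.h>
// Offset each vertex of a rope section by +offset or -offset to build
// the camera-facing quad.
void rope_offset(const float start[3], const float end[3],
                 const float cam_pos[3], float width, float offset[3])
{
    float dir[3], mid[3], to_cam[3];
    for (int i = 0; i < 3; i++) {
        dir[i]    = end[i] - start[i];
        mid[i]    = 0.5f * (start[i] + end[i]);
        to_cam[i] = cam_pos[i] - mid[i];
    }
    // Cross product of the rope direction and the mid-point-to-camera
    // vector: perpendicular to both, so the quad faces the camera.
    offset[0] = dir[1] * to_cam[2] - dir[2] * to_cam[1];
    offset[1] = dir[2] * to_cam[0] - dir[0] * to_cam[2];
    offset[2] = dir[0] * to_cam[1] - dir[1] * to_cam[0];
    float len = sqrtf(offset[0] * offset[0] + offset[1] * offset[1]
                    + offset[2] * offset[2]);
    for (int i = 0; i < 3; i++)
        offset[i] *= 0.5f * width / len;
}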
Check out this related Stack Overflow post on billboards in OpenGL. It cites a lighthouse3d tutorial that is a pretty good read. Here are the salient points of the technique:
void billboardCylindricalBegin(
float camX, float camY, float camZ,
float objPosX, float objPosY, float objPosZ) {
float lookAt[3], objToCamProj[3], upAux[3];
float angleCosine;
glPushMatrix();
// objToCamProj is the vector in world coordinates from the
// local origin to the camera projected in the XZ plane
objToCamProj[0] = camX - objPosX;
objToCamProj[1] = 0;
objToCamProj[2] = camZ - objPosZ;
// This is the original lookAt vector for the object
// in world coordinates
lookAt[0] = 0;
lookAt[1] = 0;
lookAt[2] = 1;
// normalize both vectors to get the cosine directly afterwards
mathsNormalize(objToCamProj);
// easy fix to determine whether the angle is negative or positive
// for positive angles upAux will be a vector pointing in the
// positive y direction, otherwise upAux will point downwards
// effectively reversing the rotation.
mathsCrossProduct(upAux,lookAt,objToCamProj);
// compute the angle
angleCosine = mathsInnerProduct(lookAt,objToCamProj);
// perform the rotation. The if statement is used for stability reasons
// if the lookAt and objToCamProj vectors are too close together then
// |angleCosine| could be bigger than 1 due to lack of precision
if ((angleCosine < 0.9999) && (angleCosine > -0.9999))
glRotatef(acos(angleCosine) * 180.0 / M_PI, upAux[0], upAux[1], upAux[2]); // M_PI from <math.h>
}
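Note that the glPushMatrix() at the top of this function expects a matching glPopMatrix() once the billboarded object has been drawn; the tutorial pairs it with a billboardEnd() function.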

OpenGL saving objects for later drawing

I'm making a program using OpenGL with transparent objects in it, so obviously I have to paint those last. Unfortunately I was unaware of this requirement when beginning the project, and now it would be a real pain to reorder the drawing code so that they are painted last.
I'm drawing objects by calling my drawing functions after translating and rotating the scene. There can be multiple translations and rotations before an actual drawing (e.g. first I draw the ground, then translate, then call the drawing of the house, which repeatedly translates and rotates, then calls the drawing of the walls and so on).
So my idea was saving the current modelview matrices in a list instead of painting the transparent objects when I normally would, then when I'm done with the opaque stuff, I iterate through my list and load each matrix and paint each object (a window, to be precise).
I do this for saving a matrix:
GLdouble * modelMatrix = (GLdouble *)malloc(16 * sizeof(GLdouble));
glGetDoublev(GL_MODELVIEW, modelMatrix);
addWindow(modelMatrix); // save it for later painting
And this is the "transparent stuff management" part:
/***************************************************************************
*** TRANSPARENT STUFF MANAGEMENT ******************************************
**************************************************************************/
typedef struct wndLst {
GLdouble * modelMatrix;
struct wndLst * next;
} windowList;
windowList * windows = NULL;
windowList * pWindow;
void addWindow(GLdouble * windowModelMatrix) {
pWindow = (windowList *)malloc(sizeof(windowList));
pWindow->modelMatrix = windowModelMatrix;
pWindow->next = windows;
windows = pWindow;
}
void clearWindows() {
while(windows != NULL) {
pWindow = windows->next;
free(windows->modelMatrix);
free(windows);
windows = pWindow;
}
}
void paintWindows() {
glPushMatrix(); // I've tried putting this and the pop inside the loop, but it didn't help either
pWindow = windows;
while(pWindow != NULL) {
glLoadMatrixd(pWindow->modelMatrix);
Size s;
s.height = 69;
s.width = 49;
s.length = 0.1;
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
glColor4f(COLOR_GLASS, windowAlpha);
drawCuboid(s);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
pWindow = pWindow->next;
}
glPopMatrix();
}
/* INTERFACE
* paint all the components that are not yet painted,
* then clean up.
*/
void flushComponents() {
paintWindows();
clearWindows();
}
/**************************************************************************/
I call flushComponents(); at the very end of my drawings.
The problem is that the windows don't end up in their places; instead I get weird-shaped blue objects randomly appearing and disappearing in the scene.
Am I doing something wrong? Or can such matrix manipulations simply not be used like this? If so, what other method could I use?
Here is the full code if you need it: farm.zip Matrix-saving is at components.c line 1548, management is at line 142. It might not work on Windows without some minor hacking with the includes, which should probably be done in global.h.
Edit: I can only use C code and the glut library to write this program.
Edit 2: The problem is glGetDoublev not returning anything for some reason, it leaves the modelMatrix array intact. Though I still have no idea what causes this, I could make a workaround using bernie's idea.
OpenGL is not a math library. You should not use it for doing matrix calculations. In fact, that part has been completely removed from OpenGL 3. Instead you should rely on a specialized matrix math library, which lets you calculate the matrices for each object with ease, without jumping through the hoops of OpenGL's glGet… API (which was never meant for this kind of abuse). For a good replacement look at GLM: http://glm.g-truc.net/
Try adding glMatrixMode(GL_MODELVIEW) before your paintWindows() method. Perhaps you are not modifying the correct matrix.
The basic idea of your method is fine and is very similar to what I used for transparent objects. I would however advise for performance reasons against reading back the matrices from OpenGL. Instead, you can keep a CPU version of the current modelview matrix and just copy that to your window array.
As for your comment about push and pop matrix, you can safely put it outside the loop like you did.
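To illustrate the CPU-side idea, here is a rough sketch (the wrapper names are hypothetical, not from your project): mirror the current modelview in a plain float array, update it alongside each OpenGL call, and save a copy of it instead of reading back.
// Sketch: keep a CPU copy of the modelview matrix (column-major, like OpenGL).
static float cpuModelview[16]; // set to identity alongside glLoadIdentity()
// Hypothetical wrapper: call this wherever glTranslatef is called.
void myTranslatef(float x, float y, float z) {
    glTranslatef(x, y, z);
    // Post-multiplying a translation only changes the fourth column.
    for (int i = 0; i < 4; i++)
        cpuModelview[12 + i] += cpuModelview[i]     * x
                              + cpuModelview[4 + i] * y
                              + cpuModelview[8 + i] * z;
}
// Save the CPU copy for the deferred window pass; no glGet readback needed.
void addWindowFromCpuCopy(void) {
    GLdouble * m = (GLdouble *)malloc(16 * sizeof(GLdouble));
    for (int i = 0; i < 16; i++)
        m[i] = cpuModelview[i];
    addWindow(m);
}
glRotatef and glPushMatrix/glPopMatrix would need equivalent wrappers (or a small matrix stack) to keep the copy in sync.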
edit
Strictly speaking, your method of rendering transparent objects misses one step: before rendering the list of windows, you should sort them back to front. This allows for overlapping windows to have the correct final color. In general, for 2 windows and a blending function:
blend( blend(scene_color, window0_color, window0_alpha), window1_color, window1_alpha )
!=
blend( blend(scene_color, window1_color, window1_alpha), window0_color, window0_alpha )
However, if all windows consist of the exact same uniform color (e.g. plain texture or no texture) and alpha value, the above equation is true (window0_color==window1_color and window1_alpha==window0_alpha) so you don't need to sort your windows. Also, if it's not possible to have overlapping windows, don't worry about sorting.
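If you do need the sort, here is a sketch, under the assumption that each window's local origin is representative of its depth: elements 12-14 of a column-major modelview matrix are the transformed local origin, so element 14 is its view-space z, and more negative means farther away.
// Comparator for qsort over an array of windowList pointers.
int compareWindowDepth(const void *a, const void *b) {
    GLdouble za = (*(windowList **) a)->modelMatrix[14];
    GLdouble zb = (*(windowList **) b)->modelMatrix[14];
    return (za < zb) ? -1 : (za > zb); // most negative (farthest) first
}
Copy the linked list into a temporary array, qsort() it with this comparator, and paint the windows in the resulting order.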
edit #2
Now you found something interesting with the erroneous matrix readback. Try it out with the following instead (you certainly don't need double precision):
GLfloat * modelMatrix = (GLfloat *)malloc(16 * sizeof(GLfloat));
glGetFloatv(GL_MODELVIEW_MATRIX, modelMatrix); // note: the pname is GL_MODELVIEW_MATRIX, not GL_MODELVIEW
addWindow(modelMatrix); // save it for later painting
If that still doesn't work, you could directly store references to your houses in your transparent object list. During the transparent rendering pass, re-render each house, but only issue actual OpenGL draw calls for the transparent parts. In your code, putWallStdWith would take another boolean argument specifying whether to render the transparent or the opaque geometry. This way, your succession of OpenGL matrix manipulation calls would be redone for the transparent parts instead of being read back using glGetxxx(GL_MODELVIEW_MATRIX).
The correct way to do it however is to do matrix computation on the CPU and simply load complete matrices in OpenGL. That allows you to reuse matrices, control the operation precision, see the actual matrices easily, etc.

What vertex shader code should be used for a pixel shader used for simple 2D SpriteBatch drawing in XNA?

Preface
First of all, why is a vertex shader required for a SilverlightEffect (.slfx file) in Silverlight 5? I'm trying to port a simple 2D XNA game to Silverlight 5 RC, and I would like to use a basic pixel shader. This shader works great in XNA for Windows and Xbox, but I can't get it to compile with Silverlight as a SilverlightEffect.
The MS blog for the Silverlight Toolkit says that "there is no difference between .slfx and .fx", but apparently this isn't quite true -- or at least SpriteBatch is working some magic for us in "regular XNA", and it isn't in "Silverlight XNA".
If I try to directly copy my pixel shader file into a Silverlight project (and change it to the supported "Effect - Silverlight" importer/processor), when I try to compile I see the following error message:
Invalid effect file. Unable to find vertex shader in pass "P0"
Indeed, there isn't a vertex shader in my pixel shader file. I haven't needed one with my other 2D XNA apps since I'm just doing basic SpriteBatch drawing.
I tried adding a vertex shader to my shader file, using Remi Gillig's comment on this Shawn Hargreaves blog post for guidance, but it doesn't quite work. The shader file successfully compiles, and I see some semblance of my game on screen, but it's tiny, twisted, repeated, and all jumbled up. So clearly something's not quite right.
The Real Question
So that brings me to my real question: Since a vertex shader is required, is there a basic vertex shader function that works for simple 2D SpriteBatch drawing?
And if the vertex shader requires world/view/projection matrices as parameters, what values am I supposed to use for a 2D game?
Can any shader pros help? Thanks!
In XNA, SpriteBatch is providing its own vertex shader. Because there is no SpriteBatch in Silverlight 5, you have to provide your own.
You may want to look at the source code for the shader that SpriteBatch uses (it's in SpriteEffect.fx). Here is its vertex shader; it's pretty simple:
float4x4 MatrixTransform; // the only parameter the vertex shader needs
void SpriteVertexShader(inout float4 color : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : SV_Position)
{
    position = mul(position, MatrixTransform);
}
So now you just need the input position and the transformation matrix (and the texture co-ordinates, but those are fairly simple).
The same blog post you linked, towards the end, tells you how to set up the matrix (also here). Here it is again:
Matrix projection = Matrix.CreateOrthographicOffCenter(0, viewport.Width, viewport.Height, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
Matrix transformation = halfPixelOffset * projection;
(Note: I'm not sure if Silverlight actually requires the half-pixel offset to maintain texel alignment (ref). It probably does.)
The tricky bit is that the positioning of the sprite is done outside of the shader, on the CPU. Here's the order of what SpriteBatch does:
Start with four vertices in a rectangle, with (0,0) being the top-left and the texture's (width,height) as the bottom right
Translate backwards by origin
Scale
Rotate
Translate by position
This places the sprite vertices in "client space", and then the transformation matrix transforms those vertices from client space into projection space (which is used for drawing).
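Here is a rough C-style sketch of those steps (illustrative only, not the actual XNA implementation):
#include <math.h>
typedef struct { float x, y; } vec2;
// Build one sprite vertex in client space: translate backwards by origin,
// scale, rotate, then translate by position.
vec2 sprite_vertex(vec2 corner, vec2 origin, vec2 scale,
                   float rotation, vec2 position)
{
    float x = (corner.x - origin.x) * scale.x;
    float y = (corner.y - origin.y) * scale.y;
    float c = cosf(rotation), s = sinf(rotation);
    vec2 out = { position.x + x * c - y * s,
                 position.y + x * s + y * c };
    return out;
}
The transformation matrix from the snippet above then takes these client-space vertices into projection space.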
I've got a highly-inlined version of this transformation in ExEn (SpriteBatch.InternalDraw in SpriteBatchOpenGL.cs) that you could easily adapt for your use.
I came across this question today and I want to add that you can do all of the calculations in the vertex shader itself:
float2 Viewport; //Set to viewport size from application code
void SpriteVertexShader(inout float4 color : COLOR0,
inout float2 texCoord : TEXCOORD0,
inout float4 position : POSITION0)
{
// Half pixel offset for correct texel centering.
position.xy -= 0.5;
// Viewport adjustment.
position.xy = position.xy / Viewport;
position.xy *= float2(2, -2);
position.xy -= float2(1, -1);
}
(I didn't write this; credit goes to the comments in the article mentioned above.)
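For what it's worth, the viewport adjustment above implements the mapping x' = 2x/W - 1 and y' = 1 - 2y/H, taking pixel coordinates (with (0,0) at the top-left) to clip space; it is the same mapping the CreateOrthographicOffCenter matrix from the earlier answer produces.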

With the pose between two images, how do you project a point from one scene into another?

If you have the full relative 3D values of two images looking at the same scene (relative x, y, z), along with the extrinsic/intrinsic parameters between them, how do you project the points from one scene into the other scene in OpenCV?
You can't do that in general. There is an infinite number of 3D points (a line in 3D) that get mapped to one point in image space; in the other image this line won't be mapped to a single point but to a line (see the Wikipedia article on epipolar geometry). You can compute the line that the point has to be on from the fundamental matrix.
If you do have a depth map, reproject the point into 3D using the equations at the top of the OpenCV page on camera calibration, especially the projection equation (it's the only one you need):
s * [u; v; 1] = K * [R | t] * [X; Y; Z; 1]
u and v are your pixel coordinates, K (the first matrix) is your camera matrix (for the image you are looking at currently), [R | t] (the second matrix) contains the extrinsic parameters, and Z you know from your depth map. X and Y are the ones you are looking for; you can solve for them, and then use the same equation to project the point into your other camera. You can probably use the perspectiveTransform function from OpenCV to do the work for you, however I can't tell you off the top of my head how to build the projection matrix.
Let the extrinsic parameters be R and t such that camera 1 is [I | 0] and camera 2 is [R | t]. So all you have to do is rotate and then translate point cloud 1 by R and t to have it in the same coordinate system as point cloud 2.
Let the two cameras have projection matrices
P1 = K1 [ I | 0]
P2 = K2 [ R | t]
and let the depth of a given point x1 (in homogeneous pixel coordinates) in the first camera be Z. The mapping to the second camera is then
x2 = K2*R*inverse(K1)*x1 + K2*t/Z
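To see where this comes from, here is a short derivation in the same notation: backproject x1 to a 3D point in camera 1's frame, map it into camera 2's frame, and project.
X  = Z * inverse(K1) * x1                     (backprojection at depth Z)
x2 ~ K2 * (R * X + t)                         (rigid transform, then projection)
   = Z * K2 * R * inverse(K1) * x1 + K2 * t
Dividing by Z (allowed, since x2 is only defined up to scale) gives the formula above.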
There is no OpenCV function that does this directly. If the relative motion of the cameras is purely rotational, the mapping becomes a homography, so you can use the perspectiveTransform function.
(Here Ki = [fxi 0 cxi; 0 fyi cyi; 0 0 1] is the intrinsic matrix of camera i.)
