What vertex shader code should be used for a pixel shader used for simple 2D SpriteBatch drawing in XNA?

Preface
First of all, why is a vertex shader required for a SilverlightEffect (.slfx file) in Silverlight 5? I'm trying to port a simple 2D XNA game to Silverlight 5 RC, and I would like to use a basic pixel shader. This shader works great in XNA for Windows and Xbox, but I can't get it to compile with Silverlight as a SilverlightEffect.
The MS blog for the Silverlight Toolkit says that "there is no difference between .slfx and .fx", but apparently this isn't quite true -- or at least SpriteBatch is working some magic for us in "regular XNA", and it isn't in "Silverlight XNA".
If I try to directly copy my pixel shader file into a Silverlight project (and change it to the supported "Effect - Silverlight" importer/processor), when I try to compile I see the following error message:
Invalid effect file. Unable to find vertex shader in pass "P0"
Indeed, there isn't a vertex shader in my pixel shader file. I haven't needed one with my other 2D XNA apps since I'm just doing basic SpriteBatch drawing.
I tried adding a vertex shader to my shader file, using Remi Gillig's comment on this Shawn Hargreaves blog post for guidance, but it doesn't quite work. The shader file successfully compiles, and I see some semblance of my game on screen, but it's tiny, twisted, repeated, and all jumbled up. So clearly something's not quite right.
The Real Question
So that brings me to my real question: Since a vertex shader is required, is there a basic vertex shader function that works for simple 2D SpriteBatch drawing?
And if the vertex shader requires world/view/projection matrices as parameters, what values am I supposed to use for a 2D game?
Can any shader pros help? Thanks!

In XNA, SpriteBatch is providing its own vertex shader. Because there is no SpriteBatch in Silverlight 5, you have to provide your own.
You may want to look at the source code for the shader that SpriteBatch uses (it's in SpriteEffect.fx). Here is its vertex shader, it's pretty simple:
float4x4 MatrixTransform; // effect parameter declared alongside in SpriteEffect.fx

void SpriteVertexShader(inout float4 color    : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : SV_Position)
{
    position = mul(position, MatrixTransform);
}
So now you just need the input position and the transformation matrix (and the texture co-ordinates, but those are fairly simple).
The same blog post you linked, towards the end, tells you how to set up the matrix (also here). Here it is again:
Matrix projection = Matrix.CreateOrthographicOffCenter(0, viewport.Width, viewport.Height, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
Matrix transformation = halfPixelOffset * projection;
(Note: I'm not sure if Silverlight actually requires the half-pixel offset to maintain texel alignment (ref). It probably does.)
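Written out in C for reference, the combined matrix those two lines produce for a w-by-h pixel viewport looks like this (a sketch assuming XNA's row-major, row-vector convention; sprite_transform is a hypothetical helper name, not XNA code):

/* Combined (halfPixelOffset * projection) matrix from the two XNA calls
 * above.  Row-major storage with row vectors, as XNA uses; upload the
 * result as the MatrixTransform parameter. */
void sprite_transform(float w, float h, float out[16])
{
    const float m[16] = {
         2.0f / w,          0.0f,             0.0f, 0.0f,
         0.0f,             -2.0f / h,         0.0f, 0.0f,
         0.0f,              0.0f,            -1.0f, 0.0f,
        -1.0f - 1.0f / w,   1.0f + 1.0f / h,  0.0f, 1.0f,
    };
    for (int i = 0; i < 16; i++)
        out[i] = m[i];
}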
The tricky bit is that the positioning of the sprite is done outside of the shader, on the CPU. Here's the order of what SpriteBatch does:
1. Start with four vertices in a rectangle, with (0,0) at the top-left and the texture's (width, height) at the bottom-right
2. Translate backwards by the origin
3. Scale
4. Rotate
5. Translate by the position
This places the sprite vertices in "client space", and then the transformation matrix transforms those vertices from client space into projection space (which is used for drawing).
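As a rough illustration of those steps (not SpriteBatch's actual source; the names here are made up), the per-corner math in C looks like this:

#include <math.h>

/* Places one sprite corner per the list above: start from a corner of the
 * (0,0)-(texW,texH) rectangle, back out the origin, scale, rotate, then
 * translate by the sprite position. */
void place_corner(float cornerX, float cornerY,   /* (0,0) .. (texW,texH) */
                  float originX, float originY,
                  float scaleX,  float scaleY,
                  float rotation,                 /* radians */
                  float posX,    float posY,
                  float *outX,   float *outY)
{
    float x = (cornerX - originX) * scaleX;       /* origin, then scale */
    float y = (cornerY - originY) * scaleY;
    float c = cosf(rotation), s = sinf(rotation); /* rotate */
    *outX = x * c - y * s + posX;                 /* translate by position */
    *outY = x * s + y * c + posY;
}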
I've got a highly-inlined version of this transformation in ExEn (SpriteBatch.InternalDraw in SpriteBatchOpenGL.cs) that you could easily adapt for your use.

I came across this question today and I want to add that you can do all of the calculations in the vertex shader itself:
float2 Viewport; // Set to viewport size from application code

void SpriteVertexShader(inout float4 color    : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : POSITION0)
{
    // Half pixel offset for correct texel centering.
    position.xy -= 0.5;

    // Viewport adjustment.
    position.xy = position.xy / Viewport;
    position.xy *= float2(2, -2);
    position.xy -= float2(1, -1);
}
(I didn't write this; credit goes to the comments on the article mentioned above.)
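For clarity, here is the same arithmetic in C, showing how a pixel position ends up in clip space (pixel_to_clip is just an illustrative name):

/* Maps a pixel-space position to clip space for a vpW-by-vpH viewport,
 * mirroring the shader above. */
void pixel_to_clip(float x, float y, float vpW, float vpH,
                   float *clipX, float *clipY)
{
    x -= 0.5f;  y -= 0.5f;              /* half-pixel offset */
    *clipX = (x / vpW) *  2.0f - 1.0f;  /* 0..vpW  ->  -1..+1 */
    *clipY = (y / vpH) * -2.0f + 1.0f;  /* 0..vpH  ->  +1..-1 (y flipped) */
}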

Related

Generating an orthographic MVP matrix for a global shadow map

I am making a voxel-based game engine, and I am in the process of implementing shadow maps. The shadows will be directional, and affect the whole scene. The map here is 40x40 units big (from a top-down perspective, since the map is generated from a heightmap).
My plan is to put the directional light in the middle of the scene, and then change the direction of the light as needed. I do not want to recompute the shadow map according to the player's position because that requires rerendering it each frame, and I want to run this engine on very low-power hardware.
Here is my problem: I am trying to calculate the model-view-projection matrix for this world. Here is my current work, using the cglm math library:
const int light_height = 25, map_width = map_size[0], map_height = map_size[1];
const int half_map_width = map_width >> 1, half_map_height = map_height >> 1;
const float near_clip_dist = 0.01f;
// Matrix calculations
mat4 view, projection, model_view_projection;
glm_look_anyup((vec3) {half_map_width, light_height, half_map_height}, light_dir, view);
glm_ortho(-half_map_width, half_map_width, half_map_height, -half_map_height, near_clip_dist, light_height, projection);
glm_mul(projection, view, model_view_projection);
Depending on the light direction, some parts of the scene are sometimes not in shadow when they should be. I think the problem is with my call to glm_ortho, since the documentation says that the orthographic projection corners should be given in terms of the viewport. In that case, how do I transform world-space coordinates to view-space coordinates so I can calculate this matrix correctly?
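One common way to do that transformation, sketched here with cglm (this assumes the scene bounds are available as a world-space AABB; fit_light_ortho is a hypothetical helper, not code from the question):

#include <cglm/cglm.h>
#include <float.h>

/* Transform the 8 corners of the scene's world-space AABB into light view
 * space and take the min/max, so glm_ortho receives view-space extents
 * instead of world-space ones. */
void fit_light_ortho(mat4 view, vec3 aabb_min, vec3 aabb_max, mat4 projection)
{
    vec3 lo = { FLT_MAX,  FLT_MAX,  FLT_MAX};
    vec3 hi = {-FLT_MAX, -FLT_MAX, -FLT_MAX};

    for (int i = 0; i < 8; i++) {
        vec4 corner = {
            (i & 1) ? aabb_max[0] : aabb_min[0],
            (i & 2) ? aabb_max[1] : aabb_min[1],
            (i & 4) ? aabb_max[2] : aabb_min[2],
            1.0f
        };
        vec4 viewspace;
        glm_mat4_mulv(view, corner, viewspace);
        for (int axis = 0; axis < 3; axis++) {
            lo[axis] = glm_min(lo[axis], viewspace[axis]);
            hi[axis] = glm_max(hi[axis], viewspace[axis]);
        }
    }
    /* View space looks down -Z, so near/far come from -hi[2] and -lo[2]. */
    glm_ortho(lo[0], hi[0], lo[1], hi[1], -hi[2], -lo[2], projection);
}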

Can a GLSL fragment shader run without a framebuffer and similar inconveniences?

Repeating the above: can a GLSL fragment shader run without a framebuffer and any rasterization stage?
This perfect answer gives an insight into where to start with SSBOs. The answer links to an OpenGL ARB extension that has boilerplate code. The code works for me once adapted to run as an OpenGL compute program. But I really don't get how to do it with a fragment program, and without any buffers other than the SSBO.
The extension's code clearly has fragment shader source without any pixel operations, only SSBO ones.
#version 430

layout(binding = 0) uniform atomic_uint fragmentCounter; // declarations filled in for completeness
uniform uint maxFragmentCount;

struct FragmentData { ivec2 position; vec4 color; };
layout(std430, binding = 0) buffer FragmentBuffer { FragmentData fragments[]; };

in vec4 color;

void main()
{
    uint fragmentNumber = atomicCounterIncrement(fragmentCounter);
    if (fragmentNumber < maxFragmentCount) {
        fragments[fragmentNumber].position = ivec2(gl_FragCoord.xy);
        fragments[fragmentNumber].color = color;
    }
}
And later in the C program file:
// Generate, bind, and specify the data store for the atomic counter.
glGenBuffers(1, &counterBuffer);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, counterBuffer);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);

// Reset the atomic counter to zero, then draw stuff. This will record
// values into the shader storage buffer as fragments are generated.
GLuint zero = 0;
glBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &zero);
glUseProgram(program);
glDrawElements(GL_TRIANGLES, ...);
As per my setup, I do not have any output by means of OpenGL pixels, and I wish it to stay that way. Is that possible, or am I missing something?
P.S. The above setup gives me an invalid framebuffer operation error after glDrawElements, immediately followed by glFinish.
Update 21.03.2021
There are framebuffers with no attachments. The only state you have to set on one is its width and height. That is roughly the direction to head in, if one wishes to minimize setup.
The downside of the aforementioned is that it still requires some geometry to be fed to the rasterization stage, to start the shader stages. But as a plus, you get geometry rasterization, whether you wish it or not.
If I have time, I'll leave some code as a reminder for myself.
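As a placeholder until then, here is a minimal sketch of such a setup (assuming GL 4.3 or ARB_framebuffer_no_attachments; width and height are whatever raster size is wanted):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* no attachments: only the default raster size is specified */
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH,  width);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, height);
glViewport(0, 0, width, height);
/* then draw as usual; fragment output goes only to the SSBO and counter */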
Can a GLSL fragment shader run without a framebuffer and similar inconveniences?
No. Fragment shaders need the step that invokes them, and the stage that produces fragments is called rasterization.
From the Khronos wiki:
"A Fragment Shader is the Shader stage that will process a Fragment generated by the Rasterization into a set of colors and a single depth value."
The fragment shader is the OpenGL pipeline stage after a primitive is rasterized.
And rasterization needs a render step to produce fragments. Rendering has to go somewhere, and in OpenGL it goes to a framebuffer. So without a framebuffer you cannot render, and hence OpenGL cannot produce fragments.
Setup of a framebuffer can be minimized by using framebuffers with no attachments, but one still needs to supply geometry and render it to invoke the fragment shaders.
Fragment shaders can read and write arbitrary SSBOs, but the usage is not similar to compute shaders: fragment shaders are invoked once per produced fragment, whereas compute shaders can be dispatched, so to speak, arbitrarily.
Many thanks to all the commenters who pointed me to the (by now obvious) reason why fragment shaders need a render operation.

Source engine styled rope rendering

I am creating a 3D graphics engine and one of the requirements is ropes that behave like in Valve's source engine.
So in the Source engine, a section of rope is a quad that rotates along its direction axis to face the camera: if the section of rope points in the +Z direction, it rotates around the Z axis so that its face points at the camera's centre position.
At the moment I have the sections of rope defined, so I can have a nice curved rope, but now I'm trying to construct the matrix that will rotate it around its direction vector.
I already have a matrix for rendering billboard sprites based on this billboarding technique:
Constructing a Billboard Matrix
At the moment I've been trying to retool it so that the Right, Up, and Forward vectors match the rope segment's direction vector.
My rope is made up of multiple sections, and each section is a rectangle made up of two triangles. As I said above, I can get the positions and sections perfect; it's the rotation to face the camera that's causing me a lot of problems.
This is in OpenGL ES2 and written in C.
I have studied Doom 3's beam rendering code in Model_beam.cpp. The method used there is to calculate the offset based on normals rather than using matrices, so I have created a similar technique in my C code and it sort of works, at least as much as I need it to right now.
So for those who are also trying to figure this one out: take the cross product of the rope segment's direction and the vector from the segment's mid-point to the camera position, normalise it, and multiply it by half of how wide you want the rope to be; then, when constructing the vertices, offset each vertex in either the + or - direction of the resulting vector.
Further help would be great though, as this is not perfect!
Thank you
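For anyone following along, here is a C sketch of that offset calculation (rope_offset is an illustrative name, not Doom 3's actual code):

#include <math.h>

/* The quad's half-width vector is the normalized cross product of the
 * segment's direction and the direction from its midpoint to the camera. */
static void rope_offset(const float dir[3], const float mid[3],
                        const float cam[3], float half_width, float out[3])
{
    float to_cam[3] = { cam[0] - mid[0], cam[1] - mid[1], cam[2] - mid[2] };
    /* cross(dir, to_cam) is perpendicular to both, i.e. it points across
     * the rope as seen from the camera */
    out[0] = dir[1] * to_cam[2] - dir[2] * to_cam[1];
    out[1] = dir[2] * to_cam[0] - dir[0] * to_cam[2];
    out[2] = dir[0] * to_cam[1] - dir[1] * to_cam[0];
    float len = sqrtf(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
    if (len > 0.0f) {
        out[0] *= half_width / len;
        out[1] *= half_width / len;
        out[2] *= half_width / len;
    }
    /* each pair of quad vertices is then mid +/- out, plus the
     * along-rope offset */
}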
Check out this related stackoverflow post on billboards in OpenGL. It cites a lighthouse3d tutorial that is a pretty good read. Here are the salient points of the technique:
void billboardCylindricalBegin(
    float camX, float camY, float camZ,
    float objPosX, float objPosY, float objPosZ) {

    float lookAt[3], objToCamProj[3], upAux[3];
    float angleCosine;

    glPushMatrix();

    // objToCamProj is the vector in world coordinates from the
    // local origin to the camera projected in the XZ plane
    objToCamProj[0] = camX - objPosX;
    objToCamProj[1] = 0;
    objToCamProj[2] = camZ - objPosZ;

    // This is the original lookAt vector for the object
    // in world coordinates
    lookAt[0] = 0;
    lookAt[1] = 0;
    lookAt[2] = 1;

    // normalize both vectors to get the cosine directly afterwards
    mathsNormalize(objToCamProj);

    // easy fix to determine whether the angle is negative or positive:
    // for positive angles upAux will be a vector pointing in the
    // positive y direction, otherwise upAux will point downwards,
    // effectively reversing the rotation.
    mathsCrossProduct(upAux, lookAt, objToCamProj);

    // compute the angle
    angleCosine = mathsInnerProduct(lookAt, objToCamProj);

    // perform the rotation. The if statement is used for stability reasons:
    // if the lookAt and objToCamProj vectors are too close together then
    // |angleCosine| could be bigger than 1 due to lack of precision
    if ((angleCosine < 0.9999) && (angleCosine > -0.9999))
        glRotatef(acos(angleCosine) * 180 / 3.14, upAux[0], upAux[1], upAux[2]);
}
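Note that the helper leaves a matrix pushed on the stack, so every call must be paired with a glPopMatrix() after drawing. A usage sketch (drawRopeSegment is hypothetical):

void drawBillboardedSegment(float camX, float camY, float camZ,
                            float segX, float segY, float segZ)
{
    billboardCylindricalBegin(camX, camY, camZ, segX, segY, segZ);
    drawRopeSegment();  /* emits the segment's two triangles */
    glPopMatrix();      /* matches the glPushMatrix() inside the helper */
}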

OpenGL saving objects for later drawing

I'm making a program using OpenGL with transparent objects in it, so obviously I have to paint those last. Unfortunately, I was unaware of this requirement when I began the project, and now it would be a real pain to reorder the code so that they are painted last.
I'm drawing objects by calling my drawing functions after translating and rotating the scene. There can be multiple translations and rotations before an actual drawing (e.g. first I draw the ground, then translate, then call the drawing of the house, which repeatedly translates and rotates, then calls the drawing of the walls and so on).
So my idea was saving the current modelview matrices in a list instead of painting the transparent objects when I normally would, then when I'm done with the opaque stuff, I iterate through my list and load each matrix and paint each object (a window, to be precise).
I do this for saving a matrix:
GLdouble * modelMatrix = (GLdouble *)malloc(16 * sizeof(GLdouble));
glGetDoublev(GL_MODELVIEW, modelMatrix);
addWindow(modelMatrix); // save it for later painting
And this is the "transparent stuff management" part:
/***************************************************************************
 *** TRANSPARENT STUFF MANAGEMENT ******************************************
 ***************************************************************************/
typedef struct wndLst {
    GLdouble * modelMatrix;
    struct wndLst * next;
} windowList;

windowList * windows = NULL;
windowList * pWindow;

void addWindow(GLdouble * windowModelMatrix) {
    pWindow = (windowList *)malloc(sizeof(windowList));
    pWindow->modelMatrix = windowModelMatrix;
    pWindow->next = windows;
    windows = pWindow;
}

void clearWindows() {
    while (windows != NULL) {
        pWindow = windows->next;
        free(windows->modelMatrix);
        free(windows);
        windows = pWindow;
    }
}

void paintWindows() {
    glPushMatrix(); // I've tried putting this and the pop inside the loop, but it didn't help either
    pWindow = windows;
    while (pWindow != NULL) {
        glLoadMatrixd(pWindow->modelMatrix);

        Size s;
        s.height = 69;
        s.width = 49;
        s.length = 0.1;

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);
        glColor4f(COLOR_GLASS, windowAlpha);
        drawCuboid(s);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);

        pWindow = pWindow->next;
    }
    glPopMatrix();
}

/* INTERFACE
 * paint all the components that are not yet painted,
 * then clean up.
 */
void flushComponents() {
    paintWindows();
    clearWindows();
}
/**************************************************************************/
I call flushComponents(); at the very end of my drawings.
The problem is that the windows don't end up in their places; instead I get weird-shaped blue objects randomly appearing and disappearing in my scene.
Am I doing something wrong? Can matrix manipulations even be used like this? If not, what other method could I use to do this?
Here is the full code if you need it: farm.zip Matrix-saving is at components.c line 1548, management is at line 142. It might not work on Windows without some minor hacking with the includes, which should probably be done in global.h.
Edit: I can only use C code and the glut library to write this program.
Edit 2: The problem is glGetDoublev not returning anything for some reason; it leaves the modelMatrix array intact. Though I still have no idea what causes this, I could make a workaround using bernie's idea. (For later readers: GL_MODELVIEW is the glMatrixMode enum, not a glGet pname; the matrix is queried with GL_MODELVIEW_MATRIX, and an invalid pname raises GL_INVALID_ENUM and leaves the output array untouched.)
OpenGL is not a math library. You should not use it for doing matrix calculations; in fact, that part has been completely removed from OpenGL 3. Instead you should rely on a specialized matrix math library, which lets you calculate the matrices for each object with ease, without jumping through the hoops of OpenGL's glGet… API (which was never meant for this kind of abuse). For a good replacement look at GLM: http://glm.g-truc.net/
Try adding glMatrixMode(GL_MODELVIEW) before your paintWindows() method. Perhaps you are not modifying the correct matrix.
The basic idea of your method is fine and is very similar to what I used for transparent objects. I would however advise for performance reasons against reading back the matrices from OpenGL. Instead, you can keep a CPU version of the current modelview matrix and just copy that to your window array.
As for your comment about push and pop matrix, you can safely put it outside the loop like you did.
edit
Strictly speaking, your method of rendering transparent objects misses one step: before rendering the list of windows, you should sort them back to front. This allows for overlapping windows to have the correct final color. In general, for 2 windows and a blending function:
blend( blend(scene_color, window0_color, window0_alpha), window1_color, window1_alpha )
!=
blend( blend(scene_color, window1_color, window1_alpha), window0_color, window0_alpha )
However, if all windows consist of the exact same uniform color (e.g. plain texture or no texture) and alpha value, the two orderings give the same result (window0_color == window1_color and window0_alpha == window1_alpha), so you don't need to sort your windows. Also, if it's not possible to have overlapping windows, don't worry about sorting.
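If you do need the sort, here is a sketch building on the windowList from the question, assuming the list is first flattened into an array (window_depth_cmp is an illustrative name):

#include <stdlib.h>
#include <GL/gl.h>

/* Element [14] of a column-major modelview matrix is the view-space z of
 * the object's local origin, so sorting ascending puts the farthest window
 * (most negative z) first. */
static int window_depth_cmp(const void * a, const void * b)
{
    GLdouble za = (*(windowList * const *)a)->modelMatrix[14];
    GLdouble zb = (*(windowList * const *)b)->modelMatrix[14];
    return (za > zb) - (za < zb);
}

/* after flattening the list into windowList * windowArray[windowCount]: */
/* qsort(windowArray, windowCount, sizeof(windowList *), window_depth_cmp); */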
edit #2
Now you found something interesting with the erroneous matrix readback. Try it out with the following instead (you certainly don't need double precision, and note the pname must be GL_MODELVIEW_MATRIX rather than GL_MODELVIEW; addWindow and the list would switch to GLfloat accordingly):
GLfloat * modelMatrix = (GLfloat *)malloc(16 * sizeof(GLfloat));
glGetFloatv(GL_MODELVIEW_MATRIX, modelMatrix);
addWindow(modelMatrix); // save it for later painting
If that still doesn't work, you could directly store references to your houses in your transparent-object list. During the transparent rendering pass, re-render each house, but only issue actual OpenGL draw calls for the transparent parts. In your code, putWallStdWith would take another boolean argument specifying whether to render the transparent or the opaque geometry. This way, your succession of OpenGL matrix manipulation calls would be redone for the transparent parts instead of being read back with glGetxxx.
The correct way to do it however is to do matrix computation on the CPU and simply load complete matrices in OpenGL. That allows you to reuse matrices, control the operation precision, see the actual matrices easily, etc.
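A minimal sketch of that approach, with only translation shown (Mat4 and the helper names are illustrative, not from a particular library):

#include <string.h>
#include <GL/gl.h>

typedef struct { float m[16]; } Mat4;   /* column-major, like OpenGL */

static void mat4_identity(Mat4 * a)
{
    static const float I[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    memcpy(a->m, I, sizeof I);
}

/* a = a * T(x,y,z): the same effect glTranslatef has on the top matrix */
static void mat4_translate(Mat4 * a, float x, float y, float z)
{
    for (int r = 0; r < 4; r++)
        a->m[12 + r] += x * a->m[0 + r] + y * a->m[4 + r] + z * a->m[8 + r];
}

/* usage: build the matrix on the CPU, keep a copy for the window list,
 * and hand the finished matrix to OpenGL in one call */
void example(void)
{
    Mat4 mv;
    mat4_identity(&mv);
    mat4_translate(&mv, 10.0f, 0.0f, 5.0f);
    glLoadMatrixf(mv.m);
}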

OpenGL tex-mapping: Whole object is a single texel's color

I've been trying to get a HUD texture to display for a simulator for a while now, without success.
First I bind the texture like this:
glGenTextures(1,&hudTexObj);
gHud = getPPM("textures/purplenebula/hud.ppm",&n,&m,&s);
glBindTexture(GL_TEXTURE_2D,hudTexObj);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
//glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,n,m,0,GL_RGB,GL_UNSIGNED_INT, gHud);
And then I attempt to map it onto a quad, which results in the whole quad being a single brown color, where I want it to use all the texels. Here's how I map it:
glBindTexture(GL_TEXTURE_2D, hudTexObj);
glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(0, 0);
    glTexCoord2f(0.0, 1.0); glVertex2f(0, m);
    glTexCoord2f(1.0, 1.0); glVertex2f(n, m);
    glTexCoord2f(1.0, 0.0); glVertex2f(n, 0);
glEnd();
The weird thing is that I've been able to get the exact above code to display the texture in a program of its own, yet when I put it into my main program it fails. Could it have to do with the texture matrix? I'm dumbfounded at this point.
Stupidly, I had enabled automatic texture coordinate generation far away in another part of the code. So if you see a single texel's color covering the whole image, that is a likely cause.
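If you hit the same thing, a sketch of the fix is simply to disable texgen before drawing the HUD, so the glTexCoord2f calls take effect again:

/* turn off automatic texture coordinate generation for the HUD pass */
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);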
