Hello, I am trying to write a function Gravity that makes all the black pixels of a quadtree "fall" if they can (i.e. if there is a white pixel below them).
For instance, Gravity(Qt1) gives Qt2:
The Quadtree class:
typedef struct Qtree {
    bool allblack;          /* presumably: this node is an all-black leaf */
    struct Qtree *son[4];   /* the four quadrants; NULL stands for white (see below) */
} Qtree;
I have made some auxiliary functions:
Create a black pixel:
Qtree* create_Black(){
    Qtree* I = malloc(sizeof(Qtree));
    I->allblack = true;   /* assumed: a black leaf is marked this way */
    I->son[0] = I->son[1] = I->son[2] = I->son[3] = NULL;
    return I;
}
Create a white pixel:
Qtree* create_White(){
    return NULL;   /* a white pixel is represented by a NULL pointer */
}
Create a Qtree from its 4 sons:
Qtree* create_Comp(Qtree* i0, Qtree* i1, Qtree* i2, Qtree* i3){
    Qtree* I = malloc(sizeof(Qtree));
    I->allblack = false;   /* assumed: an internal node is not an all-black leaf */
    I->son[0] = i0;
    I->son[1] = i1;
    I->son[2] = i2;
    I->son[3] = i3;
    return I;
}
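For context, a small usage sketch with these helpers (which son index maps to which quadrant is my assumption, not something stated above):
/* A 2x2 image with a single black pixel; the quadrant layout is assumed. */
Qtree *img = create_Comp(create_White(), create_White(),
                         create_Black(), create_White());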
I have already written some code that exchanges black pixels, but I can't obtain the wanted result :(
If someone could help me, that would be great; I can't find any resources on the internet.
I have checked out several solutions here and on other pages for calculating vertex normals. The common solution, which seems to work best for my own implementation (which renders a 3D terrain), is to calculate the face normals, which isn't a problem, and then go over each face, add its normal to the vertices that make it up, and normalize those when done. It seems to work for the most part, but I have some strange graphical problems: mainly, where the light transitions from light to dark, you can tell where the faces are. In the following image you can see this near the lower right side, at the top of the hill.
So I am wondering what is causing this strange pattern. It has something to do with how I am calculating the normals, but I am just not seeing where the issue is. Any help would be appreciated.
The code to calculate the normals is...
// Calculate surface normals
vec3 v1, v2, v3, vec1, vec2;
for (GLuint i = 0; i < terrain->NumFaces; i++) {
    v1 = terrain->Vertices[terrain->Faces[i].vert_indices[0]];
    v2 = terrain->Vertices[terrain->Faces[i].vert_indices[1]];
    vec1 = vector(&v2, &v1);
    v3 = terrain->Vertices[terrain->Faces[i].vert_indices[2]];
    vec2 = vector(&v3, &v1);
    terrain->Faces[i].surface_normal = crossProduct(&vec1, &vec2);
    normalize(&terrain->Faces[i].surface_normal);
}
// Calculate vertex normals...
// Add all the surface normals to their attached vertex normals
for (GLuint currentFace = 0; currentFace < terrain->NumFaces; currentFace++) {
    vec3 *f = &terrain->Faces[currentFace].surface_normal;
    for (GLuint faceVertex = 0; faceVertex < 3; faceVertex++) {
        vec3 *n = &terrain->Normals[terrain->Faces[currentFace].vert_indices[faceVertex]];
        *n = vec3Add(n, f); // adds vector f to n
    }
}
// Go over all vertices and normalize them
for (GLuint currentVertice = 0; currentVertice < terrain->NumVertices; currentVertice++)
    normalize(&terrain->Normals[currentVertice]);
Other utility functions I use in the above code are...
// Returns the vector between two vertices (vp1 - vp2)
vec3 vector(const vec3 *vp1, const vec3 *vp2)
{
    vec3 ret;
    ret.x = vp1->x - vp2->x;
    ret.y = vp1->y - vp2->y;
    ret.z = vp1->z - vp2->z;
    return ret;
}
// Returns the cross product of two vectors
vec3 crossProduct(const vec3 *v1, const vec3 *v2)
{
    vec3 normal;
    normal.x = v1->y * v2->z - v1->z * v2->y;
    normal.y = v1->z * v2->x - v1->x * v2->z;
    normal.z = v1->x * v2->y - v1->y * v2->x;
    return normal;
}
// Returns the length of a vector
float vec3Length(vec3 *v1) {
    return sqrt(v1->x * v1->x + v1->y * v1->y + v1->z * v1->z);
}
// Normalizes a vector in place
void normalize(vec3 *v1)
{
    float len = vec3Length(v1);
    if (len < EPSILON) return;
    float inv = 1.0f / len;
    v1->x *= inv;
    v1->y *= inv;
    v1->z *= inv;
}
// Returns the sum of vectors v1 and v2
vec3 vec3Add(vec3 *v1, vec3 *v2)
{
    vec3 v;
    v.x = v1->x + v2->x;
    v.y = v1->y + v2->y;
    v.z = v1->z + v2->z;
    return v;
}
One problem with using the average of the face normals to compute the vertex normals is that the computed normals can be biased. For example, imagine that there is a ridge that runs north/south. One vertex on the peak of the ridge has three polygons on the east side, and two on the west. The vertex normal will be angled to the east. This can cause darker lighting at that point when the illumination is coming from the west.
A possible improvement would be to apply a weight to each face's normal, proportional to the angle that corner of the face has at that vertex, but this will not get rid of all of the bias.
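For illustration, here is a minimal sketch of that angle-weighted accumulation, reusing the helpers from the question (the corner-angle weight, the clamp, and the loop variable names are my additions, not code from the post):
// Sketch: weight each face normal by the corner angle at each of its vertices.
for (GLuint f = 0; f < terrain->NumFaces; f++) {
    for (GLuint c = 0; c < 3; c++) {
        GLuint i0 = terrain->Faces[f].vert_indices[c];
        GLuint i1 = terrain->Faces[f].vert_indices[(c + 1) % 3];
        GLuint i2 = terrain->Faces[f].vert_indices[(c + 2) % 3];
        // Unit edge vectors leaving the corner vertex
        vec3 e1 = vector(&terrain->Vertices[i1], &terrain->Vertices[i0]);
        vec3 e2 = vector(&terrain->Vertices[i2], &terrain->Vertices[i0]);
        normalize(&e1);
        normalize(&e2);
        float d = e1.x * e2.x + e1.y * e2.y + e1.z * e2.z;
        if (d > 1.0f) d = 1.0f;        // clamp against rounding error
        if (d < -1.0f) d = -1.0f;
        float angle = acosf(d);        // corner angle at this vertex
        vec3 w = terrain->Faces[f].surface_normal;
        w.x *= angle; w.y *= angle; w.z *= angle;
        vec3 *n = &terrain->Normals[i0];
        *n = vec3Add(n, &w);           // weighted accumulation
    }
}
// ...followed by the same normalization pass over all vertex normals.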
After experimenting with different solutions, I discovered that my own normal generation in this post actually works extremely well; it's virtually instant and wasn't the problem. The problem seemed to be in using a large texture for the terrain. I changed the terrain to use a tiled texture which wouldn't get stretched so much, and the graphical issue seems to have gone away. It was a relief that the normal generation I posted works well, as other solutions were horribly slow. This is what I ended up with, and as you can see, there are no graphical problems. Plus it looks better with more detail. I wanted to post what I found out in case anyone else sees the same problem.
I'm making my first SDL2 game. I have a texture where I draw my game, but after each rendering the texture is blanked; I need my original texture to stay unmodified.
I did this easily with surfaces, but it was too slow.
I draw random artefacts on this texture that disappear over time; I use SDL_RenderFillRect to shade the texture.
Does anyone know how to do this?
EDIT: Here's the code of the texture rendering
int gv_render(void) // This is called every 10 ms
{
    gv_lock;
    int nexttimeout;
    // Clear the screen
    SDL_SetRenderTarget(renderer, NULL);
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_NONE);
    SDL_RenderClear(renderer);
    // Render view-specific stuff
    SDL_SetRenderTarget(renderer, gv_screen); // gv_screen is my screen texture
    switch (player_view) { // I have multiple views
        case pvsound: nexttimeout = wave_render(); break; // <- THE 2ND FUNCTION (below)
    }
    SDL_RenderPresent(renderer);
    // Final screen rendering
    SDL_SetRenderTarget(renderer, NULL);
    SDL_RenderCopy(renderer, gv_screen, NULL, NULL);
    gv_unlock;
    return nexttimeout;
}
int wave_render(void) // I (will) have multiple view modes
{
    game_wave *currwave = firstwave; // firstwave is the first element of a linked list
    game_wave *prevwave = firstwave;
    map_block *block;
    map_block *oldblock;
    gv_lock;
    // Load the old texture
    SDL_RenderCopy(renderer, gv_screen, NULL, NULL);
    SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_BLEND);
    // Darken the screen
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 8);
    SDL_RenderFillRect(renderer, NULL);
    SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_NONE);
    // Now I traverse my list
    while (currwave) {
        // Apply block info
        /* skipped non-graphics */
        // Draw the wave point
        uint8_t light; // Waves have a strength ("strong") that decreases with time
        if (currwave->strong >= 1.0)
            light = 255; // Over 1 it doesn't decrease
        else
            light = currwave->strong * 255; // Now they aren't fully white
        SDL_SetRenderDrawColor(renderer, light, light, light, 255);
        SDL_RenderDrawPoint(renderer, currwave->xpos, currwave->ypos);
        // Switch to the next wave
        prevwave = currwave; // There is also code here in the skipped part
        currwave = currwave->next;
    }
    SDL_RenderPresent(renderer);
    gv_unlock;
    return 10;
}
This seems to be complicated. As David C. Rankin says, the SDL renderer is faster than surfaces but is more or less write-only (SDL_RenderReadPixels and SDL_UpdateTexture could do the job in a non-realtime case).
I have changed my method: I now use a linked list of pixel coordinates with entry points in a 256-entry array.
My source code is now:
struct game_wave_point {
    struct game_wave_point *next;
    int x;
    int y;
};
typedef struct game_wave_point game_wave_point;

game_wave_point *graph_waves[256] = {NULL}; /* all 256 entries start out NULL */

int wave_render(void)
{
    game_wave *currwave = firstwave;
    // Perform the darkening
    int i;
    uint8_t light;
    for (i = 1; i <= 255; i++)
        graph_waves[i-1] = graph_waves[i];
    graph_waves[255] = NULL;
    // Remove invisible points
    game_wave_point *newpoint;
    while (graph_waves[0]) {
        newpoint = graph_waves[0];
        graph_waves[0] = newpoint->next;
        free(newpoint);
    }
    // Wave heartbeat...
    while (currwave) {
        /* blablabla */
        // Add the drawing point
        newpoint = malloc(sizeof(game_wave_point));
        newpoint->next = graph_waves[light];
        newpoint->x = currwave->xpos * pixelsperblock;
        newpoint->y = currwave->ypos * pixelsperblock;
        if ((newpoint->x < 0) || (newpoint->y < 0))
            free(newpoint);
        else
            graph_waves[light] = newpoint;
        /* blablabla */
    }
    // Now perform the drawing
    for (i = 1; i <= 255; i++) {
        newpoint = graph_waves[i];
        SDL_SetRenderDrawColor(renderer, i, i, i, 255);
        SDL_GetRenderDrawColor(renderer, &light, NULL, NULL, NULL); // reads back the color just set
        while (newpoint) {
            SDL_RenderDrawPoint(renderer, newpoint->x, newpoint->y);
            newpoint = newpoint->next;
        }
    }
    return 10;
}
This works well on my computer (progressive slowdown appears only in a case that I will never reach).
The next optimization may be done with Linux mremap(2) and similar; this would allow building a simple flat array that works with SDL_RenderDrawPoints, without the slowness of realloc() on a big array.
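As a rough, illustrative sketch of that flat-array idea (all the names here are mine, and growth is done with amortized realloc doubling; the mremap trick itself is not shown):
#include <SDL.h>
#include <stdlib.h>

/* One growable batch of points per brightness level. */
typedef struct {
    SDL_Point *pts;
    int count;
    int cap;
} point_batch;

static point_batch batches[256];

static void batch_push(point_batch *b, int x, int y)
{
    if (b->count == b->cap) {                 /* amortized doubling */
        b->cap = b->cap ? b->cap * 2 : 64;
        b->pts = realloc(b->pts, b->cap * sizeof(SDL_Point));
    }
    b->pts[b->count].x = x;
    b->pts[b->count].y = y;
    b->count++;
}

/* Drawing pass: one call per brightness level instead of one per point. */
static void draw_batches(SDL_Renderer *renderer)
{
    for (int i = 1; i <= 255; i++) {
        if (batches[i].count == 0)
            continue;
        SDL_SetRenderDrawColor(renderer, i, i, i, 255);
        SDL_RenderDrawPoints(renderer, batches[i].pts, batches[i].count);
        batches[i].count = 0;                 /* reuse the buffer next frame */
    }
}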
I'm writing a graphical interface in SDL2, but if I create the renderer with the flags SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC I get a notable slowdown compared with the flag SDL_RENDERER_SOFTWARE, which I think shouldn't be possible.
I can't use SDL_RENDERER_SOFTWARE because I need VSYNC enabled to avoid tearing, and I need double buffering for that.
Actually, I realized that the bottleneck is the function SDL_CreateTextureFromSurface().
Since my code is pretty big, I'll try to explain it instead of pasting everything here:
1. I initialize SDL and create an SDL_Surface named screen_surface with SDL_CreateRGBSurface, the same size as my window, where I'll blit every other surface.
2. I draw a big square in the middle of that surface with SDL_FillRect, and draw a rack inside that square using SDL_FillRect twice to draw two squares, one 2 pixels bigger than the other, simulating an empty square (I know I could do the same with SDL_RenderDrawRect, but I think it is more optimal to draw on a surface than on the renderer), for every cell of the rack until I have 4096 cells.
3. Using SDL_TTF, I write info in each cell: I use TTF_RenderUTF8_Blended to get a surface for each cell, and SDL_BlitSurface to merge those surfaces with screen_surface.
4. I go through the big square illuminating the cells that are being checked; for that I use SDL_FillRect to draw a little square that travels through the rack.
5. Finally I use SDL_CreateTextureFromSurface to turn screen_surface into screen_texture, followed by SDL_RenderCopy and SDL_RenderPresent.
These five steps are inside the main while loop, along with the event management, and, following the recommendations in the SDL API, I call SDL_RenderClear each loop to redraw everything again.
Having said all this: as I said at the beginning, I realized that the bottleneck is step 5, independent of the other steps, because if I move steps 2 and 3 before the while, leaving inside the while only the rack illumination on a black window (since I'm not drawing anything else), I get the same slowdown. Only if I manage to draw things without using textures does the speed increase notably.
Here are my questions:
Why could this be happening? Theoretically, shouldn't the accelerated renderer with double buffering be faster than the software renderer?
Is there any way to simulate vsync with the software renderer?
Can I render a surface without building a texture?
PS: I have read a bunch of posts around the internet, so let me answer some typical questions: I reuse screen_surface; I can't reuse the surface that TTF returns; I create and destroy the texture each loop (because I think I cannot reuse it).
Here is my code:
int main(int ac, char **av)
{
    t_data data;

    init_data(&data);        /* initialize SDL */
    ft_ini_font(&data);      /* initialize TTF */
    ft_ini_interface(&data);
    main_loop(&data);
    ft_quit_graphics(&data); /* close SDL and TTF */
    return (0);
}
void main_loop(t_data *data)
{
    while (data->running)
    {
        events(data);
        /* Insert the rack into screen_surface */
        SDL_BlitSurface(data->rack_surface, NULL, data->screen_surface,
            &(SDL_Rect){data->rack_x, data->rack_y, data->rack_w, data->rack_h});
        ft_write_info(data);
        ft_illum_celd(data);
        set_back_to_front(data);
    }
}
void ft_ini_interface(t_data *data)
{
    data->screen_surface = SDL_CreateRGBSurface(0, data->w, data->h, 32,
        RED_MASK, GREEN_MASK, BLUE_MASK, ALPHA_MASK);
    ...
    /* stuff to calculate the rack dimensions */
    ...
    data->rack_surface = generate_rack_surface(data);
}
SDL_Surface *generate_rack_surface(t_data *data)
{
    int i;
    int j;
    int k;

    data->rack_surface = SDL_CreateRGBSurface(0, data->rack_w, data->rack_h, 32,
        RED_MASK, GREEN_MASK, BLUE_MASK, ALPHA_MASK);
    SDL_FillRect(data->rack_surface, NULL, 0x3D3D33FF);
    ...
    /* init i, j, k to draw the rack properly */
    ...
    while (all cells not drawn)
    {
        if (k && !i)
        {
            data->celd_y += data->celd_h - 1;
            data->celd_x = 0;
            k--;
        }
        SDL_FillRect(data->rack_surface, &(SDL_Rect){data->celd_x - 1, data->celd_y - 1,
            data->celd_w + 2, data->celd_h + 2}, 0x1C1C15FF);
        SDL_FillRect(data->rack_surface, &(SDL_Rect){data->celd_x, data->celd_y,
            data->celd_w, data->celd_h}, 0x3D3D33FF);
        data->celd_x += data->celd_w - 1;
        i--;
    }
    return (data->rack_surface);
}
void ft_write_info(t_data *data)
{
    SDL_Color color;
    SDL_Surface *surf_byte;
    char *info;

    while (all info not written)
    {
        color = take_color();   /* take the color of the info (only 4 ifs) */
        info = take_info(data); /* take info from a source, using malloc */
        surf_byte = TTF_RenderUTF8_Blended(data->font, info, color);
        ...
        /* stuff to take the correct position in the rack */
        ...
        SDL_BlitSurface(surf_byte, NULL, data->screen_surface,
            &(SDL_Rect){data->info_x, data->info_y, data->celd_w, data->celd_h});
        SDL_FreeSurface(surf_byte);
        free(info);
    }
}
void ft_illum_celd(t_data *data)
{
    int color;
    SDL_Rect illum;

    illum = next_illum(data); /* returns an SDL_Rect with the position of the info being read */
    SDL_FillRect(data->screen_surface, &illum, color);
}
void set_back_to_front(t_data *data)
{
    SDL_Texture *texture;

    texture = SDL_CreateTextureFromSurface(data->Renderer, data->screen_surface);
    SDL_RenderCopy(data->Renderer, texture, NULL, NULL);
    SDL_DestroyTexture(texture);
    SDL_RenderPresent(data->Renderer);
    SDL_RenderClear(data->Renderer);
}
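A note on the PS above: the texture can in fact be reused. One way, sketched here under the assumption that screen_surface and the texture share the same pixel format (the RGBA8888 constant and the screen_texture field are illustrative), is to create one streaming texture at startup and refresh it each frame with SDL_UpdateTexture instead of calling SDL_CreateTextureFromSurface/SDL_DestroyTexture every loop:
/* Created once, e.g. in ft_ini_interface (format must match screen_surface). */
data->screen_texture = SDL_CreateTexture(data->Renderer,
    SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING,
    data->w, data->h);

/* Each frame, instead of creating and destroying a texture: */
SDL_UpdateTexture(data->screen_texture, NULL,
    data->screen_surface->pixels, data->screen_surface->pitch);
SDL_RenderCopy(data->Renderer, data->screen_texture, NULL, NULL);
SDL_RenderPresent(data->Renderer);
SDL_RenderClear(data->Renderer);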
SOLVED: I'm not really sure how though... thanks for all your help guys.
I tried glDisable(GL_CULL_FACE); but the mesh is still not visible.
Basically I'm trying to draw a mesh (made from verts, normals, and texture coords) in OpenGL, using a display list. The mesh is in .obj format (exported from 3ds Max 2013).
The problem is that the mesh is not visible.
To draw the display list I'm just using glCallLists (list, 1);
I have verified that I can draw things to the screen by drawing a point in the center of the screen and that works fine.
Could it be possible that the camera is positioned inside the mesh? If so is there an OpenGL state that I could enable to allow me to see the inside of a set of verts?
I know that the data I have is all valid, verified by printing each vert, normal and texture coord to a file before adding it to the display list, it looks valid.
I have done no glTranslatef or anything like that; my projection matrix is set up like this:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (45.0, (float)1024/(float)768, -9999, 9999);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
If you want to have a look at the .obj file, here it is: http://pastebin.com/PpG3vG5e
This is how I create the display list:
list = glGenLists (1);
glNewList (list, GL_COMPILE);
glBegin (GL_TRIANGLES);
for (i = 0; i < data.face_count; i++)
{
    // first vert
    normal[0][0] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[0]]->e[0];
    normal[0][1] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[0]]->e[1];
    normal[0][2] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[0]]->e[2];
    tex[0][0] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[0]]->e[0];
    tex[0][1] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[0]]->e[1];
    tex[0][2] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[0]]->e[2];
    vert[0][0] = (float)data.vertex_list[data.face_list[i]->vertex_index[0]]->e[0];
    vert[0][1] = (float)data.vertex_list[data.face_list[i]->vertex_index[0]]->e[1];
    vert[0][2] = (float)data.vertex_list[data.face_list[i]->vertex_index[0]]->e[2];
    // second vert
    normal[1][0] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[1]]->e[0];
    normal[1][1] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[1]]->e[1];
    normal[1][2] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[1]]->e[2];
    tex[1][0] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[1]]->e[0];
    tex[1][1] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[1]]->e[1];
    tex[1][2] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[1]]->e[2];
    vert[1][0] = (float)data.vertex_list[data.face_list[i]->vertex_index[1]]->e[0];
    vert[1][1] = (float)data.vertex_list[data.face_list[i]->vertex_index[1]]->e[1];
    vert[1][2] = (float)data.vertex_list[data.face_list[i]->vertex_index[1]]->e[2];
    // third vert
    normal[2][0] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[2]]->e[0];
    normal[2][1] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[2]]->e[1];
    normal[2][2] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[2]]->e[2];
    tex[2][0] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[2]]->e[0];
    tex[2][1] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[2]]->e[1];
    tex[2][2] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[2]]->e[2];
    vert[2][0] = (float)data.vertex_list[data.face_list[i]->vertex_index[2]]->e[0];
    vert[2][1] = (float)data.vertex_list[data.face_list[i]->vertex_index[2]]->e[1];
    vert[2][2] = (float)data.vertex_list[data.face_list[i]->vertex_index[2]]->e[2];
    for (j = 0; j < 3; j++)
    {
        glNormal3f (normal[j][0], normal[j][1], normal[j][2]);
        glTexCoord3f (tex[j][0], tex[j][1], tex[j][2]);
        glVertex3f (vert[j][0], vert[j][1], vert[j][2]);
    }
}
glEnd ();
glEndList ();
EDIT:
I've tried things like:
glTranslatef (0, 0, 5);
glCallList (mesh);
glTranslatef (0, 0, 0);
but they don't work either :(
EDIT:
@datenwolf
Here is the code I use to draw it:
Draw_Begin ();
Mdl_Draw (list, 0.0f, 0.0f, 0.0f);
Draw_End ();
This
gluPerspective (45.0, (float)1024/(float)768, -9999, 9999);
is wrong. In a perspective projection, the near and far plane distances must both have the same sign, i.e. both positive or both negative. Also, the absolute value of the near plane distance must be smaller than the absolute value of the far plane distance, and the near plane distance must be nonzero. In mathematical notation:
sgn(near) = sgn(far) and 0 < |near| < |far|
Usually both near and far are chosen positive. Also, as a rule of thumb, the near clipping plane should be chosen as far away as possible. The far plane can be placed at infinity (exploiting some of the properties of homogeneous matrices), but usually it is placed as close as possible to max out depth buffer resolution.
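Applied to the call from the question, a valid setup could look like this (the 1.0 and 1000.0 plane distances are only illustrative):
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (45.0, (float)1024/(float)768, 1.0, 1000.0); /* 0 < near < far */
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();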
I have a program that generates a heightmap and then displays it as a mesh with OpenGL. When I try to add lighting, it ends up with weird square shapes covering the mesh. They are more noticeable in some areas than others, but are always there.
I was using a quad mesh, but nothing changed after switching to a triangle mesh. I've used at least three different methods to calculate the vertex normals, all with the same effect. I was doing the lighting manually with shaders, but nothing changes when using the built-in OpenGL lighting system.
My latest normal-generating code (faces is an array of indices into verts, the vertex array):
int i;
for (i = 0; i < NINDEX; i += 3) {
    vec v[3];
    v[0] = verts[faces[i + 0]];
    v[1] = verts[faces[i + 1]];
    v[2] = verts[faces[i + 2]];
    vec v1 = vec_sub(v[1], v[0]);
    vec v2 = vec_sub(v[2], v[0]);
    vec n = vec_norm(vec_cross(v2, v1));
    norms[faces[i + 0]] = vec_add(norms[faces[i + 0]], n);
    norms[faces[i + 1]] = vec_add(norms[faces[i + 1]], n);
    norms[faces[i + 2]] = vec_add(norms[faces[i + 2]], n);
}
for (i = 0; i < NVERTS; i++) {
    norms[i] = vec_norm(norms[i]);
}
That isn't the only code I've used, though, so I doubt it is the cause of the problem.
I draw the mesh with:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, norms);
glDrawElements(GL_TRIANGLES, NINDEX, GL_UNSIGNED_SHORT, faces);
And I'm not currently using any shaders.
What could be causing this?
EDIT: A more comprehensive set of screenshots:
Wireframe
Flat shading, OpenGL lighting
Smooth shading, OpenGL lighting
Lighting done in shader
For the last one, the shader code is
Vertex:
varying vec3 lightvec, normal;
void main() {
    vec3 lightpos = vec3(0, 0, 100);
    vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
    normal = gl_NormalMatrix * gl_Normal;
    lightvec = normalize(lightpos - v);
    gl_Position = ftransform();
}
Fragment:
varying vec3 lightvec, normal;
void main(void) {
    float l = dot(lightvec, normal);
    gl_FragColor = vec4(l, l, l, 1);
}
You need to either normalize the normal in the fragment shader, like so:
varying vec3 lightvec, normal;
void main(void) {
    vec3 normalNormed = normalize(normal);
    float l = dot(lightvec, normalNormed);
    gl_FragColor = vec4(l, l, l, 1);
}
This can be expensive, though. What will also work in this case, with directional lights, is to use vertex lighting. So calculate the light value in the vertex shader:
varying float lightIntensity;
void main() {
    vec3 lightpos = vec3(0, 0, 100);
    vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 normal = gl_NormalMatrix * gl_Normal;   // locals now; only the intensity is passed on
    vec3 lightvec = normalize(lightpos - v);
    lightIntensity = dot(normal, lightvec);
    gl_Position = ftransform();
}
and use it in the fragment shader (the varying name must match the one declared in the vertex shader):
varying float lightIntensity;
void main(void) {
    float l = lightIntensity;
    gl_FragColor = vec4(l, l, l, 1);
}
I hope this fixes it, let me know if it doesn't.
EDIT: Here's a small diagram that explains what is most likely happening.
EDIT2:
If that doesn't help, add more triangles. Interpolate the values of your heightmap and add some vertices in between.
Alternatively, try changing your tessellation scheme. For example, a mesh of equilateral triangles like so could make the artifacts less prominent.
You'll have to do some interpolation on your heightmap.
Otherwise I have no idea... Good luck!
I don't have a definitive answer for the non-shader versions, but I wanted to add that if you're doing per-pixel lighting in your fragment shader, you should probably normalize both the normal and lightvec inside the fragment shader.
If you don't do this, they may not be unit length (a linear interpolation between two normalized vectors is not necessarily normalized). This could explain some of the artifacts you see in the shader version, as the magnitude of the dot product would vary as a function of the distance from the vertices, which kind of looks like what you're seeing.
EDIT: Another thought, are you doing any non-uniform scaling (different x,y,z) of the mesh when rendering the non-shader version? If you scale it, then you need to either modify the normals by the inverse scale factor, or set glEnable(GL_NORMALIZE). See here for more:
http://www.lighthouse3d.com/tutorials/glsl-tutorial/normalization-issues/
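For the fixed-function path, that renormalization is a single state switch; a small sketch using the draw call from the question (the scale factors are illustrative):
glEnable(GL_NORMALIZE);       /* GL renormalizes normals after the modelview transform */
glScalef(1.0f, 1.0f, 2.0f);   /* illustrative non-uniform scale */
glDrawElements(GL_TRIANGLES, NINDEX, GL_UNSIGNED_SHORT, faces);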