SDL2 CreateTextureFromSurface slowdown - C

I'm building a graphical interface in SDL2, but if I create the renderer with the flags SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC I get a noticeable slowdown compared to SDL_RENDERER_SOFTWARE, which I think shouldn't be possible.
I can't use SDL_RENDERER_SOFTWARE because I need VSYNC enabled to avoid tearing, and I need double buffering for that.
I have realized that the bottleneck is the function SDL_CreateTextureFromSurface().
Since my code is pretty big, I'll try to explain it instead of pasting everything here:
1. Initialize SDL and create an SDL_Surface named screen_surface with SDL_CreateRGBSurface, the same size as my window, onto which I'll blit every other surface.
2. Draw a big square in the middle of that surface with SDL_FillRect, and draw a rack inside it by calling SDL_FillRect twice per cell: two squares, one 2 pixels bigger than the other, to simulate an empty cell (I know I could do the same with SDL_RenderDrawRect, but I think drawing into a surface is cheaper than drawing through the renderer). This is repeated for every cell of the rack until I have 4096 cells.
3. Using SDL_ttf, write info into each cell: I call TTF_RenderUTF8_Blended to get a surface for each cell and use SDL_BlitSurface to merge those surfaces into screen_surface.
4. Walk through the big square highlighting the cells that are being checked; for that I use SDL_FillRect to draw a little square that travels through the rack.
5. Finally, use SDL_CreateTextureFromSurface to turn screen_surface into screen_texture, followed by SDL_RenderCopy and SDL_RenderPresent.
These five steps run inside the main while loop, together with the event handling, and following the recommendations in the SDL API I call SDL_RenderClear each loop so everything is redrawn every frame.
Having said all that, as I said at the beginning, I realized that the bottleneck is step 5, independently of the other steps: if I move steps 2 and 3 out of the while loop, leaving inside the loop only the rack highlight being drawn on a black window (since nothing else is drawn), I get the same slowdown. Only when I manage to draw things without using textures does the speed increase noticeably.
Here are my questions:
Why could this be happening? Theoretically, shouldn't using double buffering be faster than the software renderer?
Is there any way to simulate vsync with the software renderer?
Can I render a surface without building a texture?
PS: I have read a bunch of posts around the internet, so let me answer some typical questions up front: I do reuse screen_surface; I can't reuse the surfaces that TTF returns; and I'm creating and destroying the texture each loop (because I think I can't reuse it).
Here is my code:
int main(int ac, char **av)
{
    t_data data;

    init_data(&data);        /* initialize SDL */
    ft_ini_font(&data);      /* initialize TTF */
    ft_ini_interface(&data);
    main_loop(&data);
    ft_quit_graphics(&data); /* close SDL and TTF */
    return (0);
}
void main_loop(t_data *data)
{
    while (data->running)
    {
        events(data);
        /* blit the rack into the screen_surface */
        SDL_BlitSurface(data->rack_surface, NULL, data->screen_surface,
            &(SDL_Rect){data->rack_x, data->rack_y, data->rack_w, data->rack_h});
        ft_write_info(data);
        ft_illum_celd(data);
        set_back_to_front(data);
    }
}
void ft_ini_interface(t_data *data)
{
    data->screen_surface = SDL_CreateRGBSurface(0, data->w, data->h, 32,
        RED_MASK, GREEN_MASK, BLUE_MASK, ALPHA_MASK);
    ...
    /* stuff to calculate the rack dimensions */
    ...
    generate_rack_surface(data); /* fills data->rack_surface */
}
void generate_rack_surface(t_data *data)
{
    int i;
    int j;
    int k;

    data->rack_surface = SDL_CreateRGBSurface(0, data->rack_w, data->rack_h, 32,
        RED_MASK, GREEN_MASK, BLUE_MASK, ALPHA_MASK);
    SDL_FillRect(data->rack_surface, NULL, 0x3D3D33FF);
    ...
    /* init i, j, k to draw the rack properly */
    ...
    while (all cells not drawn)
    {
        if (k && !i)
        {
            data->celd_y += data->celd_h - 1;
            data->celd_x = 0;
            k--;
        }
        /* outer square 2 pixels bigger, then inner square: simulates an empty cell */
        SDL_FillRect(data->rack_surface, &(SDL_Rect){data->celd_x - 1, data->celd_y - 1,
            data->celd_w + 2, data->celd_h + 2}, 0x1C1C15FF);
        SDL_FillRect(data->rack_surface, &(SDL_Rect){data->celd_x, data->celd_y,
            data->celd_w, data->celd_h}, 0x3D3D33FF);
        data->celd_x += data->celd_w - 1;
        i--;
    }
}
void ft_write_info(t_data *data)
{
    SDL_Color   color;
    char        *info;
    SDL_Surface *surf_byte;

    while (all info not written)
    {
        color = take_color();   /* take the color of the info (only 4 ifs) */
        info = take_info(data); /* take info from a source, uses malloc */
        surf_byte = TTF_RenderUTF8_Blended(data->font, info, color);
        ...
        /* stuff to take the correct position in the rack */
        ...
        SDL_BlitSurface(surf_byte, NULL, data->screen_surface,
            &(SDL_Rect){data->info_x, data->info_y, data->celd_w, data->celd_h});
        SDL_FreeSurface(surf_byte);
        free(info);
    }
}
void ft_illum_celd(t_data *data)
{
    int      color; /* highlight color, set elsewhere */
    SDL_Rect illum;

    illum = next_illum(data); /* returns an SDL_Rect with the position of the info being read */
    SDL_FillRect(data->screen_surface, &illum, color);
}
void set_back_to_front(t_data *data)
{
    SDL_Texture *texture;

    texture = SDL_CreateTextureFromSurface(data->Renderer, data->screen_surface);
    SDL_RenderCopy(data->Renderer, texture, NULL, NULL);
    SDL_DestroyTexture(texture);
    SDL_RenderPresent(data->Renderer);
    SDL_RenderClear(data->Renderer);
}
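For reference, this is what I mean by "reusing the texture": creating one streaming texture up front and updating it every frame with SDL_UpdateTexture instead of creating and destroying it. I haven't verified that this works; the screen_texture field, the pixel format and where the texture gets created are my assumptions:
/* Sketch only: assumes data->screen_texture exists and the surface format is RGBA8888. */
/* Created once, e.g. in ft_ini_interface(): */
data->screen_texture = SDL_CreateTexture(data->Renderer, SDL_PIXELFORMAT_RGBA8888,
    SDL_TEXTUREACCESS_STREAMING, data->w, data->h);

/* Then every frame, upload the surface pixels into the existing texture: */
void set_back_to_front(t_data *data)
{
    SDL_UpdateTexture(data->screen_texture, NULL,
        data->screen_surface->pixels, data->screen_surface->pitch);
    SDL_RenderCopy(data->Renderer, data->screen_texture, NULL, NULL);
    SDL_RenderPresent(data->Renderer);
    SDL_RenderClear(data->Renderer);
}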

Related

Reuse texture SDL

I'm making my first SDL2 game. I have a texture where I draw my game, but after each render the texture is blanked, and I need my original texture to stay unmodified.
I easily did this with surfaces, but it was too slow.
I draw random artifacts on this texture that disappear over time; I use SDL_RenderFillRect to shade the texture.
Does anyone know how to do this?
EDIT: Here's the code for the texture rendering:
int gv_render(void) // This is called every 10ms
{
gv_lock;
int nexttimeout;
// Clear the screen
SDL_SetRenderTarget(renderer,NULL);
SDL_SetRenderDrawColor(renderer,0,0,0,255);
SDL_SetRenderDrawBlendMode(renderer,SDL_BLENDMODE_NONE);
SDL_RenderClear(renderer);
// Render view specific stuff
SDL_SetRenderTarget(renderer,gv_screen); // gv_screen is my screen texture
switch (player_view) { // I have multiple views
case pvsound:nexttimeout=wave_render();break; // <- THE 2ND FUNCTION \/
};
SDL_RenderPresent(renderer);
// Final screen rendering
SDL_SetRenderTarget(renderer,NULL);
SDL_RenderCopy(renderer,gv_screen,NULL,NULL);
gv_unlock;
return nexttimeout;
};
int wave_render(void) // I (will) have multiple view modes
{
game_wave *currwave = firstwave; // First wave is the first element of a linked list
game_wave *prevwave = firstwave;
map_block* block;
map_block* oldblock;
gv_lock;
// Load the old texture
SDL_RenderCopy(renderer,gv_screen,NULL,NULL);
SDL_SetRenderDrawBlendMode(renderer,SDL_BLENDMODE_BLEND);
// Dark the screen
SDL_SetRenderDrawColor(renderer,0,0,0,8);
SDL_RenderFillRect(renderer,NULL);
SDL_SetRenderDrawBlendMode(renderer,SDL_BLENDMODE_NONE);
// Now I travel my list
while (currwave) {
// Apply block info
/* skipped non graphics */
// Draw the wave point
uint8_t light; // Wave have a strong that decrease with time
if (currwave->strong>=1.0)
light = 255; // Above 1 it doesn't decrease
else light = currwave->strong*255; // Otherwise they aren't fully white
SDL_SetRenderDrawColor(renderer,light,light,light,255);
SDL_RenderDrawPoint(renderer, currwave->xpos,currwave->ypos);
// Switch to next wave
prevwave = currwave; // There also code is the skipped part
currwave = currwave->next;
};
SDL_RenderPresent(renderer);
gv_unlock;
return 10;
};
This seems to be complicated. As David C. Rankin says, the SDL renderer is faster than surfaces but more or less write-only (SDL_RenderReadPixels and SDL_UpdateTexture could do the job in non-realtime cases).
I have changed my method: I now use linked lists of pixel coordinates with entry points in a 256-item array.
My source code is now:
struct game_wave_point {
    struct game_wave_point *next;
    int x;
    int y;
};
typedef struct game_wave_point game_wave_point;

game_wave_point* graph_waves[256] = {NULL}; /* all 256 entries start as NULL */

int wave_render(void)
{
game_wave *currwave = firstwave;
// Perform the darkening
int i;
uint8_t light;
for (i=1;i<=255;i++)
graph_waves[i-1] = graph_waves[i];
graph_waves[255] = NULL;
// Remove unvisible point
game_wave_point* newpoint;
while (graph_waves[0]) {
newpoint = graph_waves[0];
graph_waves[0] = newpoint->next;
free(newpoint);
};
// Wave heartbeat...
while (currwave) {
/* blablabla */
// Add the drawing point
newpoint = malloc(sizeof(game_wave_point));
newpoint->next = graph_waves[light];
newpoint->x = currwave->xpos*pixelsperblock;
newpoint->y = currwave->ypos*pixelsperblock;
if ((newpoint->x < 0) || (newpoint->y < 0))
free(newpoint);
else graph_waves[light] = newpoint;
/* blablabla */
};
// Now perform the drawing
for (i=1;i<=255;i++) {
newpoint = graph_waves[i];
SDL_SetRenderDrawColor(renderer,i,i,i,255);
SDL_GetRenderDrawColor(renderer,&light,NULL,NULL,NULL);
while (newpoint) {
SDL_RenderDrawPoint(renderer,newpoint->x,newpoint->y);
newpoint = newpoint->next;
};
};
return 10;
};
This works well on my computer (progressive slowdown only appears in a case that I will never reach).
The next optimization may be done with Linux mremap(2) and similar: that would allow building a flat array that works with SDL_RenderDrawPoints, without the slowness of realloc() on a big array.
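As a rough sketch of that direction (assuming a plain fixed-capacity array per brightness level rather than the mremap trick), each brightness bucket can be drawn with a single SDL_RenderDrawPoints call:
#include <SDL2/SDL.h>

#define MAX_POINTS_PER_LEVEL 4096   /* assumed capacity, tune to the real workload */

typedef struct {
    SDL_Point points[MAX_POINTS_PER_LEVEL];
    int count;
} point_bucket;

static point_bucket buckets[256];   /* one bucket per brightness level */

/* Draw every bucket with one batched call instead of one call per point. */
static void draw_buckets(SDL_Renderer *renderer)
{
    int i;

    for (i = 1; i < 256; i++) {
        if (buckets[i].count == 0)
            continue;
        SDL_SetRenderDrawColor(renderer, (Uint8)i, (Uint8)i, (Uint8)i, 255);
        SDL_RenderDrawPoints(renderer, buckets[i].points, buckets[i].count);
    }
}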

SDL TTF - Line wrapping & changing height of wrapped lines?

I've been programming a small text adventure game using SDL2 recently and have come across an issue with line wrapping. I am using TTF_RenderText_Blended_Wrapped() to render my strings, and it gives me nicely wrapped lines. But the line height is a problem: the lines seem squished together, and letters like 'jqg' overlap with letters like 'tli'.
Does anyone know if there is a way to change the line height? TTF_RenderText_Blended_Wrapped() still isn't in the documentation for SDL_ttf. Should I just write my own text-wrapping function?
The font size is 16 pt, the style is TTF_STYLE_BOLD, and the font can be found here. The code below should reproduce the error; there is almost no error checking though, so use at your own risk.
#include <stdio.h>
#include <SDL2/SDL.h>
#include <SDL2/SDL_ttf.h>
int main(int argc, char *argv[]) {
SDL_Window *gui;
SDL_Surface *screen, *text;
SDL_Event ev;
TTF_Font *font;
int running = 1;
const char *SAMPLETEXT = "This is an example of my problem, for most lines it works fine, albeit it looks a bit tight. But for any letters that \"hang\" below the line, there is a chance of overlapping with the letters below. This isn\'t the end of the world, but I think it makes for rather cluttered text.\n\nNotice the p and k on line 1/2, and the g/t on 2/3 and 3/4.";
// init SDL/TTF
SDL_Init(SDL_INIT_EVERYTHING);
TTF_Init();
// Open and set up font
font = TTF_OpenFont("Anonymous.ttf", 16);
if(font == NULL) {
fprintf(stderr, "Error: font could not be opened.\n");
return 0;
}
TTF_SetFontStyle(font, TTF_STYLE_BOLD);
// Create GUI
gui = SDL_CreateWindow("Example", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
640, 480, SDL_WINDOW_SHOWN);
// Grab GUI surface
screen = SDL_GetWindowSurface(gui);
// Clear screen black
SDL_FillRect(screen, NULL, 0);
// Draw some text to screen
SDL_Color color = {0xff, 0xff, 0xff, 0xff};
text = TTF_RenderText_Blended_Wrapped(font, SAMPLETEXT, color, screen->w);
SDL_BlitSurface(text, NULL, screen, NULL);
while(running) { // Main loop
while(SDL_PollEvent(&ev)) {
switch(ev.type){
case SDL_QUIT:
running = 0;
break;
}
}
SDL_UpdateWindowSurface(gui); // Refresh window
SDL_Delay(20); // Delay loop
}
// Destroy resources and quit
TTF_CloseFont(font);
TTF_Quit();
SDL_FreeSurface(text);
SDL_DestroyWindow(gui);
SDL_Quit();
return 0;
}
The easiest solution is to find a font that doesn't have that issue; the FreeMono font, for example, has more spacing.
From looking at the source code for TTF_RenderUTF8_Blended_Wrapped, which is called by TTF_RenderText_Blended_Wrapped, there is no configurable way to set the spacing between the lines. See const int lineSpace = 2; on line 1893.
However, even though lineSpace is set to 2, it is not used when computing the address of each pixel to render, which effectively sets the line spacing to 0. I reported this as a bug in the SDL_ttf library: https://bugzilla.libsdl.org/show_bug.cgi?id=3679
I was able to fix the issue in SDL_ttf 2.0.14 with the following change:
--- a/SDL_ttf.c Fri Jan 27 17:54:34 2017 -0800
+++ b/SDL_ttf.c Thu Jun 22 16:54:38 2017 -0700
@@ -1996,7 +1996,7 @@
return(NULL);
}
- rowSize = textbuf->pitch/4 * height;
+ rowSize = textbuf->pitch/4 * (height + lineSpace);
/* Adding bound checking to avoid all kinds of memory corruption errors
that may occur. */
With the above patch applied, your example program shows the correct line spacing with the Anonymous font.
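If patching SDL_ttf is not an option, another route is the one the question itself mentions: wrap the text yourself and control the spacing manually. A rough sketch (assuming the text has already been split into lines and that extra_spacing is chosen by the caller):
/* Sketch: render pre-split lines one by one and stack them with explicit spacing. */
void blit_lines(TTF_Font *font, char **lines, int nlines, SDL_Color color,
                SDL_Surface *screen, int x, int y, int extra_spacing)
{
    int step = TTF_FontLineSkip(font) + extra_spacing; /* recommended line height plus margin */
    int i;

    for (i = 0; i < nlines; i++) {
        SDL_Surface *line = TTF_RenderUTF8_Blended(font, lines[i], color);
        if (line) {
            SDL_BlitSurface(line, NULL, screen, &(SDL_Rect){x, y + i * step, 0, 0});
            SDL_FreeSurface(line);
        }
    }
}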

Simple C Program that creates 2 X11 windows

I want to create two windows on Linux that I'll later draw into from a separate thread. I currently have a non-deterministic bug where the second window I create sometimes doesn't get created (no errors, though).
Here is the code.
#include <X11/Xlib.h>

static void create_x_window(Display *display, Window *win, int width, int height)
{
int screen_num = DefaultScreen(display);
unsigned long background = WhitePixel(display, screen_num);
unsigned long border = BlackPixel(display, screen_num);
*win = XCreateSimpleWindow(display, DefaultRootWindow(display), /* display, parent */
0,0, /* x, y */
width, height, /* width, height */
2, border, /* border width & colour */
background); /* background colour */
XSelectInput(display, *win, ButtonPressMask|StructureNotifyMask);
XMapWindow(display, *win);
}
int main(void) {
Display *local_display;
Window self_win, remote_win;
XEvent self_event, remote_event;
XInitThreads(); // prevent threaded XIO errors
local_display = XOpenDisplay(":0.0");
create_x_window(local_display, &remote_win, 640,480);
// this line flushes buffer and blocks so that the window doesn't crash for a reason i dont know yet
XNextEvent(local_display, &remote_event);
create_x_window(local_display, &self_win, 320, 240);
// this line flushes buffer and blocks so that the window doesn't crash for a reason i dont know yet
XNextEvent(local_display, &self_event);
while (1) {
}
return 0;
}
I don't really care about capturing input in the windows, but I found a tutorial that used XSelectInput and XNextEvent (in an event loop), and I was having trouble making this work without either.
It's not a bug, it's a feature. You left out the event loop.
Although you cleverly called XNextEvent twice, the X protocol is asynchronous, so the server may still be setting up the actual window while you call XNextEvent, and there may be nothing to process yet.
Tutorial here.
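A minimal event loop for this program could look roughly like the sketch below (my assumption: it only reacts to the events already selected with XSelectInput, i.e. button presses and structure notifications):
XEvent event;
int running = 1;

while (running) {
    XNextEvent(local_display, &event); /* blocks until the next event arrives */
    switch (event.type) {
    case MapNotify:   /* a window has finished being mapped */
        break;
    case ButtonPress: /* quit on any mouse click, just for the example */
        running = 0;
        break;
    default:
        break;
    }
}
XCloseDisplay(local_display);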

Generating N x N grid to display in center of window using OpenGL/C

I need to create an NxN game board determined by user input (i.e. if they enter 6, it'll be a 6x6 board, etc.) and build a tic-tac-toe-like game. I'm just starting out and was trying to build the board, but I can only get it to create a 5x5 board in the upper right-hand corner, and I'd like it to fill the whole window. Here's some of the code so far:
#include <stdio.h> //for text output
#include <stdlib.h> //for atof() function
#include <GL/glut.h> //GL Utility Toolkit
//to hold for size and tokens for gameboard
float grid, tokens;
void init(void);
/*Function to build the board*/
void buildGrid(float size) {
glBegin(GL_LINES);
for(int i = 0; i < size; i++) {
glVertex2f(0.0, i/5.0f);
glVertex2f(1.0, i/5.0f);
glVertex2f(i/5.0f, 0.0);
glVertex2f(i/5.0f, 1.0);
}
glEnd();
}
/*Callback function for display */
void ourDisplay(void) {
glClear(GL_COLOR_BUFFER_BIT);
buildGrid(grid);
glFlush();
}
int main(int argc, char* argv[]) {
/*User arguments*/
if (argc != 3) {
printf("You are missing parts of the argument!");
printf("\n Need game size and how many in a row to win by\n");
exit(1);
}
/*Take arguments and convert to floats*/
grid = atof(argv[1]);
tokens = atof(argv[2]);
/* Settupp OpenGl and Window */
glutInit(&argc, argv);
/* Set up display */
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(800, 800); // obvious
glutInitWindowPosition(0, 0); // obvious
glutCreateWindow("Tic Tac Oh"); // window title
/* Call the display callback handler */
glutDisplayFunc(ourDisplay);
init();
/* Start the main loop that waits for events to happen and
then to process them */
glutMainLoop();
}
I'm thinking it has to do with the x, y coordinates of glVertex2f; I've tried using different coordinates (negatives) and it just moved the box into a different quarter of the window. I'm also thinking that the window coordinates (800 x 800) need to be manipulated somehow, but I'm just not sure.
You're using the old-school fixed-function pipeline, but you're not setting up your model-view-projection matrices. This means that your OpenGL window uses "clip space" coordinates, which go from -1 to +1. So the lower-left corner of your screen is (-1, -1) and the upper right is (+1, +1).
At the bare minimum, you will probably want to call glOrtho() to set your projection matrix, then glTranslatef() and glScalef() to set up your modelview matrix. (Or you can just continue to supply coordinates in clip space, but there's no real advantage to doing that, so you might as well choose your own coordinate system to make things easier for you.)
This will be covered in any OpenGL 1.x tutorial, perhaps you just haven't read that far yet. Look for phrases "matrix stack", "projection matrix", "modelview matrix".
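For instance, a minimal projection setup along those lines might look like the following sketch (my assumption: the board should span the coordinate range 0..grid in both axes, set up in the init() function the question already declares):
/* Sketch: map the window to [0, grid] on both axes so grid lines can be drawn
   at integer positions and fill the whole window. */
void init(void)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, grid, 0.0, grid, -1.0, 1.0); /* left, right, bottom, top, near, far */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
With a projection like that, buildGrid() can emit lines at integer coordinates (for example glVertex2f(0.0f, (float)i) to glVertex2f(size, (float)i), plus the vertical counterpart) instead of dividing by 5.0f.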

Using SDL_BlitScaled to created scaled copies of surfaces

So I'm working on some SDL2 wrapper code, and I'm trying to use SDL_BlitScaled to copy the data in a source surface into a destination surface that I've already created, like so:
SDL_Surface *loaded = IMG_Load("test.png");
SDL_SetSurfaceBlendMode(loaded, SDL_BLENDMODE_NONE);
SDL_Surface *out = SDL_CreateRGBSurface(0, 100, 100, loaded->format->BitsPerPixel,
loaded->format->Rmask, loaded->format->Gmask, loaded->format->Bmask, loaded->format->Amask);
SDL_BlitScaled(loaded, NULL, out, NULL);
SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, out);
SDL_Rect rec = {10, 10, 110, 110};
SDL_RenderCopy(ren, tex, NULL, &rec);
Don't worry about my renderer or window etc.; I've isolated the problem to somewhere in this code. The image does not appear on the screen; however, it does if I create a texture from the loaded surface. Thoughts? I imagine I'm misusing either CreateRGBSurface or BlitScaled (I did see another question about this, but the solution was unclear).
For me I had to do:
SDL_SetSurfaceBlendMode(loaded , SDL_BLENDMODE_NONE);
SDL_SetSurfaceBlendMode(out, SDL_BLENDMODE_NONE);
for it to work; otherwise some strange blending happens.
The docs page for this function says:
To copy a surface to another surface (or texture) without blending with the existing data, the blendmode of the SOURCE surface should be
set to 'SDL_BLENDMODE_NONE'.
So setting loaded is probably enough.
Edit: In the end I came up with this:
struct FreeSurface_Functor
{
void operator() (SDL_Surface* pSurface) const
{
if (pSurface)
{
SDL_FreeSurface(pSurface);
}
}
};
typedef std::unique_ptr<SDL_Surface, FreeSurface_Functor> SDL_SurfacePtr;
class SDLHelpers
{
public:
SDLHelpers() = delete;
static SDL_SurfacePtr ScaledCopy(SDL_Surface* src, SDL_Rect* dstSize)
{
SDL_SurfacePtr scaledCopy(SDL_CreateRGBSurface(0,
dstSize->w, dstSize->h,
src->format->BitsPerPixel,
src->format->Rmask, src->format->Gmask, src->format->Bmask, src->format->Amask));
// Get the old mode
SDL_BlendMode oldBlendMode;
SDL_GetSurfaceBlendMode(src, &oldBlendMode);
// Set the new mode so copying the source won't change the source
SDL_SetSurfaceBlendMode(src, SDL_BLENDMODE_NONE);
// Do the copy
if (SDL_BlitScaled(src, NULL, scaledCopy.get(), dstSize) != 0)
{
scaledCopy.reset();
}
// Restore the original blending mode
SDL_SetSurfaceBlendMode(src, oldBlendMode);
return scaledCopy;
}
};
