I'm working on an SDL2 wrapper, and I'm trying to use SDL_BlitScaled to copy the data from a source surface into a destination surface which I've already created, like so:
SDL_Surface *loaded = IMG_Load("test.png");
SDL_SetSurfaceBlendMode(loaded, SDL_BLENDMODE_NONE);
SDL_Surface *out = SDL_CreateRGBSurface(0, 100, 100, loaded->format->BitsPerPixel,
    loaded->format->Rmask, loaded->format->Gmask, loaded->format->Bmask, loaded->format->Amask);
SDL_BlitScaled(loaded, NULL, out, NULL);
SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, out);
SDL_Rect rec = {10, 10, 110, 110};
SDL_RenderCopy(ren, tex, NULL, &rec);
Don't worry about my renderer or window, etc.; I've isolated the problem to this code. The image does not appear on the screen, but it does if I create the texture from the loaded surface instead. Thoughts? I imagine I'm misusing either CreateRGBSurface or BlitScaled (I did see another question about this, but the solution was unclear).
For me I had to do:
SDL_SetSurfaceBlendMode(loaded, SDL_BLENDMODE_NONE);
SDL_SetSurfaceBlendMode(out, SDL_BLENDMODE_NONE);
for it to work; otherwise some strange blending happens.
The docs page for this function says:
To copy a surface to another surface (or texture) without blending with the existing data, the blendmode of the SOURCE surface should be
set to 'SDL_BLENDMODE_NONE'.
So setting it on loaded alone is probably enough.
Edit: In the end I came up with this:
struct FreeSurface_Functor
{
    void operator() (SDL_Surface* pSurface) const
    {
        if (pSurface)
        {
            SDL_FreeSurface(pSurface);
        }
    }
};

typedef std::unique_ptr<SDL_Surface, FreeSurface_Functor> SDL_SurfacePtr;

class SDLHelpers
{
public:
    SDLHelpers() = delete;

    static SDL_SurfacePtr ScaledCopy(SDL_Surface* src, SDL_Rect* dstSize)
    {
        SDL_SurfacePtr scaledCopy(SDL_CreateRGBSurface(0,
            dstSize->w, dstSize->h,
            src->format->BitsPerPixel,
            src->format->Rmask, src->format->Gmask, src->format->Bmask, src->format->Amask));

        // Get the old mode
        SDL_BlendMode oldBlendMode;
        SDL_GetSurfaceBlendMode(src, &oldBlendMode);

        // Set the new mode so copying the source won't change the source
        SDL_SetSurfaceBlendMode(src, SDL_BLENDMODE_NONE);

        // Do the copy
        if (SDL_BlitScaled(src, NULL, scaledCopy.get(), dstSize) != 0)
        {
            scaledCopy.reset();
        }

        // Restore the original blending mode
        SDL_SetSurfaceBlendMode(src, oldBlendMode);
        return scaledCopy;
    }
};
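For reference, here is a hypothetical usage sketch; it reuses the loaded surface and the ren renderer from the question above, so those names are assumptions:
SDL_Rect size = {0, 0, 100, 100};
SDL_SurfacePtr scaled = SDLHelpers::ScaledCopy(loaded, &size);
if (scaled)
{
    SDL_Texture* tex = SDL_CreateTextureFromSurface(ren, scaled.get());
    // ... render with tex as usual ...
    SDL_DestroyTexture(tex);
}
// the scaled surface is freed automatically when 'scaled' goes out of scope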
I want to rotate a sprite in C and SDL2, with a set center of rotation, and without scaling or anti-aliasing.
My game resolution is 320x240, and the display is scaled up when I set the game to full screen, because I'm using SDL_RenderSetLogicalSize(renderer, 320, 240).
Using SDL2's SDL_RenderCopyEx() (or SDL_RenderCopyExF()) to rotate a SDL_Texture.
As shown in this example ( https://imgur.com/UGNDfEY ), when the window is set to full screen, the texture is scaled up at a much higher resolution. I would like the final 320x240 rendering to be scaled up, not the individual textures.
Using SDL_gfx's rotozoomSurface() was a possible alternative.
However, as shown in this example ( https://imgur.com/czPEUhv ), while this method gives the intended low-resolution and aliased look, it has no center of rotation and renders the transparency color as half-transparent black.
Is there a function that does what I'm looking for? Are there some tricks to get around that?
What I would do is render what you want into an SDL_Texture, and then copy this texture to the renderer, using something like:
// Set 'your_texture' as target
SDL_SetRenderTarget(your_renderer, your_texture);
// We are now printing the rotated image on the texture
SDL_RenderCopyEx(your_renderer, // we still use the renderer; it will be automatically printed into the texture 'your_texture'
                 your_image,
                 &srcrect,
                 &dstrect,
                 angle,
                 &center,
                 SDL_FLIP_NONE); // unless you want to flip vertically / horizontally
// Set the renderer as target and print the previous texture
SDL_SetRenderTarget(your_renderer, NULL);
SDL_RenderClear(your_renderer);
SDL_RenderCopy (your_renderer, your_texture, NULL, NULL); // here the scale is automatically done
SDL_RenderPresent(your_renderer);
It works, but I don't know if it is very efficient.
Don't forget to create your_texture with the SDL_TEXTUREACCESS_TARGET access flag.
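For instance, a minimal sketch of creating such a texture (the 320x240 size is just an assumption taken from the question):
SDL_Texture* your_texture = SDL_CreateTexture(your_renderer,
                                              SDL_PIXELFORMAT_RGBA8888,
                                              SDL_TEXTUREACCESS_TARGET, // required to use it with SDL_SetRenderTarget
                                              320, 240);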
Hope this helps,
Durza42
Thanks to @Durza42, here's the solution to my problem:
#define kScreenWidth 320
#define kScreenHeight 240
SDL_Window* g_window = NULL;
SDL_Texture* g_texture = NULL;
SDL_Renderer* g_renderer = NULL;
SDL_Texture* g_sprite = NULL;
double g_sprite_angle = 0.0;
SDL_FRect g_sprite_frect = {
    .x = 50.0f,
    .y = 50.0f,
    .w = 32.0f,
    .h = 32.0f,
};

void
sdl_load(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    g_window = SDL_CreateWindow(NULL, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, kScreenWidth, kScreenHeight, 0);
    g_renderer = SDL_CreateRenderer(g_window, -1, SDL_RENDERER_PRESENTVSYNC);
    g_texture = SDL_CreateTexture(g_renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, kScreenWidth, kScreenHeight);
    SDL_RenderSetLogicalSize(g_renderer, kScreenWidth, kScreenHeight);
}

void
sprite_load(void)
{
    g_sprite = IMG_LoadTexture(g_renderer, "./sprite.png");
}

void
draw(void)
{
    SDL_SetRenderTarget(g_renderer, g_texture);
    SDL_RenderClear(g_renderer);
    SDL_RenderCopyExF(g_renderer, g_sprite, NULL, &g_sprite_frect, g_sprite_angle, NULL, SDL_FLIP_NONE);
    SDL_SetRenderTarget(g_renderer, NULL);
    SDL_RenderClear(g_renderer);
    SDL_RenderCopy(g_renderer, g_texture, NULL, NULL);
    SDL_RenderPresent(g_renderer);
}
I want to simulate a TV screen in my game. A 'no signal' image is displayed the whole time; it will eventually be replaced by a scene of a man shooting another one, that's all. So I wrote this, which loads my image every time:
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    /*Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
    if (Scene==NULL){printf("Erreur no signal : %s\n", SDL_GetError());}*/
    if (Scene == NULL)
    {
        Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
        if (Scene == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
    //SDL_DestroyTexture(Scene);
    //500, 545
}
and it causes memory leaks. I've tried destroying the texture in the loop, etc., but nothing changes. So, can you suggest a way to load the image at the very beginning, keep it, and display it only when needed?
I agree with the commenters that a dedicated texture loader is the correct solution, but if you only want this behavior for one particular texture it may be overkill. In that case you can write a separate function which loads this particular texture and make sure it is only called once.
Alternatively, you can use static variables. If a variable declared in a function is marked static, it retains its value across calls to that function. You can find a simple example here (it's a tutorial-grade source but it shows basic usage) or here (SO source).
By modifying your code ever so slightly you can make sure that the texture is loaded only once. By marking a pointer to it as static, you ensure that its value (the address of the loaded texture) is not lost between calls to the function. The pointer will then live in memory until the program terminates. Thanks to this, we do not have to free the texture's memory (unless you explicitly want to free it at some point, but then a texture manager is probably a better idea). A memory leak will not occur, since we never lose the reference to the texture.
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    static SDL_Texture* Scene_cache = NULL; // Introduce a variable which remembers the texture across function calls
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    if (Scene_cache == NULL) // First time we call the function, Scene_cache will be NULL, but in next calls it will point to a loaded texture
    {
        Scene_cache = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg"); // If Scene_cache is NULL we load the texture
        if (Scene_cache == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    Scene = Scene_cache; // Set Scene to point to the loaded texture
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
}
If you care about performance and memory usage, you should read about the consequences of static variables, for instance their impact on caching and how they work internally. This might be considered a "dirty hack", but it might be just enough for a small project that does not need a bigger solution.
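For completeness, if you prefer the load-once approach the question asks about (and that the commenters suggest), a minimal sketch could look like this; the Load_nosignal function and the g_no_signal global are hypothetical, not part of the original code:
SDL_Texture* g_no_signal = NULL; // loaded once at startup, reused every frame

void Load_nosignal(SDL_Renderer* Rendu) // call once, right after creating the renderer
{
    g_no_signal = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
    if (g_no_signal == NULL) { printf("Erreur no signal : %s\n", SDL_GetError()); }
}

void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu) // call whenever the 'no signal' image is needed
{
    SDL_Rect src = {(1200 - 500) / 2, 0, 500, 545};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_RenderCopy(Rendu, g_no_signal, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
}

// at shutdown:
// SDL_DestroyTexture(g_no_signal);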
I'm writing a graphical interface in SDL2, but if I create the renderer with the flags SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC I get a noticeable slowdown compared to the SDL_RENDERER_SOFTWARE flag, which I think shouldn't be possible.
I can't use SDL_RENDERER_SOFTWARE because I need VSYNC enabled to avoid tearing, and I need double buffering for that.
Actually, I have realised that the bottleneck is the function SDL_CreateTextureFromSurface().
Since my code is pretty big, I'll try to explain it instead of pasting everything here:
1. Initialize SDL and create an SDL_Surface named screen_surface with SDL_CreateRGBSurface, the same size as my window, onto which I'll blit every other surface.
2. Draw a big square in the middle of that surface with SDL_FillRect, and draw a rack inside that square by calling SDL_FillRect twice per cell to draw two squares, one 2 pixels bigger than the other, to simulate an empty square (I know I can do the same with SDL_RenderDrawRect, but I think it is more optimal to draw into a surface instead of the renderer), for every cell of the rack, until I have 4096 cells.
3. Using SDL_ttf, write info into each cell: I call TTF_RenderUTF8_Blended to get a surface for each cell and use SDL_BlitSurface to merge these surfaces into screen_surface.
4. Go through the big square illuminating the cells that are being checked; for that I use SDL_FillRect to draw a little square that travels through the rack.
5. Finally, use SDL_CreateTextureFromSurface to turn screen_surface into screen_texture, followed by SDL_RenderCopy and SDL_RenderPresent.
These five steps are inside the main while loop together with the event handling, and following the recommendations in the SDL API I call SDL_RenderClear on each loop to redraw everything again.
Having said all this, as I said at the beginning, I realised that the bottleneck is step 5, independent of the other steps, because if I move steps 2 and 3 before the while loop, leaving inside the loop only the rack illumination on a black window (since I'm not drawing anything else), I get the same slowdown. Only when I manage to draw things without using textures does the speed increase notably.
Here are my questions:
Why could this be happening? Theoretically, shouldn't using double buffering be faster than using the software renderer?
Is there any way to simulate vsync with the software renderer?
Can I render a surface without building a texture?
PS: I have read a bunch of posts around the internet, so I'll answer some typical questions up front: I reuse screen_surface, I can't reuse the surfaces that TTF returns, and I'm creating and destroying the texture each loop (because I think I can't reuse it).
Here is my code:
int main(int ac, char **av)
{
    t_data data;

    init_data(&data); /* initialize SDL */
    ft_ini_font(data); /* Initialize TTF */
    ft_ini_interface(data);
    main_loop(&data);
    ft_quit_graphics(data); /* Close SDL and TTF */
    free(data);
    return (0);
}
void main_loop(t_data *data)
{
    while (data->running)
    {
        events(data);
        SDL_BlitSurface(data->rack_surface, NULL, data->screen_surface, &(SDL_Rect){data->rack_x, data->rack_y, data->rack_w, data->rack_h}); /* Insert the rack in the screen_surface */
        ft_write_info(data);
        ft_illum_celd(data);
        set_back_to_front(data);
    }
}
void ft_ini_interface(t_data *data)
{
    data->screen_surface = SDL_CreateRGBSurface(0, data->w, data->h, 32, RED_MASK, GREEN_MASK, BLUE_MASK, ALPHA_MASK);
    ...
    /* stuff to calculate rack dims */
    ...
    generate_rack_surface(data); /* fills data->rack_surface */
}
void generate_rack_surface(t_data *data)
{
    int i;
    int j;
    int k;

    data->rack_surface = SDL_CreateRGBSurface(0, data->rack_w, data->rack_h, 32, RED_MASK, GREEN_MASK, BLUE_MASK, ALPHA_MASK);
    SDL_FillRect(data->rack_surface, NULL, 0x3D3D33FF);
    ...
    /* init i, j, k to draw the rack properly */
    ...
    while (all cells not drawn)
    {
        if (k && !i)
        {
            data->celd_y += data->celd_h - 1;
            data->celd_x = 0;
            k--;
        }
        SDL_FillRect(data->rack_surface, &(SDL_Rect){data->celd_x - 1, data->celd_y - 1, data->celd_w + 2, data->celd_h + 2}, 0x1C1C15FF);
        SDL_FillRect(data->rack_surface, &(SDL_Rect){data->celd_x, data->celd_y, data->celd_w, data->celd_h}, 0x3D3D33FF);
        data->celd_x += data->celd_w - 1;
        i--;
    }
}
void ft_write_info(t_data *data)
{
    SDL_Color color;
    SDL_Surface *surf_byte;
    char *info;

    while (all info not written)
    {
        color = take_color(); /* take the color of the info (only 4 ifs) */
        info = take_info(data); /* take info from a source using malloc */
        surf_byte = TTF_RenderUTF8_Blended(data->font, info, color);
        ...
        /* stuff to take the correct position in the rack */
        ...
        SDL_BlitSurface(surf_byte, NULL, data->screen_surface, &(SDL_Rect){data->info_x, data->info_y, data->celd_w, data->celd_h});
        SDL_FreeSurface(surf_byte);
        free(info);
    }
}
void ft_illum_celd(t_data *data)
{
    int color;
    SDL_Rect illum;

    illum = next_illum(data); /* returns an SDL_Rect with the position of the info being read */
    SDL_FillRect(data->screen_surface, &illum, color);
}
void set_back_to_front(t_data *data)
{
    SDL_Texture *texture;

    texture = SDL_CreateTextureFromSurface(data->Renderer, data->screen_surface);
    SDL_RenderCopy(data->Renderer, texture, NULL, NULL);
    SDL_DestroyTexture(texture);
    SDL_RenderPresent(data->Renderer);
    SDL_RenderClear(data->Renderer);
}
I am attempting to use GDI+ in my C application to take a screenshot and save it as a JPEG. I am using GDI+ to convert the BMP to JPEG, but apparently when calling the GdiplusStartup function the return code is 2 (InvalidParameter) instead of 0:
int main()
{
    GdiplusStartupInput gdiplusStartupInput;
    ULONG_PTR gdiplusToken;

    //if(GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL) != 0)
    //    printf("GDI NOT WORKING\n");
    printf("%d", GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL));

    HDC hdc = GetDC(NULL); // get the desktop device context
    HDC hDest = CreateCompatibleDC(hdc); // create a device context to use yourself

    // get the height and width of the screen
    int height = GetSystemMetrics(SM_CYVIRTUALSCREEN);
    int width = GetSystemMetrics(SM_CXVIRTUALSCREEN);

    // create a bitmap
    HBITMAP hbDesktop = CreateCompatibleBitmap(hdc, width, height);

    // use the previously created device context with the bitmap
    SelectObject(hDest, hbDesktop);

    // copy from the desktop device context to the bitmap device context
    // call this once per 'frame'
    BitBlt(hDest, 0, 0, width, height, hdc, 0, 0, SRCCOPY);

    // after the recording is done, release the desktop context you got..
    ReleaseDC(NULL, hdc);

    // ..and delete the context you created
    DeleteDC(hDest);

    SaveJpeg(hbDesktop, "a.jpeg", 100);

    GdiplusShutdown(gdiplusToken);
    return 0;
}
I am trying to figure out why the GdiplusStartup function is not working.
Any thoughts?
Initialize the gdiplusStartupInput variable with the following values: GdiplusVersion = 1, DebugEventCallback = NULL, SuppressBackgroundThread = FALSE, SuppressExternalCodecs = FALSE.
According to the MSDN article on the GdiplusStartup function (http://msdn.microsoft.com/en-us/library/windows/desktop/ms534077%28v=vs.85%29.aspx), the GdiplusStartupInput structure has a default constructor which initializes the structure with these values. Since you are calling the function from C, the constructor never runs and the structure remains uninitialized. Provide your own initialization code to solve the problem.
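For example, a minimal sketch of that initialization in C, simply spelling out the values listed above:
GdiplusStartupInput gdiplusStartupInput;
gdiplusStartupInput.GdiplusVersion = 1; // must be 1
gdiplusStartupInput.DebugEventCallback = NULL;
gdiplusStartupInput.SuppressBackgroundThread = FALSE;
gdiplusStartupInput.SuppressExternalCodecs = FALSE;
if (GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL) != 0)
    printf("GDI+ NOT WORKING\n");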
// As a global
ULONG_PTR gdiplusToken;

// At the top of main
GdiplusStartupInput gdiplusStartupInput;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
works for me.
I am making a game for my CS class and the sprites I have found online are too small. How do you 'stretch' a bitmap, i.e. make it bigger, using SDL? (I need to increase their size by 50%; they're all the same size.) A snippet of example code would be appreciated.
This question does not specify the SDL version, and even though SDL2 was not available when the question was written, I believe an SDL2 answer adds completeness here.
Unlike SDL 1.2, scaling is possible in SDL2 using the API function SDL_RenderCopyEx. No additional libraries besides the core SDL2 library are needed.
int SDL_RenderCopyEx(SDL_Renderer* renderer,
                     SDL_Texture* texture,
                     const SDL_Rect* srcrect,
                     const SDL_Rect* dstrect,
                     const double angle,
                     const SDL_Point* center,
                     const SDL_RendererFlip flip)
By setting the size of dstrect one can scale the texture to an integer number of pixels. It is also possible to rotate and flip the texture at the same time.
Reference: https://wiki.libsdl.org/SDL_RenderCopyEx
Create your textures as usual:
surface = IMG_Load(filePath);
texture = SDL_CreateTextureFromSurface(renderer, surface);
And when it's time to render it, call SDL_RenderCopyEx instead of SDL_RenderCopy
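For instance, here's a minimal sketch of drawing the texture at 150% of its original size; the variable names, position, and scale factor are just assumptions to match the question:
int w, h;
SDL_QueryTexture(texture, NULL, NULL, &w, &h); // query the original texture size
SDL_Rect dstrect = { 100, 100, (int)(w * 1.5), (int)(h * 1.5) }; // draw at 150% size
SDL_RenderCopyEx(renderer, texture, NULL, &dstrect, 0.0, NULL, SDL_FLIP_NONE); // no rotation, no flip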
You're going to get a better-looking result using software that is designed for this task. A good option is ImageMagick. It can be used from the command line or programmatically.
For example, from the command line you just enter:
convert sprite.bmp -resize 150% bigsprite.bmp
If for some strange reason you want to write your own bilinear resize, this guy looks like he knows what he is doing.
Have you tried the following?
SDL_Rect src, dest;
src.x = 0;
src.y = 0;
src.w = image->w;
src.h = image->h;
dest.x = 100;
dest.y = 100;
dest.w = image->w*1.5;
dest.h = image->h*1.5;
SDL_BlitSurface(image, &src, screen, &dest);
For stretching, use the undocumented SDL function:
extern DECLSPEC int SDLCALL SDL_SoftStretch(SDL_Surface *src, SDL_Rect *srcrect,
                                            SDL_Surface *dst, SDL_Rect *dstrect);
Like:
SDL_SoftStretch(image, &src, screen, &dest);