End of draw loop causing massive delay

I have multiple 'modes' which draw different things on the screen, e.g. movement and inventory. When I switch between these, I use a transition that fades to black, then fades back into the other mode.
The transition uses millis() and the time it started to determine how transparent to make a black rectangle that covers the screen.
However, there is a massive delay at the end of the draw function which is disrupting this. The delay happens right after the mode being drawn behind the rectangle changes, and it only happens on the first transition, not on subsequent ones.
What is causing the delay and how can I prevent it?
My code is very long and I have not been able to find what might be causing this, so I'm not sure exactly what code to post.
I've tried using Processing's built-in debugger, but it doesn't pause millis(), which makes it very unhelpful here.
I've also tried adding several println() statements as timestamps, but these only narrow the problem down to somewhere between the end of draw() and the start of the next loop of draw().
// Transitions
if (p.transMode != ""){
  if (timer == 0){
    timer = millis(); // if timer hasn't been started, start it
  }
  ctime = millis() - timer; // current time
  if (ctime < (transT*2/5)){
    fill(0, 0, 0, map(ctime, 0, transT*2/5, 0, 255)); // get darker
  }
  else if (ctime < transT/2){
    if (p.transMode != p.mode){
      p.mode = p.transMode; // change mode in transition and move if able
    }
    fill(0, 0, 0, 255); // fill black
  }
  else if (ctime < transT){
    fill(0, 0, 0, map(ctime - transT/2, 0, transT/2, 255, 0)); // get lighter
  }
  else { // if transition finished
    timer = 0;        // reset timer
    p.transMode = ""; // turn off transition
    fill(0, 0, 0, 0);
  }
  // draw transition rectangle
  beginShape();
  vertex(-100, -100, 200);
  vertex(1100, -100, 200);
  vertex(1100, 600, 200);
  vertex(-100, 600, 200);
  endShape();
}
Instead of fading back into the new mode, it stays completely black for longer, then suddenly cuts to the other mode (the timer doesn't let it enter the if branch that makes it fade back in).

Related

Why does my window lag when I run multiple instances of it?

I created a Win32 window app that moves around the screen occasionally, sort of like a pet. As it moves, it switches between 2 bitmaps to show 'animation' of it moving. The implementation involves multiple WM_TIMER messages: one timer moves the window, another changes the bitmap and the window's region (to only display the bitmap without the transparent parts) as it is moving, and another changes the direction the window moves.
The window runs perfectly smoothly by itself, but when I open multiple instances, the animations and movements start to lag - it is not so noticeable at 2 windows, but 3 instances and above cause every single window to start lagging very noticeably. The movement and animations are choppy and even freeze occasionally.
I have tried removing portions of the code to pinpoint the cause of the issue, and apparently this only occurs when a section of the following code is put in (I have marked it out with comments):
HBITMAP hBitMap = NULL;
BITMAP infoBitMap;
hBitMap = LoadBitmap(GetModuleHandle(NULL), IDB_BITMAP2);
if (hBitMap == NULL)
{
    MessageBoxA(NULL, "COULD NOT LOAD PET BITMAP", "ERROR", MB_OK);
}
HRGN BaseRgn = CreateRectRgn(0, 0, 0, 0);
HDC winDC = GetDC(hwnd);
HDC hMem = CreateCompatibleDC(winDC);
GetObject(hBitMap, sizeof(infoBitMap), &infoBitMap);
HGDIOBJ hMemOld = SelectObject(hMem, hBitMap);
COLORREF transparentCol = RGB(255, 255, 255);
for (int y = 0; y < infoBitMap.bmHeight; y++) //<<<< THIS SECTION ONWARDS
{
    int x, xLeft, xRight;
    x = 0;
    do {
        xLeft = xRight = 0;
        while (x < infoBitMap.bmWidth && (GetPixel(hMem, x, y) == transparentCol))
        {
            x++;
        }
        xLeft = x;
        while (x < infoBitMap.bmWidth && (GetPixel(hMem, x, y) != transparentCol))
        {
            x++;
        }
        xRight = x;
        HRGN TempRgn;
        TempRgn = CreateRectRgn(xLeft, y, xRight, y + 1);
        int ret = CombineRgn(BaseRgn, BaseRgn, TempRgn, RGN_OR);
        if (ret == ERROR)
        {
            MessageBoxA(NULL, "COMBINE REGION FAILED", "ERROR", MB_OK);
        }
        DeleteObject(TempRgn);
    } while (x < infoBitMap.bmWidth);
}
SetWindowRgn(hwnd, BaseRgn, TRUE); //<<<<---- UNTIL HERE
BitBlt(winDC, 0, 0, infoBitMap.bmWidth, infoBitMap.bmHeight, hMem, 0, 0, SRCCOPY);
SelectObject(hMem, hMemOld);
DeleteDC(hMem);
ReleaseDC(hwnd, winDC);
The commented section is the code I use to eliminate the transparent parts of the bitmap from being displayed in the window client region. It is run every time the app changes bitmap to display animation.
The app works perfectly fine if I remove that code, so I suspect this is causing the issue. Does anyone know why this section of code causes lag, and ONLY with multiple instances open? Is there a way to deal with this lag?
You're iterating over each pixel on every update (correct me if I'm wrong), which is a fairly slow process, relatively speaking.
A better option would be to use something like this: https://stackoverflow.com/a/3970218/19192256 to create a mask color and simply use masking to remove the transparent pixels.
Creating multiple regions and concatenating them is a very slow and resource/CPU-intensive operation. Instead, use ExtCreateRegion() to create a single region from an array of rectangles.
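For illustration, a rough sketch of that approach, reusing the question's variables (hMem, infoBitMap, transparentCol, hwnd) and assuming <vector> is available; error handling is omitted:

// Sketch: collect one RECT per opaque horizontal run, then build the
// window region with a single ExtCreateRegion() call instead of
// thousands of CombineRgn() calls.
std::vector<RECT> rects;
for (int y = 0; y < infoBitMap.bmHeight; y++) {
    int x = 0;
    while (x < infoBitMap.bmWidth) {
        while (x < infoBitMap.bmWidth && GetPixel(hMem, x, y) == transparentCol) x++;
        int xLeft = x; // start of an opaque run
        while (x < infoBitMap.bmWidth && GetPixel(hMem, x, y) != transparentCol) x++;
        if (x > xLeft) {
            RECT r = { xLeft, y, x, y + 1 };
            rects.push_back(r);
        }
    }
}

// Pack the rectangles into the RGNDATA layout ExtCreateRegion() expects.
std::vector<BYTE> buf(sizeof(RGNDATAHEADER) + rects.size() * sizeof(RECT));
RGNDATA* data = reinterpret_cast<RGNDATA*>(buf.data());
data->rdh.dwSize = sizeof(RGNDATAHEADER);
data->rdh.iType = RDH_RECTANGLES;
data->rdh.nCount = (DWORD)rects.size();
data->rdh.nRgnSize = 0;
SetRect(&data->rdh.rcBound, 0, 0, infoBitMap.bmWidth, infoBitMap.bmHeight);
memcpy(data->Buffer, rects.data(), rects.size() * sizeof(RECT));

HRGN rgn = ExtCreateRegion(NULL, (DWORD)buf.size(), data);
if (rgn) SetWindowRgn(hwnd, rgn, TRUE); // the window takes ownership of rgn

Note that GetPixel() is still the slow part here; reading the bits once with GetDIBits() and scanning the array in memory would avoid the per-pixel GDI call entirely.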
Alternatively, forget using a region at all. Simply display your bitmap on the window normally and fill in the desired areas of the window with a unique color that you can make transparent using SetLayeredWindowAttributes(), as described in @Substitute's answer.
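A rough sketch of that layered-window route, keyed on the white used in the question (WS_EX_LAYERED can also be passed to CreateWindowEx() up front):

// Sketch: make white pixels transparent via a layered-window color key.
SetWindowLongPtr(hwnd, GWL_EXSTYLE,
                 GetWindowLongPtr(hwnd, GWL_EXSTYLE) | WS_EX_LAYERED);
SetLayeredWindowAttributes(hwnd, RGB(255, 255, 255), 0, LWA_COLORKEY);
// Now just BitBlt the bitmap normally; the white areas become
// transparent (and click-through) with no region computation at all.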

SDL, how to fix key input lag [duplicate]

I am making a game where a nozzle of a tank rotates around when space is pressed to shoot enemies. However, right at the beginning when space is pressed, it seems to stop for a few milliseconds and then continues without any problems. How can I make it so that the rotation is smooth and consistent as soon as space is pressed, right from the start? Here is a minimal reproducible example:
#include "SDL.h"
#include <iostream>
class Nozzle
{
public:
void draw(SDL_Renderer* renderer, int cx, int cy, int l)
{
float x = ((float)cos(angle) * l) + cx;
float y = ((float)sin(angle) * l) + cy;
SDL_RenderDrawLine(renderer, cx, cy, (int)x, (int)y);
}
void plusAngle(float a)
{
angle += a;
}
private:
float angle = 0.0f;
};
int main(int argc, char* argv[])
{
SDL_Window* window = SDL_CreateWindow("RGame", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1200, 600, false);
SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
SDL_Event event;
Nozzle nozzle;
bool running = true;
while (running)
{
while (SDL_PollEvent(&event))
{
if (event.type == SDL_QUIT)
running = false;
if (event.type == SDL_KEYDOWN)
{
if (event.key.keysym.sym == SDLK_SPACE)
nozzle.plusAngle(0.1f);
}
}
SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
SDL_RenderClear(renderer);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
nozzle.draw(renderer, 600, 300, 70);
SDL_RenderPresent(renderer);
}
SDL_DestroyWindow(window);
SDL_Quit();
return 1;
}
if (event.type == SDL_KEYDOWN)
{
    if (event.key.keysym.sym == SDLK_SPACE)
        nozzle.plusAngle(0.1f);
}
This is not how you do controls in a game.
If you open a text editor and hold a key, you'll see one letter being typed, then, after a delay, a steady stream of the same repeated letter. And SDL does the same thing, it gives you fake repeated "key down" events in this manner. This is normally used for editing text, not for game controls. (Those repeated events are marked by event.key.repeat == true).
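For instance, a sketch of filtering those repeats inside the existing event loop:

if (event.type == SDL_KEYDOWN)
{
    if (event.key.repeat)
        continue; // synthetic auto-repeat, not a fresh key press
    // handle the initial press here
}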
What you should do is to create something like bool space_key_down, set it to true when you get SDL_KEYDOWN for the space key, and to false when you get SDL_KEYUP for the same key. Then, outside of the event loop, if the variable is set, you rotate your nozzle.
Or you can use SDL_GetKeyboardState. SDL does this thing automatically for every key, and you can access the list of flags it maintains using this function.
Also, while we're at it, you normally don't want to use keycodes (.sym == SDLK_SPACE) for game controls. Prefer scancodes (.scancode == SDL_SCANCODE_SPACE). The difference only becomes apparent on exotic layouts (e.g. AZERTY): keycodes represent the letters printed on the keycaps, while scancodes represent physical key locations. For example, on AZERTY you want to use ZQSD instead of WASD. If you use scancodes, it will happen automatically (SDL_SCANCODE_W will represent Z, and so on).
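Putting both suggestions together, a minimal sketch of the polling approach, showing only the changed part of the question's loop:

// Drain events; SDL updates its key-state array as a side effect.
while (SDL_PollEvent(&event))
{
    if (event.type == SDL_QUIT)
        running = false;
}

// Per-scancode array of "is this key currently held".
const Uint8* keys = SDL_GetKeyboardState(NULL);
if (keys[SDL_SCANCODE_SPACE])
    nozzle.plusAngle(0.1f); // smooth from the first frame, no repeat delay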
Scaling rotation with frame-rate isn't really what I am looking for. It's a different problem.
You need to solve this problem too. If you don't want the rotation speed to depend on FPS (a bad thing), you must either multiply the rotation angle by the frame length (it works, but it's easy to make mistakes this way), or make sure your game logic runs a fixed number of times per second regardless of the FPS (I prefer this solution). See Fix Your Timestep!.
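A bare-bones version of that fixed-timestep idea, reusing the question's names; the 16 ms tick is an arbitrary choice:

// Sketch: run game logic at a fixed rate regardless of render FPS.
const Uint32 TICK_MS = 16; // ~60 logic updates per second
Uint32 accumulator = 0;
Uint32 last = SDL_GetTicks();

while (running)
{
    Uint32 now = SDL_GetTicks();
    accumulator += now - last;
    last = now;

    // ... process events here ...

    while (accumulator >= TICK_MS)
    {
        // Fixed-rate logic, e.g. rotate while space is held.
        if (SDL_GetKeyboardState(NULL)[SDL_SCANCODE_SPACE])
            nozzle.plusAngle(0.1f);
        accumulator -= TICK_MS;
    }

    // ... render once per frame ...
}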

sdl2 flickering unless I don't use createrenderer

I have a somewhat basic rendering loop which blits a scaled image to the screen as fast as I can process the event loop. I created a minimal example that recreates the flickering on pastebin here.
If I don't use SDL_CreateRenderer, and instead leave renderer as NULL, it works. I just can't clear the screen first. If I set the renderer, I get this crazy fast flickering.
// if I comment this out in my init_sdl(), no flickering...
renderer = SDL_CreateRenderer(window, -1, 0);
assert(renderer != NULL);
My draw function happens at the end of the event loop:
void draw()
{
    SDL_SetRenderDrawColor(renderer, 255, 0, 128, 255);
    SDL_RenderClear(renderer);
    SDL_Rect dstrect = {
        .x = 50,
        .y = 50,
        .h = 100,
        .w = 100,
    };
    SDL_BlitScaled(img, NULL, screen, &dstrect);
    SDL_UpdateWindowSurface(window);
    SDL_RenderPresent(renderer);
}
I've seen this potential duplicate question, but the problem was that they had their RenderPresent in the wrong place. You can see I'm calling SDL_RenderPresent at the end of all drawing operations, which was my takeaway from that. It is still happening.
I'm using msys2 (mingw_x64), gcc, windows 10, SDL2.

Scaled Layers in GDI

Original question
Basically, I have two bitmaps, and I want to put one behind the other, scaled down to half its size.
Both are centered, and are of the same resolution.
The catch is that I want to put more than one bitmap on this back layer eventually, and want the scaling to apply to the whole layer and not just the individual bitmap.
My thought is I would use a memory DC for the back layer, capture its contents into a bitmap of its own, and use StretchBlt to place it in my main DC.
The code I have right now doesn't work, and I can't make sense of it, let alone find anyone who has done this before for direction.
My variables at the moment are as follows
hBitmap - back bitmap
hFiller - front bitmap
hdc - main DC
ldc - back DC (made with CreateCompatibleDC(hdc))
resh - width of hdc
resv - height of hdc
note that my viewport origin is set to the center
--this part above is solved, with the one major issue being that it does not keep the back layers...
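For reference, the memory-DC plan from the original question can be sketched roughly like this (a sketch only; backBmp/oldBmp are placeholder names, and ldc/hdc/resh/resv follow the variable list above):

// Sketch: compose the back layer at full size in a memory DC, then
// scale the whole layer into the main DC with one StretchBlt.
ldc = CreateCompatibleDC(hdc);
HBITMAP backBmp = CreateCompatibleBitmap(hdc, resh, resv);
HGDIOBJ oldBmp = SelectObject(ldc, backBmp);

// ... BitBlt every background bitmap into ldc here ...

// Half size, centered (the viewport origin of hdc is the center).
StretchBlt(hdc, -(resh/4), -(resv/4), resh/2, resv/2,
           ldc, 0, 0, resh, resv, SRCCOPY);

SelectObject(ldc, oldBmp);
DeleteObject(backBmp);
DeleteDC(ldc);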
Revised Question
Here's my code. Everything works as intended except for the fact that the layers do not properly stack. They seem to erase what is underneath or fill it with black.
For the record this is a direct copy of my code. I explain sections of it but there is nothing missing between the code blocks.
case WM_TIMER:
{
    switch(wParam)
    {
    case FRAME:
If any position or rotation values have changed, the following section of code clears the screen and prepares it to be rewritten:
        if(reload == TRUE){
            tdc = CreateCompatibleDC(hdc);
            oldFiller = SelectObject(tdc, hFiller);
            GetObject(hFiller, sizeof(filler), &filler);
            StretchBlt(hdc, 0-(resh/2), 0-(resv/2), resh, resv, tdc, 0, 0, 1, 1, SRCCOPY);
            SelectObject(tdc, oldFiller);
            DeleteDC(tdc);
            if(turn == TRUE){
                xForm.eM11 = (FLOAT) cos(r/angleratio);
                xForm.eM12 = (FLOAT) sin(r/angleratio);
                xForm.eM21 = (FLOAT) -sin(r/angleratio);
                xForm.eM22 = (FLOAT) cos(r/angleratio);
                xForm.eDx = (FLOAT) 0.0;
                xForm.eDy = (FLOAT) 0.0;
                SetWorldTransform(hdc, &xForm);
            }
This is the part that only partially works. At a distance of 80, my scale value will make my bitmap 1 pixel by 1 pixel, so I consider this my "draw distance".
It scales properly, but the layers do not stack, as I mentioned above:
            for(int i=80; i>1; i--){
                tdc = CreateCompatibleDC(hdc);
                tbm = CreateCompatibleBitmap(hdc, resh, resv);
                SelectObject(tdc, tbm);
                BitBlt(tdc, 0-(resh/2), 0-(resv/2), resh, resv, hdc, 0, 0, SRCCOPY);
                //drawing code goes in here
                ldc = CreateCompatibleDC(hdc);
                oldBitmap = SelectObject(ldc, hBitmap);
                StretchBlt(tdc,
                           (int)(angleratio*atan((double)128/(double)i)), 0,
                           (int)(angleratio*atan((double)128/(double)i)),
                           (int)(angleratio*atan((double)128/(double)i)),
                           ldc, 0, 0, 128, 128, SRCCOPY);
                SelectObject(ldc, oldBitmap);
                DeleteDC(ldc);
                BitBlt(hdc, 0, 0, resh, resv, tdc, 0, 0, SRCCOPY);
                DeleteObject(tbm);
                DeleteDC(tdc);
            }
            reload = FALSE;
        }
This section below just checks for keyboard input, which changes the position or rotation of the "camera".
This part works fine and can be ignored:
        if(GetKeyboardState(NULL)==TRUE){
            reload = TRUE;
            if(GetKeyState(VK_UP)<0){
                fb--;
            }
            if(GetKeyState(VK_DOWN)<0){
                fb++;
            }
            if(GetKeyState(VK_RIGHT)<0){
                lr--;
            }
            if(GetKeyState(VK_LEFT)<0){
                lr++;
            }
            if(GetKeyState(0x57)<0){
                p++;
            }
            if(GetKeyState(0x53)<0){
                p--;
            }
        }
        break;
    }
}
break;

Efficient reflections in Clutter/COGL?

I'm working on a program that uses Clutter (1.10) and COGL to render elements to the display.
I've created a set of ClutterTextures that I am rendering video to, and I'd like the video textures to have reflections.
The "standard" way to implement this seems to be a callback every time the texture is painted, with code similar to:
static void texture_paint_cb (ClutterActor *actor) {
    ClutterGeometry geom;
    CoglHandle cmaterial;
    CoglHandle ctexture;
    gfloat squish = 1.5;

    cogl_push_matrix ();
    clutter_actor_get_allocation_geometry (actor, &geom);
    guint8 opacity = clutter_actor_get_paint_opacity (actor);
    opacity /= 2;

    CoglTextureVertex vertices[] =
    {
        { geom.width, geom.height, 0, 1, 1 },
        { 0, geom.height, 0, 0, 1 },
        { 0, geom.height*squish, 0, 0, 0 },
        { geom.width, geom.height*squish, 0, 1, 0 }
    };
    cogl_color_set_from_4ub (&vertices[0].color, opacity, opacity, opacity, opacity);
    cogl_color_set_from_4ub (&vertices[1].color, opacity, opacity, opacity, opacity);
    cogl_color_set_from_4ub (&vertices[2].color, 0, 0, 0, 0);
    cogl_color_set_from_4ub (&vertices[3].color, 0, 0, 0, 0);

    cmaterial = clutter_texture_get_cogl_material (CLUTTER_TEXTURE (actor));
    ctexture = clutter_texture_get_cogl_texture (CLUTTER_TEXTURE (actor));
    cogl_material_set_layer (cmaterial, 0, ctexture);
    cogl_set_source (cmaterial);
    cogl_set_source_texture (ctexture);
    cogl_polygon (vertices, G_N_ELEMENTS (vertices), TRUE);
    cogl_pop_matrix ();
}
This is then hooked to the paint signal on the ClutterTexture. There's a similar bit of code here (Google cache, since the page has been down today).
The problem I'm having is that the reflection effect causes a performance hit - 5-7 fps are lost when I enable it. Part of the problem is likely the low-power hardware I'm using (a Raspberry Pi).
I've managed to do something similar to what this code does by setting up a clone of the texture and making it somewhat transparent. This causes no performance hit whatsoever. However, unlike the paint-callback method, the reflection has hard edges and doesn't fade out.
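The clone-based setup described here can look roughly like this (a sketch; video_texture and stage stand in for the actual actors):

/* Sketch: cheap reflection via a flipped, half-opacity clone. */
ClutterActor *reflection = clutter_clone_new (video_texture);
gfloat x, y;
gfloat h = clutter_actor_get_height (video_texture);
clutter_actor_get_position (video_texture, &x, &y);
clutter_actor_set_position (reflection, x, y);
/* Rotating 180 degrees about the actor's bottom edge mirrors it
 * into the space directly below the source. */
clutter_actor_set_rotation (reflection, CLUTTER_X_AXIS, 180.0, 0, h, 0);
clutter_actor_set_opacity (reflection, 128); /* uniform 50%, hence the hard edge */
clutter_actor_add_child (stage, reflection);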
I'd like to get a better-looking reflection effect without the performance hit. I'm wondering if there's some way to get a similar effect that doesn't require so much work per paint. There are a bunch of other Clutter and COGL methods that manipulate materials, shaders, and so forth, but I have little to no OpenGL expertise, so I have no idea whether I could get something along those lines to do what I want, or even how to find examples of something similar to work from.
Is it possible to get a better-looking, high-performance reflection effect via Clutter/COGL?
