WPF window slightly moves when UI thread is stuck

I noticed that when our WPF application’s GUI thread is stuck for a few seconds, the application’s main window moves a few pixels to the top and left, and then, when the GUI thread is freed, the window returns to its location.
I have also noticed it happen in Visual Studio when I save a complex file and CodeMaid is busy cleaning it – the Visual Studio window moves a few pixels to the top and left, and then, when the job is done, it returns to its previous location.
This doesn’t always happen and is not easily reproducible. It seems not to happen when the window is maximized.
Can anyone say why this happens, and, more importantly, how to avoid it (other than keeping the GUI thread from getting stuck…)?

Related

Clearing X11 window with desktop background pixels, and putting XImage with transparent pixels on it?

I am trying to make an application that graphically repeats the mouse pointer, so I can ultimately make a mouse trail program for Ubuntu 18.04 - and it seems the way to do it is via X11/Xlib, although these days I don't even know, as my machine also reports Wayland:
$ loginctl | while IFS= read line; do echo "$line"; if [[ $line == *"tty"* ]]; then sessnum=$(echo "$line" | awk '{print $1;}'); echo sessnum: $sessnum\; $(loginctl show-session $sessnum -p Type); fi; done
SESSION UID USER SEAT TTY
c1 121 gdm seat0 tty1
sessnum: c1; Type=wayland
2 1000 administrator seat0 tty2
sessnum: 2; Type=x11
2 sessions listed.
Regardless, I managed to put together an unholy assemblage of:
https://keithp.com/blogs/Cursor_tracking/ - which sets up the program for capturing raw mouse events, so the mouse pointer position can be extracted (and a redraw triggered) whenever the mouse pointer position changes
xosd.c (via https://github.com/AndreRenaud/XOSD) - I thought at first that On-Screen Display would have a special method to draw on top - but this sets up a topmost window, child of the root, where all drawing happens; and it also sets up an event and a timer thread
... plus a ton of other code snippets (mostly from SO), which sort of does what I want (even if I don't really fully understand all of the layers and compositing that goes on in it). I posted this as a gist: xosd_track_cursor.c since it's 700+ lines (but I can post it here if needed).
Here is how the application behaves (also see full-res imgur .mp4 video)
Basically, at start, the "OSD" topmost window is set up, and it's quite a bit smaller than the desktop window - which helps us see the window border decorations around it (ultimately, I'd make this window the same size as the desktop).
At start, the desktop pixels at the location of this window have seemingly been copied as the window background.
Once the mouse pointer enters the OSD window, a circle is drawn, which becomes the mask for the OSD window (which again can be seen via the window border decorations) - and this circular window follows the mouse. Then, inside it, I use XFillRectangle to draw a lime rectangle, and then XPutImage to draw the pixels captured from the latest mouse pointer (the video doesn't show it, but the copied cursor also changes when the normal one does, say from left_ptr to bottom_side or xterm cursor bitmaps).
So far so good - but these are the problems, and questions:
All of the draws - both the lime rectangle and the mouse pointer copy - remain on the OSD window, and are not cleared upon redraw (which is quite obvious when the mouse pointer leaves the OSD window, so there is no masking). How can I erase these previous draws each time a new state is rendered?
When I click on a window to change the focus, it is obvious (especially when the mouse pointer leaves the OSD window, so there is no masking) that the desktop "background" shown in the OSD window shows the state from when the program started. How can I capture the current state of the desktop background (that is, behind the OSD window), so I can use that for clearing the OSD window in the previous step?
(I thought I could hide the OSD window, then capture the desktop at the same location with XGetImage, maybe (?) - then show the window; but showing it always sends an Expose event, which runs the expose function that does the redraw, and so I get a bunch of recursive calls hogging the application)
The mouse pointer copy is rendered with a black background - how can I make the drawing of mouse pointer copy transparent, where it is black now?
And, a sort of a bonus question (just curious here - obviously I'd rather not have the leftovers to begin with):
I first do XFillRectangle to draw a lime rectangle, then XPutImage to draw the pixels of the mouse pointer copy. I'd expect this to show the mouse cursor copy on top of the green pixels - and it is indeed so, while the OSD window is masked with the circle. But when the OSD window is shown in full, the leftovers make it seem as if the green pixels were drawn on top of the mouse cursor copy pixels. Why is this so?
Well, I think I got somewhere - the result is in the same gist, just different revision: gist: xosd_track_cursor.c (a31e9dff5); and it looks like this:
And so, to answer my questions:
How can I erase these previous draws each time a new state is rendered?
Cannot - not in the way the previous code was set up. It was set up as an override_redirect window, meaning it would stay out of the management of any window manager. Furthermore, the default bit depth was 24, meaning that transparency was not supported, so to grab the desktop "behind" (to use as a "clear" background), we'd have had to hide and then show the window, which used to cause recursion due to the reaction to Expose events.
However, I saw in How to make an OpenGL rendering context with transparent background? that using glXCreateContext might help - and it did. However, it turns out it was not necessary: as soon as XMatchVisualInfo successfully returned a match for 32-bit depth for the OSD window (alpha transparency supported), it was possible to define a "fully transparent" color via XSetForeground as 0x00000000 (as far as I can see, that is 0xAARRGGBB format), and use that to draw directly on the window with XFillRectangle - that manages to clear the entire OSD window transparently.
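To illustrate the idea, here is a minimal standalone sketch (not the gist code itself; the 200x200 size and the names are only for illustration, an actual transparent result still requires a compositing manager to be running, and you compile with -lX11):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    int scr = DefaultScreen(dpy);

    /* Ask for a 32-bit TrueColor visual so the window has an alpha channel. */
    XVisualInfo vinfo;
    if (!XMatchVisualInfo(dpy, scr, 32, TrueColor, &vinfo))
        return 1;

    XSetWindowAttributes attrs;
    attrs.colormap = XCreateColormap(dpy, RootWindow(dpy, scr), vinfo.visual, AllocNone);
    attrs.border_pixel = 0;
    attrs.background_pixel = 0;      /* 0x00000000 = alpha 0, fully transparent */
    attrs.override_redirect = True;  /* keep the window out of WM management, as in the gist */

    Window win = XCreateWindow(dpy, RootWindow(dpy, scr), 0, 0, 200, 200, 0,
                               vinfo.depth, InputOutput, vinfo.visual,
                               CWColormap | CWBorderPixel | CWBackPixel | CWOverrideRedirect,
                               &attrs);
    XMapWindow(dpy, win);

    /* "Clearing" the window is just a filled rectangle drawn in the fully transparent color. */
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, 0x00000000);
    XFillRectangle(dpy, win, gc, 0, 0, 200, 200);
    XFlush(dpy);

    /* ... event loop, cursor drawing, etc. ... */

    XFreeGC(dpy, gc);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}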
The mouse pointer copy is rendered with a black background - how can I make the drawing of mouse pointer copy transparent, where it is black now?
It turns out this also started working as soon as the window was created with XCreateWindow using the settings from XMatchVisualInfo for 32-bit depth. By that, I mean that XPutImage now rendered the transparent points in the cursor image as "see-through"/transparent - whereas previously, XPutImage showed black pixels at those locations.
But when the OSD window is shown in full, the leftovers make it seem as if the green pixels were drawn on top of the mouse cursor copy pixels. Why is this so?
Apparently, I didn't remember correctly what order the pixels were drawn in; when that demo capture was taken, the mouse cursor pixels were indeed copied first, and then the green pixels were copied on top. (Which changes the question - how come the mouse cursor was visible in that capture at all?! But now that the overall problem is solved, I'm not that curious :) )
Otherwise, a few more notes on gist: xosd_track_cursor.c (a31e9dff5): since X11 has a client/server architecture, the user program can only queue requests to the server, and thus all of the drawing calls are asynchronous/non-blocking. So when we run, say, XFillRectangle and it returns, it does not mean that the drawing of pixels has finished - just that the request has been placed in a queue that ends up being sent to the server. Furthermore, in spite of calls like XFlush and XSync, there is never a guarantee that we can wait for a finished drawing operation; and there is no guarantee either that the server will honor any given request.
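As a tiny illustration of what these calls do and don't promise (a hypothetical helper, not taken from the gist):

#include <X11/Xlib.h>

/* Hypothetical helper: issue a draw request and push it out to the server. */
static void draw_and_sync(Display *dpy, Window win, GC gc)
{
    XFillRectangle(dpy, win, gc, 0, 0, 60, 60); /* only queues the request */
    XFlush(dpy);       /* sends the queued requests to the server, does not wait */
    XSync(dpy, False); /* additionally waits until the server has processed them,
                          which still says nothing about when pixels hit the screen */
}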
However, the less you try to do, the greater the probability that the X server will honor the requests. So this version of the code actually makes a smallish window, 60x60 pixels, then sets it up so that it is dragged (centrally aligned) by the motion of the mouse pointer. Then, the (main) mouse pointer is simply copied into this window at the same relative location.
Finally, there is a primitive attempt at a mouse trail, by rendering two "ghost" copies of the mouse pointer and having them displaced by a history of mouse motion delta vectors - the effect, as visible in the gif, is not really amazing, but at least it's there as a "proof of concept" of sorts. Also, the window is set up at start as "click-through" using XShapeCombineRectangles - meaning the OSD window doesn't pick up/handle any mouse events (clicks) directly on it; instead everything is automatically passed to the windows below it, so the interaction remains the same as if the program was not running at all.
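The click-through part boils down to giving the window an empty input shape; roughly like this sketch (assuming the XShape extension is available; link with -lXext):

#include <X11/Xlib.h>
#include <X11/extensions/shape.h>

/* Give the window an empty input region: all clicks pass through to
   whatever is underneath, while drawing output is unaffected. */
static void make_click_through(Display *dpy, Window win)
{
    XShapeCombineRectangles(dpy, win, ShapeInput, 0, 0,
                            NULL, 0, ShapeSet, Unsorted);
}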
(Note that to get the behavior of gist: xosd_track_cursor.c (a31e9dff5) shown in the gif, you should look up the defines DEBUGPRINT and MOUSE_TRAIL, and have them uncommented when you build)

Why doesn't this BitBlt example work anymore?

I'm currently getting back to some Windows Programming using Petzold's book (5th edition).
I compiled the following example using BitBlt and it doesn't work as it is supposed to.
It should copy the window's icon, of size (cxSource, cySource), and replicate it across the whole window surface.
What actually happens, using Windows 7, is that the bitmap below the window gets sourced and copied into the drawing surface, i.e. hdcClient.
I don't understand why it behaves like this, given that the DC passed to BitBlt is hdcWindow, which refers to a device context obtained via GetWindowDC(hwnd) for the current application's window.
I first thought it was because the transparency mode is enabled by default, but deactivating it doesn't change anything. BitBlt seems to always take the surface below the application window!
I don't get it! :)
Does anyone know why it works that way and how to fix it?
Making screenshots with BitBlt() did not exactly get any easier since the addition of the DWM (Desktop Window Manager, aka Aero). Petzold's sample code suffers from a subtle timing issue, it is making the screenshot too soon. It does so while Aero is still busy animating the frame, fading it into view. So you see what is behind the window, possibly already partly faded depending on how quickly the first WM_PAINT message is generated.
You can easily fix it by disabling the effect:
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")
And after the CreateWindow() call:
BOOL disabled = TRUE;
DwmSetWindowAttribute(hwnd, DWMWA_TRANSITIONS_FORCEDISABLED, &disabled, sizeof(disabled));
Another tricky detail is that the first BitBlt matters: the DWM returns a cached copy afterwards that is not correctly invalidated by the animation.
This gets grittier when you need a screenshot of a window that belongs to another process. But that was already an issue before Aero; you had to wait long enough to ensure that the window was fully painted. Notable perhaps is the performance of BitBlt(): it gets bogged down noticeably by having to do the job of composing the final image from the window back-buffers. Lots of questions about that at SO, without happy answers.
It is not supposed to copy the window's icon; it is supposed to copy the part of the window's titlebar where the icon is located.
There are some issues with this (now 20-year-old) code:
GetSystemMetrics values cannot be used for window-related dimensions anymore, since GetSystemMetrics returns the classic sizes, not the Visual Style sizes.
Depending on the Windows version, the DWM might define the window size as something larger than your window (where it draws the window shadow and other effects); see the sketch after this list.
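A sketch of how to compare the classic window rectangle with the DWM's idea of the window bounds, using the documented DWMWA_EXTENDED_FRAME_BOUNDS attribute (the helper name here is only illustrative):

#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

/* Illustrative helper: compare the classic window rectangle with the
   rectangle the DWM actually uses for the visible frame. */
void CompareWindowBounds(HWND hwnd)
{
    RECT classic = {0}, extended = {0};
    GetWindowRect(hwnd, &classic);
    DwmGetWindowAttribute(hwnd, DWMWA_EXTENDED_FRAME_BOUNDS,
                          &extended, sizeof(extended));
    /* On DWM-composited systems the two rectangles typically differ by the
       invisible borders / effects area, so code that assumes the classic
       metrics will be off by a few pixels. */
}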
Your example works OK on XP:
(There is a small problem because the titlebar is not square (unlike Windows 98/2000, which this example was designed for), so you see an issue in the top left where it is just white. I also modified the example slightly so it varies the HDC source location.)
On a modern version of Windows, it seems like the DWM or something is not able to properly emulate a simple window DC, and parts of the shadow/border/effects area are part of the DC.
I don't know how to fix this, but the example is pretty useless anyway; if you want to draw the window icon, you should draw the HICON with DrawIconEx. If you want to draw custom non-client-area stuff, then you need to find more recent examples, not something that only supports the classic theme.

WPF performance issue when resizing the window with lots of controls

I have a WPF window that contains a fancy image with roughly 200 controls (derived from buttons), all of which use one of my 5 templates (paths, shadow effects, etc). Agreed, it is a heavy window to draw. I can live with that.
My problem comes from resizing the window. Maximize/restore takes about 1-2 seconds, but manually dragging the bottom-left corner causes the system to hang for about 5-10 seconds. During that delay, the window is black and contains partial leftovers until the final result is shown. It looks amateurish, and that I can't live with.
Remote connection: using a remote account, I found that the window resize always takes 1-2 seconds, but doesn't draw the "intermediate" stages while I'm dragging the window borders. The result is as snappy as I would expect.
My conclusion is this: it's the redraws during the resize that are the bottleneck.
The inevitable question is this: how can I prevent redrawing the window until the resize is finished?
Thanks in advance for any ideas...
#Seb: I'm beginning to think WPF is not designed for interfaces that go beyond 2-3 controls at a time
Visual Studio 2010 and Expression Blend should be good counterexamples. Though Visual Studio sometimes freezes, the bottleneck is definitely not in the WPF rendering.
#Seb: The inevitable question is this: how can I prevent redrawing the window until the resize is finished?
Simply set the window's content visibility to Visibility.Collapsed before the resize/maximize and make it visible afterwards. Though I think you asked the wrong question. Here is the right one:
How to make my controls measure/arrange extremely fast?
And to answer it, you should take a look at your code. Maybe you make intensive use of dependency properties in the measure/arrange algorithm? Or maybe you picked the wrong panels (e.g. Grid is slower than Canvas)? Or maybe... I'll stop guessing here :).
By the way, it's always better to launch your app under a profiler and prove where the bottleneck is, rather than assuming where it might be. Check Eqatec Profiler; it's free yet powerful enough. VS 2010 also offers nice profiling features, though it's far from free. And you may want to check the WPF Performance Suite.
Hope this helps.
Let me know how this works... I am assuming that your root visual element is stretching horizontally and vertically to fill your window with auto height/width. Get rid of the auto height/width; on app start-up, set the dimensions of the root element. FrameworkElements have a SizeChanged event. Register for this on your Application.Current.MainWindow (may be a typo, that was from memory). Whenever this event fires, start a timer with a small interval. If you get another resize while the timer is running, ignore it and reset the timer. Once the timer fires, you now know the new size the user desires and that they have (at least for a short period) stopped resizing the window.
Hope that helps!
From Ragepotato's answer and your comment about needing to see roughly what the interface would look like while resizing: as long as you don't have your objects dynamically re-locating themselves (like a WrapPanel), you could take a screenshot of the window contents and fill your frame with it.
Set it to stretch both height and width, and you'd get a (slightly fuzzy) idea of what a particular size would be. It wouldn't be live while resizing, but for those few seconds that probably wouldn't matter.

Gtk: get usable area of each monitor (excluding panels)

Using gdk_screen_get_monitor_geometry, I can get the total area in pixels and the relative position of each monitor, even when there are two or more used as a single screen.
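For reference, that approach looks roughly like this (illustrative sketch only, GTK+ 2/3 API; deprecated in 3.22+ in favour of GdkMonitor):

#include <gtk/gtk.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GdkScreen *screen = gdk_screen_get_default();
    gint n = gdk_screen_get_n_monitors(screen);

    for (gint i = 0; i < n; i++) {
        GdkRectangle geom;
        /* Full geometry of each monitor, panels included. */
        gdk_screen_get_monitor_geometry(screen, i, &geom);
        g_print("monitor %d: %dx%d at (%d,%d)\n",
                i, geom.width, geom.height, geom.x, geom.y);
    }
    return 0;
}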
However, I want to get the usable area (that is, excluding panels) of each monitor. The only thing I have found is _NET_WORKAREA, but that is one giant area stretching across all monitors. Depending on the resolution and arrangement, there may be panels inside this area.
How can I get the actual usable area of each monitor? Ideally, using only Gtk/Gdk, nothing X11-specific.
The following approach is a bit convoluted, but it is what I'd use. It should be robust even when there is complex interaction between the window manager and GTK+ when a window is mapped -- for example, when some of the panels are automatically hidden.
The basic idea is to create a transparent decorationless maximized window for each screen, obtain its geometry (size and position) when it gets mapped (for example, using a map-event callback), and immediately destroy them. That gets you the usable area within each screen. You can then use your existing gdk_screen_get_monitor_geometry() approach to determine how the usable area is split between monitors, if any.
In detail:
Use gdk_display_get_default() to get the default display, then gdk_display_get_n_screens() to find out how many screens it has.
Create a new window for each screen using gtk_window_new(), moving the windows to their respective screens using gtk_window_set_screen(). Undecorate the windows using gtk_window_set_decorated(,FALSE), maximize them using gtk_window_maximize(), and make them transparent using gtk_window_set_opacity(,0.0). Connect the map-event signal to a callback handler (using g_signal_connect()). Show the window using gtk_widget_show().
The signal handler needs to call gtk_window_get_position() and/or gtk_window_get_size() to get the position and/or size of the newly-mapped window, and then destroy the window using gtk_widget_destroy().
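Put together, these steps look roughly like the following sketch (GTK+ 2-era API as used above; it probes only the default screen for brevity, the names are illustrative, and error handling is omitted):

#include <gtk/gtk.h>

/* Callback: when the maximized, undecorated, transparent window is mapped,
   its position/size is the usable area of its screen; print it and quit. */
static gboolean on_map_event(GtkWidget *widget, GdkEvent *event, gpointer data)
{
    gint x, y, w, h;
    gtk_window_get_position(GTK_WINDOW(widget), &x, &y);
    gtk_window_get_size(GTK_WINDOW(widget), &w, &h);
    g_print("usable area: %dx%d at (%d,%d)\n", w, h, x, y);
    gtk_widget_destroy(widget);
    gtk_main_quit();
    return FALSE;
}

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GdkDisplay *display = gdk_display_get_default();
    /* For simplicity this probes only the default screen; loop over
       gdk_display_get_n_screens() to cover them all. */
    GdkScreen *screen = gdk_display_get_default_screen(display);

    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_screen(GTK_WINDOW(win), screen);
    gtk_window_set_decorated(GTK_WINDOW(win), FALSE);
    gtk_window_set_opacity(GTK_WINDOW(win), 0.0);
    gtk_window_maximize(GTK_WINDOW(win));
    g_signal_connect(win, "map-event", G_CALLBACK(on_map_event), NULL);
    gtk_widget_show(win);

    gtk_main();
    return 0;
}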
Note that in practice you only need one window at a time, so I would personally use a simple loop. I suspect that, due to window manager oddities/bugs, it is much more robust to create a new window for each screen rather than to just move the same window between screens. It turns out it is easier, too, as you can use a single simple callback function to obtain the usable area for each screen.
Like I said, this is quite convoluted. On the other hand, a standard application should not care about the screen sizes; it should simply do what the user or window manager asks. Because of that, I would not be surprised if there are no better facilities to find out this information. Screen size may change at any point, for example if the user rotates their display, or changes the display resolution.
In the end I ended up using Xlib directly; various "tricks" like the one suggested above eventually failed in the long run, often with odd corner cases, and never followed the KISS principle.
The solution I used is in the X-Tile code base.

Using DwmExtendFrameIntoClientArea causes window to oddly resize

I have an application that uses DwmExtendFrameIntoClientArea to draw a glass effect area at the top of my .NET Form. A strange side effect is that when you maximize and restore the window it grows taller. So if you keep maximizing/restoring, it just keeps getting taller! Removing the call to DwmExtendFrameIntoClientArea (and with it that nice effect) makes it stop going wrong.
Any ideas?
