Windows Runtime equivalent to WriteableBitmap's AddDirtyRect - wpf

I'm building an app where I would like to redraw the image on screen around a user's finger touch. System.Windows.Media.Imaging.WriteableBitmap has a method AddDirtyRect(Int32Rect dirtyRect) to indicate changes my code has made to the back buffer, so that the whole image needn't be redrawn. Its Windows Runtime equivalent, the Windows.UI.Xaml.Media.Imaging.WriteableBitmap class, has no such method.
Can I tell the system which part of the screen to redraw as the result of code changing a Windows.UI.Xaml.Media.Imaging.WriteableBitmap?

No, this API isn't there. You could use a secondary patch bitmap to update only a portion of the rendered output. If you need more control over what gets pushed out to the buffers, you'd need to use a SwapChainPanel and DirectX.


How to make sure Android keyboard isn't scrolled to selected line in multi-line TextField?

This question is related to this other one (Android). A sample test case was also provided here.
Basically, I can get past the "glitch" of losing the bottom of the screen under the keyboard, which sometimes occurs when a single-line TextField is focused, by setting the TextField's bottom padding and making it a layer.
But when the same glitch occurs with a multi-line TextField, each time the cursor moves to a different line the keyboard follows the current line and hides everything underneath. I've been looking at TextArea and Component, but I can't see anything there that stops this behavior. My "trick" of making the TextField a layer with bottom padding doesn't work in multi-line mode. I'm out of options; could this be enabled, or alternatively is there some magic method somewhere I am missing?
Also, I've checked that calling getComponentForm().getInvisibleAreaUnderVKB() returns 0 when the glitch occurs.
I think you need to re-open the applicable issue. This code is very platform-specific, as the virtual keyboard behavior is handled 100% within the Android port.
Android doesn't implement getInvisibleAreaUnderVKB(), since the VKB doesn't work that way on Android. Instead, Android resizes the screen to provide the additional space, and will generally try to keep the area where your cursor is in view. That's the chief goal.
When the screen is empty, that might look problematic, but when your screen is full of data we'd rather see the data than have the full text component in view. Unfortunately, the native editing code has no way to distinguish between the two cases. We might be able to come up with a workaround, but with these things there are often issues/regressions.
The solution to prevent this is to call the Form's setFormBottomPaddingEditingMode(true). Easy fix! 👍

Precision timing of GDK3/GTK3 window update

I have an application written in C using GTK (although the language is probably unimportant for this question).
This application has a fullscreen gtk_window with a single gtk_drawing_area. For the drawing area, I have registered a tick callback via gtk_widget_add_tick_callback which just calls gtk_widget_queue_draw every tick. Inside the drawing area's draw callback, I change the color of the entire window at regular intervals (e.g., from black to white at 1 Hz).
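A minimal sketch of this setup (simplified, not my actual code; the callback names are illustrative, and it assumes GTK+ 3):

#include <gtk/gtk.h>

/* Tick callback: request a redraw on every frame. */
static gboolean on_tick(GtkWidget *widget, GdkFrameClock *clock, gpointer data)
{
    gtk_widget_queue_draw(widget);
    return G_SOURCE_CONTINUE;
}

/* Draw callback: alternate the window between black and white at 1 Hz. */
static gboolean on_draw(GtkWidget *widget, cairo_t *cr, gpointer data)
{
    gint64 t = g_get_monotonic_time();  /* microseconds, CLOCK_MONOTONIC */
    double v = ((t / G_USEC_PER_SEC) % 2) ? 1.0 : 0.0;
    cairo_set_source_rgb(cr, v, v, v);
    cairo_paint(cr);
    return FALSE;
}

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);
    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *area = gtk_drawing_area_new();
    gtk_container_add(GTK_CONTAINER(win), area);
    g_signal_connect(area, "draw", G_CALLBACK(on_draw), NULL);
    g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);
    gtk_widget_add_tick_callback(area, on_tick, NULL, NULL);
    gtk_window_fullscreen(GTK_WINDOW(win));
    gtk_widget_show_all(win);
    gtk_main();
    return 0;
}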
Say that in this call to the draw callback I want to change the window from black to white. I would like to know the precise time (down to the nearest ms) that the change is actually drawn on the screen (ideally in the same units as CLOCK_MONOTONIC). I don't think this is the same thing as the GdkFrameClock available in the tick callback, which, as I understand it, is about the time of the frame, not the time when the frame is actually displayed on the screen.
If I just measure the CLOCK_MONOTONIC time in the draw callback, and then use a photodiode attached to an A2D converter to measure when the change actually appears, the change on the display is understandably delayed by a number of refresh intervals (in my case, 3 screen refreshes).
Just as a summary: if I am in a GTK widget draw callback, is there any way to know the time when the display will actually be shown on the monitor in the units of CLOCK_MONOTONIC? Or alternatively, is there a way that I can block a separate thread until a specific redraw that I care about is actually displayed on the screen (a function I can write like wait_for_screen_flip())?
Update: Ideally, the same solution would work for any Linux compositor (X11 or Wayland), which is why I am hoping for a GTK/GDK solution, where the compositor is abstracted away.
Similar to Uli's answer about the Present extension and PresentCompleteNotify for X11, Wayland has a protocol called wp_presentation_feedback:
https://cgit.freedesktop.org/wayland/wayland-protocols/tree/stable/presentation-time/presentation-time.xml
This protocol allows the Wayland compositor to inform clients when their content was actually displayed (turned to light). It is independent of the actual buffer mechanism used (EGL/SHM/etc.). To use it, you call wp_presentation_get_feedback before wl_surface_commit; when the commit has been presented, a presented event is sent to the client from the new wp_presentation_feedback object, or a discarded event if the content was never shown.
Presentation feedback is currently implemented in Weston; it is not yet implemented in Mutter, and I don't believe it's implemented in KWin either. GTK+ plans to support it when it becomes available in Mutter, but I don't have any great insight as to how it would be exposed through the GTK+ API.
That being said, if you can get access to the Wayland display, it's possible that you could use the interface directly yourself.
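If you do use it directly, a rough sketch of the client side might look like this (assuming headers generated by wayland-scanner from presentation-time.xml and an already-bound wp_presentation global; the generated C binding for the feedback request is wp_presentation_feedback()):

#include <stdint.h>
#include <stdio.h>
#include <wayland-client.h>
#include "presentation-time-client-protocol.h"

static void feedback_sync_output(void *data, struct wp_presentation_feedback *fb,
                                 struct wl_output *output)
{
    /* Reports which output the presentation was synchronized to. */
}

static void feedback_presented(void *data, struct wp_presentation_feedback *fb,
                               uint32_t tv_sec_hi, uint32_t tv_sec_lo,
                               uint32_t tv_nsec, uint32_t refresh,
                               uint32_t seq_hi, uint32_t seq_lo, uint32_t flags)
{
    /* The timestamp's clock is announced by wp_presentation's clock_id event;
     * on many systems it is CLOCK_MONOTONIC, but do not assume that. */
    uint64_t sec = ((uint64_t)tv_sec_hi << 32) | tv_sec_lo;
    printf("presented at %llu.%09u\n", (unsigned long long)sec, tv_nsec);
    wp_presentation_feedback_destroy(fb);
}

static void feedback_discarded(void *data, struct wp_presentation_feedback *fb)
{
    wp_presentation_feedback_destroy(fb);  /* the content was never shown */
}

static const struct wp_presentation_feedback_listener feedback_listener = {
    feedback_sync_output,
    feedback_presented,
    feedback_discarded,
};

/* Call in place of a plain wl_surface_commit() for the commit you want to track. */
void commit_with_feedback(struct wp_presentation *presentation,
                          struct wl_surface *surface)
{
    struct wp_presentation_feedback *fb =
        wp_presentation_feedback(presentation, surface);
    wp_presentation_feedback_add_listener(fb, &feedback_listener, NULL);
    wl_surface_commit(surface);
}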
I just came across https://developer.gnome.org/gdk3/stable/gdk3-GdkFrameTimings.html#gdk-frame-timings-get-presentation-time, which seems to do just what you want and is part of GDK. I do not know how to use it, nor have I seen an example of it, but https://developer.gnome.org/gdk3/stable/gdk3-GdkFrameTimings.html#gdk3-GdkFrameTimings.description says:
The information in GdkFrameTimings is useful for precise synchronization of video with the event or audio streams, and for measuring quality metrics for the application’s display, such as latency and jitter.
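Untested, but based on the documentation, usage from inside a tick callback might look roughly like this (checking the timings history for frames whose presentation time is already known):

#include <gtk/gtk.h>

static gboolean on_tick(GtkWidget *widget, GdkFrameClock *clock, gpointer data)
{
    /* Walk the frames for which the frame clock still keeps timings. */
    gint64 start = gdk_frame_clock_get_history_start(clock);
    gint64 end = gdk_frame_clock_get_frame_counter(clock);
    for (gint64 frame = start; frame <= end; frame++) {
        GdkFrameTimings *t = gdk_frame_clock_get_timings(clock, frame);
        if (t == NULL || !gdk_frame_timings_get_complete(t))
            continue;
        /* Microseconds on the g_get_monotonic_time() scale; 0 if unknown. */
        gint64 presented = gdk_frame_timings_get_presentation_time(t);
        if (presented != 0)
            g_print("frame %" G_GINT64_FORMAT " presented at %" G_GINT64_FORMAT "\n",
                    frame, presented);
    }
    gtk_widget_queue_draw(widget);
    return G_SOURCE_CONTINUE;
}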
Take a look at https://cgit.freedesktop.org/xorg/proto/presentproto/tree/presentproto.txt. Specifically, you want PresentCompleteNotify events. Note that these can only tell you after the fact when presentation actually happened, so (I think) you will not know ahead of time when it will be (but you could perhaps guess based on recent notifies?).
Note that this:
- is a relatively new X11 extension, so it might not actually be supported everywhere
- depends on the driver used (and likely lots of other factors) for the quality of its data
- cannot be used from GTK, since it requires a different way of displaying to the screen (you draw to a Pixmap and then use PresentPixmap to make it visible and ask for a notify)
Also note that this extension provides lots of other things. You can for example say "please display at time ". Just read the protocol specification from start to end. :-)
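As a rough sketch of the event-handling side using the XCB bindings for the Present extension (the connection, window, and PresentPixmap-based drawing are assumed to be set up elsewhere):

#include <stdio.h>
#include <stdlib.h>
#include <xcb/xcb.h>
#include <xcb/present.h>

void watch_present_complete(xcb_connection_t *conn, xcb_window_t win)
{
    const xcb_query_extension_reply_t *ext =
        xcb_get_extension_data(conn, &xcb_present_id);
    if (ext == NULL || !ext->present)
        return;  /* Present extension not supported here */

    /* Ask for CompleteNotify events for this window. */
    xcb_present_event_t eid = xcb_generate_id(conn);
    xcb_present_select_input(conn, eid, win,
                             XCB_PRESENT_EVENT_MASK_COMPLETE_NOTIFY);
    xcb_flush(conn);

    xcb_generic_event_t *ev;
    while ((ev = xcb_wait_for_event(conn)) != NULL) {
        if ((ev->response_type & 0x7f) == XCB_GE_GENERIC) {
            xcb_ge_generic_event_t *ge = (xcb_ge_generic_event_t *)ev;
            if (ge->extension == ext->major_opcode &&
                ge->event_type == XCB_PRESENT_COMPLETE_NOTIFY) {
                xcb_present_complete_notify_event_t *cn =
                    (xcb_present_complete_notify_event_t *)ev;
                /* ust is in microseconds; which clock it tracks depends on
                 * the driver, but it is typically CLOCK_MONOTONIC-based. */
                printf("presented: ust=%llu msc=%llu\n",
                       (unsigned long long)cn->ust,
                       (unsigned long long)cn->msc);
            }
        }
        free(ev);
    }
}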

Why doesn't this BitBlt example work anymore?

I'm currently getting back to some Windows Programming using Petzold's book (5th edition).
I compiled the following example using BitBlt and it doesn't work as it is supposed to.
It should copy the window's icon, of size (CxSource, CySource), and replicate it across the whole window surface.
What happens in reality, on Windows 7, is that the bitmap below the window gets used as the source and copied into the drawing surface, i.e., hdcClient.
I don't understand why it behaves like this, given that the DC passed to BitBlt is clearly hdcWindow, which refers to a device context obtained via GetWindowDC(hwnd) for the current application's window.
I first thought it was because the transparency mode is enabled by default, but deactivating it doesn't change anything. BitBlt seems to always take the surface below the application window!
I don't get it! :)
Anyone knows why it works that way and how to fix it?
Making screenshots with BitBlt() did not exactly get easier with the addition of the DWM (Desktop Window Manager, aka Aero). Petzold's sample code suffers from a subtle timing issue: it takes the screenshot too soon, while Aero is still busy animating the frame, fading it into view. So you see what is behind the window, possibly already partly faded, depending on how quickly the first WM_PAINT message is generated.
You can easily fix it by disabling the effect:
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")
And after the CreateWindow() call:
BOOL disabled = TRUE;
/* Stop DWM from animating the window into view, so the first
   screenshot sees the actual window content. */
DwmSetWindowAttribute(hwnd, DWMWA_TRANSITIONS_FORCEDISABLED, &disabled, sizeof(disabled));
Another tricky detail is that only the first BitBlt matters; the DWM returns a cached copy afterwards that is not correctly invalidated by the animation.
This gets grittier when you need a screenshot of a window that belongs to another process. But that was already an issue before Aero; you had to wait long enough to ensure that the window was fully painted. Also notable is the performance of BitBlt(): it gets bogged down noticeably by having to do the job of composing the final image from the window back-buffers. There are lots of questions about that on SO, without happy answers.
It is not supposed to copy the window's icon; it is supposed to copy the part of the window's title bar where the icon is located.
There are some issues with this (now 20-year-old) code:
GetSystemMetrics values cannot be used for window-related dimensions anymore, because GetSystemMetrics returns the classic sizes, not the Visual Styles sizes.
Depending on the Windows version, the DWM might define the window size as something larger than your window (where it draws the window shadow and other effects).
Your example works OK on XP:
(There is a small problem because the title bar is not square (unlike on Windows 98/2000, which this example was designed for), so you see an issue in the top left where it is just white. I also modified the example slightly so that it varies the HDC source location.)
On a modern version of Windows, it seems like the DWM or something else is not able to properly emulate a simple window DC, and parts of the shadow/border/effects area are part of the DC:
I don't know how to fix this, but the example is pretty useless anyway: if you want to draw the window icon, you should draw the HICON with DrawIconEx. If you want to draw custom non-client-area content, you need to find more recent examples, not something that only supports the classic theme.
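For example, a hedged sketch of tiling the window's own icon with DrawIconEx (PaintTiledIcon is my name; call it from the WM_PAINT handler):

#include <windows.h>

void PaintTiledIcon(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    /* Ask the window for its small icon, falling back to the class icon. */
    HICON hIcon = (HICON)SendMessage(hwnd, WM_GETICON, ICON_SMALL, 0);
    if (hIcon == NULL)
        hIcon = (HICON)GetClassLongPtr(hwnd, GCLP_HICONSM);

    if (hIcon != NULL) {
        int cx = GetSystemMetrics(SM_CXSMICON);
        int cy = GetSystemMetrics(SM_CYSMICON);
        RECT rc;
        GetClientRect(hwnd, &rc);
        /* Replicate the icon across the whole client area. */
        for (int y = 0; y < rc.bottom; y += cy)
            for (int x = 0; x < rc.right; x += cx)
                DrawIconEx(hdc, x, y, hIcon, cx, cy, 0, NULL, DI_NORMAL);
    }
    EndPaint(hwnd, &ps);
}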

How can I copy contents of one window to another using Xlib?

I want to copy the contents of an existing Window to my own Window using Xlib. I have tried XCopyArea, and it refuses to copy between two Windows. I have also tried XGetImage and XPutImage, and they fail as well.
What's the best way to copy the graphics contents of a Window to my own?
Part II:
Based on the information below, I was able to get XCopyArea and XGetImage to work. The reason it wasn't working was a difference in depth between the source and destination Windows. I was surprised to learn that different Windows can have different depths on my desktop.
But I still have limited success with XCopyArea. If I start copying from the top of certain Windows, like Google Chrome, it doesn't copy the full area, just the title bar. XGetImage works fine in those cases. Any clue as to why XCopyArea won't copy beyond the title bar of some Windows?
XCopyArea should be fine.
Note that this will only copy into the foreground of the destination - maybe it is being drawn then erased?
Without code, I can only speculate:
If it is failing, have you checked that the windows definitely have the same root and depth?
Also make sure you review the X Window coordinate system. Maybe try copying so that the corner of your copy is in the centre of the destination, to see if you can get anything.
You normally want some way of handling Expose events in the destination window so you can do a refresh.
I'd recommend creating a Pixmap as an intermediate. Both Pixmap and Window are Drawable.
Use XCopyArea to copy into the Pixmap.
Then use XSetWindowBackgroundPixmap to actually render the image. Setting the background means you can then ignore any need for handling Expose events to redraw the image.
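A sketch of that approach (the function name and parameters are mine; it assumes both windows share the same root and depth, as discussed above):

#include <X11/Xlib.h>

void copy_window_contents(Display *dpy, Window src, Window dst,
                          unsigned int width, unsigned int height,
                          unsigned int depth)
{
    Pixmap pix = XCreatePixmap(dpy, src, width, height, depth);
    GC gc = XCreateGC(dpy, pix, 0, NULL);

    /* Copy the source window's contents into the intermediate pixmap. */
    XCopyArea(dpy, src, pix, gc, 0, 0, width, height, 0, 0);

    /* Setting it as the background means the server repaints it for us,
     * so there is no need to handle Expose events ourselves. */
    XSetWindowBackgroundPixmap(dpy, dst, pix);
    XClearWindow(dpy, dst);  /* force a repaint with the new background */

    XFreeGC(dpy, gc);
    XFreePixmap(dpy, pix);  /* the server keeps the background reference */
    XFlush(dpy);
}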

Gtk: get usable area of each monitor (excluding panels)

Using gdk_screen_get_monitor_geometry, I can get the total area in pixels and the relative position of each monitor, even when there are two or more used as a single screen.
However, I want to get the usable area (that is, excluding panels) of each monitor. The only thing I have found is _NET_WORKAREA, but that is one giant area stretching across all monitors. Depending on the resolution and arrangement, there may be panels inside this area.
How can I get the actual usable area of each monitor? Ideally, using only Gtk/Gdk, nothing X11-specific.
The following approach is a bit convoluted, but it is what I'd use. It should be robust even when there is complex interaction between the window manager and GTK+ when a window is mapped -- for example, when some of the panels are automatically hidden.
The basic idea is to create a transparent, decorationless, maximized window on each screen, obtain its geometry (size and position) when it gets mapped (for example, using a map-event callback), and immediately destroy it. That gets you the usable area within each screen. You can then use your existing gdk_screen_get_monitor_geometry() approach to determine how the usable area is split between monitors, if any.
In detail:
Use gdk_display_get_default() to get the default display, then gdk_display_get_n_screens() to find out how many screens it has.
Create a new window for each screen using gtk_window_new(), moving the windows to their respective screens using gtk_window_set_screen(). Undecorate the windows using gtk_window_set_decorated(,FALSE), maximize them using gtk_window_maximize(), and make them transparent using gtk_window_set_opacity(,0.0). Connect the map-event signal to a callback handler (using g_signal_connect()). Show the window using gtk_widget_show(). (Steps 2 and 3 are sketched in code after this list.)
The signal handler needs to call gtk_window_get_position() and/or gtk_window_get_size() to get the position and/or size of the newly-mapped window, and then destroy the window using gtk_widget_destroy().
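A sketch of steps 2 and 3 (all names are mine):

#include <gtk/gtk.h>

/* Step 3: when the probe window is mapped, record its geometry and destroy it. */
static gboolean on_map_event(GtkWidget *window, GdkEvent *event, gpointer data)
{
    gint x, y, width, height;
    gtk_window_get_position(GTK_WINDOW(window), &x, &y);
    gtk_window_get_size(GTK_WINDOW(window), &width, &height);
    g_print("usable area: %dx%d at (%d,%d)\n", width, height, x, y);
    gtk_widget_destroy(window);
    return FALSE;
}

/* Step 2: create a transparent, undecorated, maximized probe window on a screen. */
static void probe_screen(GdkScreen *screen)
{
    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_screen(GTK_WINDOW(win), screen);
    gtk_window_set_decorated(GTK_WINDOW(win), FALSE);
    gtk_window_maximize(GTK_WINDOW(win));
    gtk_window_set_opacity(GTK_WINDOW(win), 0.0);
    g_signal_connect(win, "map-event", G_CALLBACK(on_map_event), NULL);
    gtk_widget_show(win);
}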
Note that in practice a single window, moved from screen to screen, would suffice, and I would personally use a simple loop either way. But I suspect that, due to window manager oddities/bugs, it is much more robust to create a new window for each screen rather than move the same window between screens. It turns out to be easier, too, as you can use a single simple callback function to obtain the usable area for each screen.
Like I said, this is quite convoluted. On the other hand, a standard application should not care about the screen sizes; it should simply do what the user or window manager asks. Because of that, I would not be surprised if there are no better facilities to find out this information. Screen size may change at any point, for example if the user rotates their display, or changes the display resolution.
In the end, I used Xlib directly; various "tricks" like the one suggested above eventually failed in the long run, often with odd corner cases, and never followed the KISS principle.
The solution I used is in the X-Tile code base.
