We have an application with a window that has a horizontal toolbar at the top. The Windows-level handle we pass to Vulkan to create the surface ends up including the area behind the toolbar, i.e. Vulkan is completely unaware of the toolbar and the surface includes the space "behind" it.
My question is: can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so want to avoid creating a framebuffer, depth buffer, etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport which, e.g., has an origin offset and a compensated height; however, to my understanding the framebuffer still contains information for pixels covering the full size of the surface (e.g. 800x600 for an 800x600 client-area window) even if I am only rendering to a portion of that window. The framebuffer then gets "mapped", and therefore squished, to the viewport area.
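For concreteness, the viewport approach I mean would look something like this - a minimal sketch with made-up numbers (an 800x600 surface, a 32-pixel toolbar), where the command buffer is assumed to be in recording state:

#include <vulkan/vulkan.h>

/* Sketch only: restrict rendering to the area below a hypothetical
 * 32 px toolbar on an 800x600 surface; the framebuffer is still
 * allocated at the full 800x600. */
void set_client_area_viewport(VkCommandBuffer cmd)
{
    VkViewport vp = {
        .x = 0.0f, .y = 32.0f,              /* origin offset below the toolbar */
        .width = 800.0f, .height = 568.0f,  /* height compensated for the toolbar */
        .minDepth = 0.0f, .maxDepth = 1.0f,
    };
    VkRect2D scissor = { .offset = {0, 32}, .extent = {800, 568} };
    vkCmdSetViewport(cmd, 0, 1, &vp);
    vkCmdSetScissor(cmd, 0, 1, &scissor);   /* actual clipping needs the scissor */
}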
All of this has sort of left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area in the surface, is that not highly inefficient if your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make sense to instead section off portions of your application using e.g. different Windows HWNDs first, and then create a separate surface for each of those?
How can I avoid rendering to an area bigger than necessary?
The way this gets handled in pretty much every application is that the client area of a window (i.e. the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars, etc.).
It is this client window which should have a Vulkan surface created for it.
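For instance, a minimal sketch of that arrangement - the "ClientArea" window class is assumed to be registered elsewhere, and the fixed 32-pixel toolbar height is made up:

#include <windows.h>
#include <vulkan/vulkan.h>
#include <vulkan/vulkan_win32.h>

#define TOOLBAR_HEIGHT 32  /* hypothetical fixed toolbar height */

/* Create a child window covering only the area below the toolbar and
 * hand *its* HWND to Vulkan, so the surface excludes the toolbar. */
VkSurfaceKHR create_client_surface(VkInstance instance, HINSTANCE hinst, HWND frame)
{
    RECT rc;
    GetClientRect(frame, &rc);

    HWND client = CreateWindowEx(0, "ClientArea", NULL,
                                 WS_CHILD | WS_VISIBLE,
                                 0, TOOLBAR_HEIGHT,
                                 rc.right, rc.bottom - TOOLBAR_HEIGHT,
                                 frame, NULL, hinst, NULL);

    VkWin32SurfaceCreateInfoKHR info = {
        .sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR,
        .hinstance = hinst,
        .hwnd = client,  /* the child window, not the frame */
    };
    VkSurfaceKHR surface = VK_NULL_HANDLE;
    vkCreateWin32SurfaceKHR(instance, &info, NULL, &surface);
    return surface;
}

On the frame's WM_SIZE you would then MoveWindow the child to the new client rectangle and recreate the swapchain at the new extent.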
I am trying to make an application which will graphically repeat the mouse pointer, so I can ultimately make a mouse trail program for Ubuntu 18.04 - and it seems the way to do it is via X11/Xlib - although these days I can't even be sure, as my machine also reports Wayland:
$ loginctl | while IFS= read line; do echo "$line"; if [[ $line == *"tty"* ]]; then sessnum=$(echo "$line" | awk '{print $1;}'); echo sessnum: $sessnum\; $(loginctl show-session $sessnum -p Type); fi; done
SESSION UID USER SEAT TTY
c1 121 gdm seat0 tty1
sessnum: c1; Type=wayland
2 1000 administrator seat0 tty2
sessnum: 2; Type=x11
2 sessions listed.
Regardless, I managed to put together an unholy assemblage of:
https://keithp.com/blogs/Cursor_tracking/ - which sets up the program for capturing raw mouse events, so the mouse pointer position can be extracted (and a redraw triggered) whenever it changes
xosd.c (via https://github.com/AndreRenaud/XOSD) - I thought at first that On-Screen Display would have a special method to draw on top - but this just sets up a topmost window, child of the root, where all drawing happens; it also sets up an event thread and a timer thread
... plus a ton of other code snippets (mostly from SO), which sort of does what I want (even if I don't really fully understand all of the layers and compositing that go on in it). I posted this as a gist: xosd_track_cursor.c, since it's 700+ lines (but I can post it here if needed).
Here is how the application behaves (also see the full-res imgur .mp4 video):
Basically, at start, the "OSD" topmost window is set up, and it's quite a bit smaller than the desktop - which helps us see the window border decorations around it (ultimately, I'd make this window the same size as the desktop).
At start, the desktop pixels at the location of this window have seemingly been copied as the window background.
Once the mouse pointer enters the OSD window, a circle is drawn, which becomes the mask for the OSD window (which again can be seen via the window border decorations) - and this circular window follows the mouse. Then, inside it, I use XFillRectangle to draw a lime rectangle, and then XPutImage to draw the pixels captured from the latest mouse pointer (the video doesn't show it, but the copied cursor also changes when the normal one does, say from the left_ptr to the bottom_side or xterm cursor bitmaps).
So far so good - but these are the problems, and questions:
All of the draws - both the lime rectangle and the mouse pointer copy - remain on the OSD window, and are not cleared upon redraw (which is quite obvious when the mouse pointer leaves the OSD window, so there is no masking). How can I erase these previous draws each time a new state is rendered?
When I click on a window to change the focus, it is obvious (especially when the mouse pointer leaves the OSD window, so there is no masking) that the desktop "background" shown in the OSD window reflects the state from when the program started. How can I capture the current state of the desktop background (that is, behind the OSD window), so I can use it for clearing the OSD window in the previous step?
(I thought I could hide the OSD window, then capture the desktop at the same location with XGetImage, maybe (?) - and then show the window; but showing always sends an Expose event, which in turn runs the expose function that does the redraw, and so I get a bunch of recursive calls hogging the application.)
The mouse pointer copy is rendered with a black background - how can I make the drawing of mouse pointer copy transparent, where it is black now?
And, a sort of a bonus question (just curious here - obviously I'd rather not have the leftovers to begin with):
I first do XFillRectangle to draw a lime rectangle, then XPutImage to draw the pixels of the mouse pointer copy. I'd expect this to show the mouse cursor copy on top of the green pixels - and it is indeed so while the OSD window is masked with the circle. But when the OSD window is shown in full, the leftovers make it seem as if the green pixels were drawn on top of the mouse cursor copy pixels. Why is this so?
Well, I think I got somewhere - the result is in the same gist, just a different revision: gist: xosd_track_cursor.c (a31e9dff5); and it looks like this:
And so, to answer my questions:
How can I erase these previous draws each time a new state is rendered?
You cannot - not in the way the previous code was set up. It was set up as an override_redirect window, meaning it stays out of the management of any window manager. Furthermore, the default bit depth was 24, meaning that transparency was not supported - so to grab the desktop "behind" the window (to use as a "clear" background), we'd have had to hide and then show the window, which used to cause recursion due to the reaction to Expose events.
However, I saw in How to make an OpenGL rendering context with transparent background? that using glXCreateContext might help - and it did. It turns out, though, that it was not necessary: as soon as XMatchVisualInfo successfully returned a match for 32-bit depth for the OSD window (alpha transparency supported), it was possible to define a "fully transparent" color via XSetForeground as 0x00000000 (as far as I can see, that is 0xAARRGGBB format) - and use it to draw directly on the window with XFillRectangle, which manages to clear the entire OSD window transparently.
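Reconstructed as a minimal sketch (the function name and the exact choice of attributes are mine, simplified from the gist):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

/* Ask for a 32-bit TrueColor visual so the window supports per-pixel
 * alpha, then clear it by filling with a fully transparent ARGB color. */
Window create_argb_window(Display *dpy, int win_w, int win_h)
{
    XVisualInfo vinfo;
    /* Fails on servers without a composited 32-bit visual. */
    if (!XMatchVisualInfo(dpy, DefaultScreen(dpy), 32, TrueColor, &vinfo))
        return None;

    XSetWindowAttributes attr = {0};
    attr.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                    vinfo.visual, AllocNone);
    attr.border_pixel = 0;
    attr.background_pixel = 0;      /* 0x00000000 = fully transparent ARGB */
    attr.override_redirect = True;  /* keep the OSD out of WM management */

    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 0, 0,
                               win_w, win_h, 0, vinfo.depth, InputOutput,
                               vinfo.visual,
                               CWColormap | CWBorderPixel | CWBackPixel |
                               CWOverrideRedirect, &attr);

    /* "Clearing": draw a transparent rectangle over the whole window. */
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, 0x00000000);  /* 0xAARRGGBB, alpha = 0 */
    XFillRectangle(dpy, win, gc, 0, 0, win_w, win_h);
    XFreeGC(dpy, gc);
    return win;
}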
The mouse pointer copy is rendered with a black background - how can I make the drawing of mouse pointer copy transparent, where it is black now?
Turns out this also started working as soon as the window was created with XCreateWindow using the settings from the XMatchVisualInfo match for 32-bit depth. By that I mean that with the 32-bit visual, XPutImage rendered the transparent points in the cursor image as "see-through"/transparent - whereas previously, XPutImage showed black pixels at those locations.
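A minimal reconstruction of that cursor-copying step - assuming the XFixes extension is available, and that win, gc and visual come from the 32-bit window above (the gist's actual capture code comes from keithp's example):

#include <stdint.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/Xfixes.h>

/* Grab the current cursor image and paint it at (x, y) in the ARGB
 * window; with a 32-bit visual the alpha channel is honored. */
void draw_cursor_copy(Display *dpy, Window win, GC gc, Visual *visual,
                      int x, int y)
{
    XFixesCursorImage *cur = XFixesGetCursorImage(dpy);
    if (!cur) return;

    /* cur->pixels is unsigned long (usually 64-bit); repack to 32-bit ARGB. */
    int n = cur->width * cur->height;
    uint32_t *argb = malloc(n * sizeof *argb);
    for (int i = 0; i < n; i++)
        argb[i] = (uint32_t)cur->pixels[i];

    XImage *img = XCreateImage(dpy, visual, 32, ZPixmap, 0, (char *)argb,
                               cur->width, cur->height, 32, 0);
    XPutImage(dpy, win, gc, img, 0, 0, x, y, cur->width, cur->height);
    XDestroyImage(img);  /* also frees argb */
    XFree(cur);
}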
But when the OSD window is shown in full, the leftovers make it seem as if the green pixels were drawn on top of the mouse cursor copy pixels. Why is this so?
Apparently, I didn't remember correctly what order the pixels were drawn in; when that demo capture was taken, the mouse cursor pixels were indeed copied first, and then the green pixels were copied on top. (Which now changes the question to: how come the mouse cursor was visible in that capture at all?! But now that the overall problem is solved, I'm not that curious. :) )
Otherwise, a few more notes on gist: xosd_track_cursor.c (a31e9dff5): since X11 has a client/server architecture, the user program can only queue requests to the server, and thus all of the drawing calls are asynchronous/non-blocking. So when we call, say, XFillRectangle and it returns, that does not mean the drawing of pixels has finished - just that the request has been queued, to be eventually sent to the server. Furthermore, in spite of commands like XFlush and XSync, there is never a guarantee that we can wait for a finished drawing operation; nor is there a guarantee that the server will honor any given request.
However, the less you try to do, the bigger the probability that the X server will honor the requests. So this version of the code makes a smallish window, 60x60 pixels, and sets it up so that it is dragged (centrally aligned) by the motion of the mouse pointer. Then the (main) mouse pointer is simply copied into this window at the same relative location.
Finally, there is a primitive attempt at a mouse trail, rendering two "ghost" copies of the mouse pointer, displaced by a history of mouse motion delta vectors - the effect, as visible in the gif, is not really amazing, but at least it's there as a "proof of concept" of sorts. Also, the window is set up at start as "click-through" using XShapeCombineRectangles - meaning the OSD window doesn't pick up/handle any mouse events (clicks) on it directly; instead everything is automatically passed to the windows below it, so the interaction remains the same as if the program was not running at all.
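The click-through setup boils down to something like this sketch (the function name is mine):

#include <X11/Xlib.h>
#include <X11/extensions/shape.h>

/* Set the window's *input* shape to an empty region: pointer events
 * then fall through to whatever is underneath the OSD window. */
void make_click_through(Display *dpy, Window win)
{
    XShapeCombineRectangles(dpy, win, ShapeInput, 0, 0,
                            NULL, 0,        /* empty rectangle list */
                            ShapeSet, Unsorted);
}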
(Note that to get the behavior of gist: xosd_track_cursor.c (a31e9dff5) shown in the gif, you should look up the defines DEBUGPRINT and MOUSE_TRAIL, and have them uncommented when you build)
I'm trying to retrieve the entire rectangle of a scrollable window using the WIN32 API. I thought that GetClientRect would return what I need, but that function appears to return only the current viewport. Is there a specific function call that returns the entire scrollable region as a RECT or must I call GetScrollRange to calculate the region myself?
It doesn't work like that. As far as Windows is concerned, a scrollable window isn't a small viewport onto a larger region whose dimensions you can set or retrieve; it's just a rectangle with a scroll bar control at the edge. It's up to you to determine the appearance of the scroll bar by calculating the portion of the notional region that is visible within the viewport provided by the window, and to paint the window contents accordingly.
It sounds as if that particular window is using virtual scrolling. Even GetScrollRange doesn't necessarily tell you the dimensions, because there's no requirement that a delta of 1 on the scroll bar equals 1 pixel; in fact, in many cases it is one record, one row, etc.
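For illustration, a minimal sketch of querying the logical range - hwnd is assumed to be the scrollable window, and the resulting units are whatever the application chose:

#include <windows.h>

/* Query the vertical scroll bar's logical range and page size. */
void query_scroll_range(HWND hwnd)
{
    SCROLLINFO si = {0};
    si.cbSize = sizeof si;
    si.fMask = SIF_RANGE | SIF_PAGE;
    GetScrollInfo(hwnd, SB_VERT, &si);
    /* si.nMin..si.nMax is the range in app-defined units (rows,
     * records, ...); si.nPage is the viewport size in those units. */
}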
Another thing to try is to enumerate all the child windows and find the minimum and maximum x and y coordinates (don't forget to include the width and height of each child window). Of course this won't help if the content is drawn directly rather than being a hierarchy of windows.
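A minimal sketch of that enumeration (the names are mine; note that EnumChildWindows descends into grandchildren as well):

#include <windows.h>

typedef struct {
    HWND parent;
    RECT bounds;
    BOOL first;
} ENUMCTX;

static BOOL CALLBACK accumulate(HWND child, LPARAM lp)
{
    ENUMCTX *ctx = (ENUMCTX *)lp;
    RECT rc;
    GetWindowRect(child, &rc);
    /* Convert screen coordinates to the parent's client coordinates. */
    MapWindowPoints(NULL, ctx->parent, (POINT *)&rc, 2);
    if (ctx->first) { ctx->bounds = rc; ctx->first = FALSE; }
    else UnionRect(&ctx->bounds, &ctx->bounds, &rc);
    return TRUE;  /* keep enumerating */
}

/* Bounding rectangle of all children, as an estimate of the extent. */
RECT child_extent(HWND hwndParent)
{
    ENUMCTX ctx = { hwndParent, {0, 0, 0, 0}, TRUE };
    EnumChildWindows(hwndParent, accumulate, (LPARAM)&ctx);
    return ctx.bounds;
}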
I have a window with the following:
Background="{x:Null}" AllowsTransparency="True" WindowStyle="None"
Dragging the window by hand beyond the left, right and bottom edges of the screen results in a predictably cropped window. However, this behaviour is not the same when dragging it above the top edge: instead of cropping, the window is pushed back down, as if there were an automatic If Window.Top < 0 Then Window.Top = 0.
This is probably in place so that users don't lose the window's title bar (which is the standard way to drag windows around; losing sight of it effectively makes the window undraggable). I don't need that, as my entire window is draggable via Me.DragMove().
So, how do I let a window be dragged above the top limit of the screen?
(This is unrelated to Aero Snap, which only occurs if the mouse touches the screen borders. I'm trying to move the window beyond the visible bounds.)
The DragMove function does not allow you to drag a window above the top of the screen. You need to move the window manually; see, for example:
How do I move a wpf window into a negitive top value?
I'm trying to modify the default graph viewer of the Graph# library because its user interface is awful (just try dragging a node outside of the boundaries, you'll see!)
The basic setup is this: there is a GraphCanvas control (inherited from Panel) which has children of Vertex and Edge control types. What I want to achieve is:
GraphCanvas has scroll bars if the contents do not fit on the screen;
GraphCanvas can also be scrolled by "dragging" it (just click on an empty space and drag);
GraphCanvas can be zoomed in and out (via CTRL+mouse wheel);
Vertices can be dragged around. If a vertex is dragged outside the current boundaries of the GraphCanvas, the boundaries are increased. The scroll bars should reflect this; however, the current viewport should not scroll away while the vertex is being dragged. The same goes if dragging a vertex reduces the boundaries of the GraphCanvas - it should stay the same size until the drag operation is finished and resize only then. Automatically scrolling the viewport during a drag operation is awfully confusing and easily introduces dragging errors. See the original implementation if you want to know what I mean.
Although I've got a fair bit of experience with .NET, I'm still a complete beginner in WPF. My current attempt is (in the measure/arrange layout phase) to give each vertex the XY coordinate it desires (even if negative), and to implement zooming/scrolling by handling mouse events on the GraphCanvas and modifying the RenderTransform property. Dragging just changes the XY coordinates on the specific vertex (probably triggering a re-layout of the whole thing, which would be nice to avoid too). Scroll bars are implemented by placing the GraphCanvas inside a ScrollViewer and implementing IScrollInfo on the GraphCanvas.
Unfortunately there seems to be a problem: I can get mouse events on the GraphCanvas itself only where it has a background. That would be OK - I want a white background anyway - but in the negative coordinates of the GraphCanvas it does not draw the background, and thus does not respond to mouse events.
I'm also wondering if I'm doing the Right Thing by allowing all my child controls (vertices and edges) to go into negative coordinates. How would you implement this?
Added: To clarify the background problem, check out the following screenshot:
(source: valts.21.lv)
What you see here is a simple Windows Forms form with a WPF Host control on it. That has a ScrollViewer in it, and the ScrollViewer has the GraphCanvas in it. The GraphCanvas contains 4 vertices and 6 edges.
The GraphCanvas is stretched to fill the ScrollViewer. But since some of the vertices are at negative coordinates, it has a RenderTransform applied (a TranslateTransform) which simply shifts everything to the right. It also has a white background brush.
Note the gray area on the left. That's still a part of the GraphCanvas, but the background brush somehow doesn't extend there. Also, if I left-click there with my mouse (not on a node, but on the gray area), I do NOT get an event. If I left-click on the white area, I get all events just fine.
Call CaptureMouse in the canvas's MouseDown handler and ReleaseMouseCapture on MouseUp. Also, if you set your canvas Background to Transparent, it will still be hit-testable.
You can attach a 'Draggable' behavior to each element.