Create Vulkan surface for only a portion of a window - c

We have an application which has a window with a horizontal toolbar at the top. The Windows-level handle we pass to Vulkan to create the surface ends up covering the whole window, including the area behind the toolbar; i.e. Vulkan is completely unaware of the toolbar, and the surface includes the space "behind" it.
My question is, can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so want to avoid creating a frame buffer, depth buffer etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport which e.g. has an origin offset and height compensation; however, to my understanding the frame buffer actually still contains information for pixels the full size of the surface (e.g. 800x600 for an 800x600 client-area window) even if I am only rendering to a portion of that window. The frame buffer then gets "mapped" and therefore squished to the viewport area.
All of this has sort of left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area in the surface, is that not highly inefficient if your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make more sense to section off portions of your application using e.g. different Windows HWNDs first, and then create separate surfaces from there onwards?
How can I avoid rendering to an area bigger than necessary?

The way this gets handled for pretty much every application is that the client area of a window (i.e. the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars etc.).
It is this client window which should have a Vulkan surface created for it.
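For illustration, a minimal sketch of that arrangement, assuming a Win32 frame window, a registered child window class, and a fixed toolbar height (the class name, parameters, and helper are placeholders, not from the original): the child window is created below the toolbar and the Vulkan surface is created from the child HWND, so the swapchain images are never larger than the client area.

```cpp
// Sketch: create a child window that covers only the area below the toolbar,
// then create the Vulkan surface from that child HWND.
// "VulkanClientClass" is assumed to be a window class you registered.
#include <windows.h>
#define VK_USE_PLATFORM_WIN32_KHR
#include <vulkan/vulkan.h>

VkSurfaceKHR CreateClientAreaSurface(VkInstance instance, HINSTANCE hInst,
                                     HWND frameWnd, int toolbarHeight)
{
    RECT rc;
    GetClientRect(frameWnd, &rc);

    // Child window that excludes the toolbar strip at the top of the frame.
    HWND clientWnd = CreateWindowExW(
        0, L"VulkanClientClass", nullptr,
        WS_CHILD | WS_VISIBLE,
        0, toolbarHeight,
        rc.right, rc.bottom - toolbarHeight,
        frameWnd, nullptr, hInst, nullptr);

    VkWin32SurfaceCreateInfoKHR info{};
    info.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
    info.hinstance = hInst;
    info.hwnd = clientWnd;   // the surface spans only the child window

    VkSurfaceKHR surface = VK_NULL_HANDLE;
    vkCreateWin32SurfaceKHR(instance, &info, nullptr, &surface);
    return surface;
}
```

On a frame resize you would MoveWindow the child to the new client rectangle and recreate the swapchain, as with any surface resize.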

Related

Detecting X11 window resize towards top/left

I'm working on a mapping application and I'm trying to get resizing in X11 working the way I'd like. Conceptually, I'd like my window to be a viewport onto some real-valued space where my data lives. When you resize the window, the size of your view onto this real-valued world should change accordingly.
What this means is that when resizing the window, rather than shrinking/stretching the data, more or less of the underlying world becomes visible. It's easy to handle the case when the window is resized by growing/shrinking on the bottom/right, but I'd like to handle the case when it's resized on the top/left as well.
This is trickier, because a top/left resize moves the window's origin as well as changing its dimensions. I need to detect the change in the origin so that I can compensate to keep my data centered as the window is resized.
Is there a robust way to get the absolute coordinates of a window in X11? The coordinates that X11 reports directly through ConfigureNotify and XGetWindowAttributes are dodgy due to window manager reparenting.
In Xlib use XTranslateCoordinates to translate the coordinate (0,0) in your viewport window into coordinates of the root window. This also covers the case of a stacking window manager messing with your window position.
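As a minimal sketch of that call (assuming `dpy` and `win` are your open Display* and the viewport Window):

```cpp
// Sketch: recover the absolute position of the viewport window by translating
// its (0,0) into root-window coordinates, which works even when a reparenting
// window manager has wrapped the window in a frame.
#include <X11/Xlib.h>

void GetAbsolutePosition(Display* dpy, Window win, int* absX, int* absY)
{
    Window root = DefaultRootWindow(dpy);
    Window child;                       // child of root containing the point; unused here
    XTranslateCoordinates(dpy, win, root,
                          0, 0,         // point in `win` coordinates
                          absX, absY,   // same point in root coordinates
                          &child);
}
```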

Difference between DisplayContext, displaySurface & displayBuffer?

Normally, working on graphics and display, we encounter words such as DisplayBuffer, DisplaySurface & DisplayContext. What is the difference between these terms?
It depends on the system; these are general terms and are often interchanged. But in general:
A display surface is a surface you'd perform drawing operations on, i.e. draw a line, a circle, etc. Typically the display surface is the physical screen surface you are writing to.
But although you'd write on a display surface, in many cases you'd also have a display buffer: when you draw on the surface, you actually draw into the display buffer, so the user doesn't see the drawing happening, and when you've finished drawing you flip the display buffer onto the surface so that the drawing appears instantaneously.
A display context is the description of the physical characteristics of the drawing surface, e.g. width, height, color depth and so on. In Win32, for example, you obtain a device context for a particular piece of hardware - a printer or screen - but you then draw on this device context, so it is also the display surface. Likewise, you can obtain a device context for an offscreen bitmap (a display buffer). So the terms can blur a bit.
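To make the buffer-flip idea concrete, here is a minimal Win32 GDI sketch of the pattern described above: draw into an off-screen bitmap selected into a memory DC (the display buffer), then copy the finished result onto the window's device context in one BitBlt. It assumes it runs inside a WM_PAINT handler for a window `hwnd`.

```cpp
#include <windows.h>

void PaintDoubleBuffered(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC screenDC = BeginPaint(hwnd, &ps);        // on-screen surface/context

    RECT rc;
    GetClientRect(hwnd, &rc);

    HDC memDC = CreateCompatibleDC(screenDC);    // context for the display buffer
    HBITMAP bmp = CreateCompatibleBitmap(screenDC, rc.right, rc.bottom);
    HGDIOBJ old = SelectObject(memDC, bmp);

    // Draw into the off-screen buffer; the user never sees this happen.
    FillRect(memDC, &rc, (HBRUSH)GetStockObject(WHITE_BRUSH));
    Ellipse(memDC, 10, 10, 200, 120);

    // "Flip": copy the finished buffer onto the visible surface in one go.
    BitBlt(screenDC, 0, 0, rc.right, rc.bottom, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, old);
    DeleteObject(bmp);
    DeleteDC(memDC);
    EndPaint(hwnd, &ps);
}
```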

Retrieve the entire rectangle of a scrollable window

I'm trying to retrieve the entire rectangle of a scrollable window using the WIN32 API. I thought that GetClientRect would return what I need, but that function appears to return only the current viewport. Is there a specific function call that returns the entire scrollable region as a RECT or must I call GetScrollRange to calculate the region myself?
It doesn't work like that. As far as Windows is concerned, a scrollable window isn't a small viewport onto a larger region whose dimensions you can set or retrieve, it's just a rectangle with a scroll bar control at the edge. It's up to you to determine the appearance of the scroll bar by calculating the portion of the notional region that is visible within the viewport provided by the window, and to paint the window contents accordingly.
It sounds as if that particular window is using virtual scrolling. Even GetScrollRange doesn't necessarily tell you the dimensions, because there's no requirement that a delta of 1 on the scrollbar equals 1 pixel; in fact, in many cases it is one record, one row, etc.
Another thing to try is to enumerate all the child windows, and find the minimum and maximum x and y coordinates (don't forget to include the width and height of each child window). Of course this won't help if the content is directly drawn and not a hierarchy of windows.
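A minimal sketch of that enumeration approach (function names are mine, not from the original): take the union of the child-window rectangles to estimate the content extent, then map the result back into the parent's client coordinates. This only helps if the content really is a hierarchy of child windows.

```cpp
#include <windows.h>

static BOOL CALLBACK AccumulateChildRect(HWND child, LPARAM lParam)
{
    RECT* bounds = reinterpret_cast<RECT*>(lParam);
    RECT rc;
    if (GetWindowRect(child, &rc))      // screen coordinates, width/height included
        UnionRect(bounds, bounds, &rc);
    return TRUE;                        // keep enumerating
}

RECT GetContentBounds(HWND parent)
{
    RECT bounds;
    SetRectEmpty(&bounds);
    EnumChildWindows(parent, AccumulateChildRect,
                     reinterpret_cast<LPARAM>(&bounds));

    // Convert the union from screen to the parent's client coordinates.
    MapWindowPoints(HWND_DESKTOP, parent,
                    reinterpret_cast<POINT*>(&bounds), 2);
    return bounds;
}
```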

Resizing a drawing area in GTK

My application performs a 90 degree rotation on a drawing area, so the width and height of the drawing area need to be swapped.
How can I resize the drawing area with GTK in a way so that the new width and height are actually enforced, not just requested?
Width/height cannot be enforced by a widget; they are determined by its container only. A widget can only request a given dimension, and its container will allocate the requested area or more (or even less, but the standard containers won't do this).
So the answer depends entirely on how the area is packed and into what container. If your window (as in GtkWindow) doesn't include anything expandable, setting it to non-resizable mode will achieve what you want. Otherwise, please specify how the area is packed and/or what other widgets are in the toplevel.
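A small sketch under those assumptions (nothing else in the window expands; `area` and `window` are placeholders for your existing widgets): request the swapped dimensions on the drawing area and make the toplevel non-resizable so the request is honoured exactly.

```cpp
#include <gtk/gtk.h>

void ApplyRotatedSize(GtkWidget* area, GtkWidget* window,
                      int oldWidth, int oldHeight)
{
    // Swap width and height for the 90-degree-rotated drawing.
    gtk_widget_set_size_request(area, oldHeight, oldWidth);

    // A non-resizable toplevel shrinks/grows to fit its size requests,
    // so the allocation matches what was requested.
    gtk_window_set_resizable(GTK_WINDOW(window), FALSE);
}
```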

Is there a way to capture a bitmap from a WPF window using native C++?

Imagine a document window in an MDI application which contains a child WPF window, say a sidebar. How can one get a bitmap containing both the WPF pixels AND the GDI (non-WPF) pixels?
I've discovered that when making my thumbnail preview for the Win7 taskbar app icon hover, I get black in the parts of the preview where the WPF pixels should be. My current method simply grabs a bitmap capture of the document window. Then I get a DC for the preview, make a memory DC from it and select my bitmap into it. Then I do some size adjustments and BitBlt the memory DC to the real DC. I'm guessing that the BitBlt operation doesn't take into account the fact that the WPF pixels are hardware accelerated and therefore need to be grabbed from the graphics hardware. All the GDI content is handled just fine, though, and when there are no WPF child windows the preview image looks fine.
I'm wondering if it's at all possible to grab a bitmap of the WPF window from native C++. Then I can blt that onto the black area of the previous preview.
Maybe I'm not understanding your current approach correctly, but could you do a BitBlt() from the screen DC to your memory DC? You'd need to get the screen rect of your window, but that shouldn't be too bad.
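A minimal sketch of that suggestion (assuming the window is visible and unobscured; the helper name is mine): capture the window's pixels from the screen DC rather than from the window itself, so whatever WPF composited onto the screen is included.

```cpp
#include <windows.h>

HBITMAP CaptureWindowFromScreen(HWND hwnd)
{
    RECT rc;
    GetWindowRect(hwnd, &rc);                    // screen rect of the window
    int w = rc.right - rc.left, h = rc.bottom - rc.top;

    HDC screenDC = GetDC(NULL);                  // DC for the whole screen
    HDC memDC = CreateCompatibleDC(screenDC);
    HBITMAP bmp = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old = SelectObject(memDC, bmp);

    // Copy the on-screen pixels (GDI and WPF alike) into the memory DC.
    BitBlt(memDC, 0, 0, w, h, screenDC, rc.left, rc.top, SRCCOPY);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;                                  // caller owns the bitmap
}
```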
To solve this, I had to create an abstract class in native code containing a virtual method to get the bitmap that was implemented in C++/CLI. In the managed implementation, I used .NET's RenderTargetBitmap class to get a bitmap capture of the WPF window and then I filled up the passed in CBitmap object (see How to get an BITMAP struct from a RenderTargetBitmap in C++/CLI?). In the unmanaged caller routine, I used the virtual method to obtain the Bitmap.
In short, there was no way to get the bitmap by simply using unmanaged C++ since WPF and GDI really don't work together for all practical purposes.
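For reference, a sketch of the native side of that arrangement (the class, method, and factory names here are illustrative, not from the original): a pure-virtual interface the unmanaged code can call without knowing about the CLR, with the C++/CLI implementation (not shown) overriding the method, using RenderTargetBitmap to capture the WPF window, and filling the passed-in CBitmap.

```cpp
#include <afxwin.h>   // MFC, for CBitmap, as used in the original answer

// Native interface; implemented in the C++/CLI module.
class IWpfBitmapProvider
{
public:
    virtual ~IWpfBitmapProvider() {}

    // Fills `outBitmap` with a capture of the WPF child window.
    // Returns false if the capture failed.
    virtual bool GetBitmap(CBitmap& outBitmap) = 0;
};

// Hypothetical unmanaged caller, given a provider obtained from a factory
// exported by the C++/CLI module:
//   CBitmap bmp;
//   if (provider->GetBitmap(bmp)) { /* blt bmp over the black WPF area */ }
```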
