GTK+/X11: semitransparent windows with and without composite managers?

I need some code to make my window (and preferably all the widgets on it) semitransparent.
I know I can play around with gtk_window_set_opacity(), but that only works when a composite manager is running. What if one isn't?
I've googled a lot and found plenty of code that mostly doesn't even compile, doesn't work, or is just a proof of concept. No satisfying solution. I don't want to mess with the awful Xlib API (I just don't have time to learn it).
Where can I find such a library or code snippet?

There's no good answer to this; that's a large part of why compositing managers were invented in the first place.
The only sort-of answer, used in old "transparent terminals" and the like, is based on taking a screenshot of the stuff underneath the window and then painting that screenshot in your own window. This is an Xlib-involving mess: hard to get mostly right, impossible to get completely correct, and inefficient. Still, you could perhaps do it. Look at old revisions of terminals that supported transparency; I think VTE used to have this code, the ZVT widget certainly did, and so did the Enlightenment terminal, for example.
But really the way to go is to just fall back to no transparency for users without a CM.

While modern X11 servers do support RGBA visuals, this doesn't mean they'll do alpha blending. X11 operates on the model that a window is a mask on a single shared framebuffer; Z ordering may clip parts of a window so that those areas are not drawn to at all.
To enable transparency, a compositing manager must redirect windows to off-screen rendering, then compose the final on-screen image from those off-screen-rendered parts. The XDamage extension is used to keep track of which windows need re-compositing.

Related

When using a DirectX-based API with WPF (e.g. SlimDX, SharpDX, etc.), can you do sub-window-sized controls?

In our WPF application, we need to display about 64 real-time level meters for an audio application. The tests we've thrown at WPF, even when rendering basic primitives as efficiently as we can, still show it to be nowhere near where our application needs to be, often bogging down the main thread so much that it's unresponsive to input.
As such, we have to go with something more optimized for graphics performance, such as DirectX (via SlimDX or SharpDX) or OpenGL/ES (via Atlas, which converts it to DirectX calls).
My question is whether it's possible to create multiple, small DirectX-based areas, each representing an individual meter, or, for that matter, whether that's even the right approach. I was under the impression that it has to take up, at a minimum, the entire window, not a portion thereof.
The issue I see with the latter is airspace, wherein you can't have WPF content in front of DirectX content in the same window, and we really don't want to have to redo all of our controls in DirectX, since for the other, non-meter 95% of our UI, WPF is great!
I have read that you can render DirectX to a brush and then use that inside WPF, or use the WriteableBitmap class, which gives you direct access to the buffers WPF then uses in its render thread. Neither seems to suffer from the airspace issues, but it seems we'd be right back in the same place, with WPF being the bottleneck since it still has to do the rendering.
We are of course going to dedicate a few weeks to sample applications testing all of the above, but I'm wondering if I'm even headed in the right direction, and/or whether there are any caveats we can avoid by talking to people with experience doing something like this. As such, any comments will be appreciated.
I'm hoping we can perhaps even start a wiki somewhere to discuss this topic, as it seems to be a popular one, albeit spread all over the place, making it hard for new entrants to find the information they seek.
With WPF/D3D interop, you should always try to minimize the number of interop calls, so prefer rendering all 64 level meters into a single render target (this also lets you batch your primitive rendering and draw everything with the smallest number of GPU calls).
You should use the D3DImage API, which lets you share your own D3D texture with the WPF renderer.
If WPF really can't handle those 64 moving bars, you could go with a single D3DImage and use Direct3D 9 to render all the bars to it at once. For your specific scenario, you shouldn't have any performance problem.
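As a rough sketch of that D3DImage pattern - assuming a Direct3D 9 render target has already been created through SlimDX or SharpDX, and with illustrative class and member names - the WPF side boils down to:

    // Shares a native D3D9 surface with WPF and marks it dirty each frame.
    using System;
    using System.Windows;
    using System.Windows.Interop;
    using System.Windows.Media;

    public class MeterSurface
    {
        private readonly D3DImage _d3dImage = new D3DImage();

        // Use this as the Source of an Image element in XAML.
        public ImageSource Source { get { return _d3dImage; } }

        // Hand WPF the native surface pointer once, after creating it.
        public void SetBackBuffer(IntPtr surfacePointer)
        {
            _d3dImage.Lock();
            _d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surfacePointer);
            _d3dImage.Unlock();
        }

        // Call after rendering all 64 meters into the shared surface.
        public void Present()
        {
            if (!_d3dImage.IsFrontBufferAvailable) return;
            _d3dImage.Lock();
            _d3dImage.AddDirtyRect(new Int32Rect(0, 0,
                _d3dImage.PixelWidth, _d3dImage.PixelHeight));
            _d3dImage.Unlock();
        }
    }

Rendering into one surface and issuing a single AddDirtyRect per frame keeps the number of interop transitions constant, no matter how many meters the surface contains.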

How should I implement non-event-based actions in WPF?

I come from a couple years' background in looped game programming. I'm very used to having a constant loop in my application which continually calls functions like Update and Draw, allowing me to perform actions like animations over time by incrementing values a bit each frame.
Now that I've got a job involving WPF, though, I find that I was too reliant on that system. Maybe I've got a limited feel for WPF, but it seems like everything is event-based: the user clicks a button, the code is informed, the code manipulates values. The values change, the code informs the UI, the UI updates its layout. It works well for GUI-based application programming, but I find that when I encounter situations that would be trivial in loop-based game programming, I am stuck, unable to find a good way to implement simple behaviors.
At the risk of being too vague I'll provide my current problem as an example. After Windows 8 was unveiled I became very enamored with the idea of Semantic Zoom. After playing around with the Start Screen extensively I began working on a port of Semantic Zoom to WPF4.0 for Microsoft Surface (I work with the Surface at my job). I just want a trivial example of it which would allow me to use pinch gestures to navigate up and down in a stack of views.
After many hours spent trying to understand manipulation events (I won't go into that... bleh), I've finally got my view scaling based on a pinch gesture. If it scales past a certain point, I jump back to the 'zoomed out' view. Pretty cool. But the problem is, if the user doesn't complete the gesture and decides not to zoom out, I'm left with a smaller view. I want the scale of the view to constantly 'rebound' from the user's pinching and animate back to a scale of 1. I know that if this were loop-based I'd just lerp toward 1 each frame. But since WPF is all event-based, I'm a little lost.
There's probably an answer to this specific problem using inertia or different manipulation events (and I'd be happy to hear it), but in addition I'd just like to know how I can re-orient my mindset to work more effectively in WPF. Is it just about knowing which events to subscribe to? Are there clever ways to use Animations to do what I want? Should I use threads to accomplish these kinds of tasks, or is that cheating (it seems unreliable, plus I'm shaky on threads in WPF)?
This issue is one of my biggest barriers to being effective in WPF, I think (well, this and not quite knowing MVVM yet; working on that). I'd like to see it torn down so I can be effective in more than just loop-based game programming.
Although I'm pretty sure that most of what you want to do can be done in an event-based manner, you might want to take a look at the MSDN article How to: Render on a Per-Frame Interval Using CompositionTarget. Please also note the final section, Per-Frame Animation: Bypass the Animation and Timing System, in the Property Animation Techniques Overview.
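To make that concrete, here is a minimal sketch (not from the answer; the names are illustrative) of using CompositionTarget.Rendering, which WPF raises once per frame, to lerp a view's scale back toward 1 - essentially the Update() step of a game loop:

    using System;
    using System.Windows.Controls;
    using System.Windows.Media;

    public class ZoomView : UserControl
    {
        private readonly ScaleTransform _scale = new ScaleTransform(1.0, 1.0);

        public ZoomView()
        {
            RenderTransform = _scale;
            // Raised once per rendered frame, before composition.
            CompositionTarget.Rendering += OnRendering;
        }

        private void OnRendering(object sender, EventArgs e)
        {
            // Lerp the scale a little closer to 1.0 each frame.
            double s = _scale.ScaleX + (1.0 - _scale.ScaleX) * 0.15;
            if (Math.Abs(1.0 - s) < 0.001) s = 1.0;
            _scale.ScaleX = s;
            _scale.ScaleY = s;
        }
    }

Note the step here is per-frame rather than time-based, so the rebound speed varies with frame rate; the purely event-based alternative would be to start a DoubleAnimation back to 1.0 from the ManipulationCompleted event.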

Is there a lightweight way to include GDI rendered content when printing with WPF?

One of the projects I work on has some pre-existing reports that are printed via MFC's printing support and rendered more or less directly to a printer DC via GDI. We've started doing some new (unrelated) reports via WPF/XAML since we're transitioning new UI to WPF anyway and it's so much better to work with for layout.
The other shoe has finally dropped: I need to add some new functionality to an existing printed report, and that functionality practically begs to be implemented with WPF. Our existing WPF reports are implemented via XAML pages sent to an XpsDocument (in-memory, not on disk) via XpsDocumentWriter. I would like to continue with this strategy and take the approach of writing WPF/XAML reports that happen to have some pages rendered via GDI.
My first naive attempt was to embed an HwndHost in the UIElement that gets rendered in the XpsDocumentWriter, but that doesn't seem to work. No surprise, but it was worth a try.
The next obvious solution, IMO, would be to render the GDI graphics to an appropriately sized and scaled bitmap, and render that bitmap to a page in the XpsDocument. That would work, but page-sized bitmaps (especially in-memory ones) seem like a recipe for high memory usage and poor performance on slower computers.
Ideally I'd like to render the GDI content to a metafile or some other vector format that could then be translated to XPS. But this has to be an automatic process that works every time, since it's just a document printing feature. OTOH, it's an application for in-house users, so we can put up with some performance degradation.
WPF development is not my main task, so I'd describe myself as a novice without much knowledge of the underlying details. I just want to make sure I'm not missing something obvious before I fall back to using a bitmap as the transfer medium, although I haven't turned up any other decent options in my search so far.
Anything I should be looking into?
One way of doing this would be to create a WriteableBitmap in WPF and blit the GDI-drawn image directly to it so it can be rendered in your XPS document. An initial step could be a straight blit from your GDI DC (get a pointer to the GDI DC's pixels, a pointer to the WriteableBitmap's buffer, and use Platform Invoke to call memcpy). Later work could involve converting the MFC GDI drawing to vanilla WPF (using a library such as WriteableBitmapEx, which has GDI-like drawing methods).
Although the first approach involves two bitmaps, it's the best way I can currently think of without a huge rewrite. The second method may or may not be possible out of the box, since WriteableBitmap's drawing support is not as extensive as GDI's. A final method I just thought of would be to use GDI via Platform Invoke and draw directly on the WriteableBitmap surface. This would allow a port without a massive rewrite and would give you the performance you need, while keeping the code familiar.
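For the first approach, a minimal sketch of the managed side - assuming the MFC code renders into a top-down 32-bpp DIB section and hands over its pixel pointer, with WritePixels standing in for the raw memcpy; all names here are illustrative:

    using System;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    public static class GdiInterop
    {
        // Wraps raw 32-bpp pixels produced by GDI into a WriteableBitmap
        // that can then be drawn onto a page sent to the XpsDocumentWriter.
        public static WriteableBitmap FromGdiPixels(
            IntPtr pixels, int width, int height, int strideBytes)
        {
            var bmp = new WriteableBitmap(
                width, height, 96, 96, PixelFormats.Bgra32, null);

            // One unmanaged-to-WPF copy, no intermediate managed array.
            bmp.WritePixels(
                new Int32Rect(0, 0, width, height),
                pixels, strideBytes * height, strideBytes);

            return bmp;
        }
    }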

Is there a way to make .NET WinForms tooltips behave less haphazardly?

I find that WinForms tooltips behave very erratically. They seem to randomly decide to do nothing, show up, or disappear when I perform the same hovering/clicking/etc. actions.
Is there some pattern that I'm missing? Do I just not understand the UI technique to bring up a tooltip? Is this a common problem? Do people actually expect tool tips to work this way?
Tooltips display automatically, and that's a bit of a problem: the native Windows control has counter-measures in place to avoid displaying tips too often, so as not to wear out the user with info that has already been shown often enough. I'm not exactly sure how that rate limiting is implemented; accumulated display time is a factor (something like 60 seconds), possibly also the number of times the tip was displayed.
The SDK docs do not document the implementation details, and there is no message available to forcibly reset the rate limiter. I do think that passing another control to the Show() method resets it.
All in all, it means the ToolTip control is really only suitable to act as a traditional tooltip. It doesn't work well as a 'dynamic label'. The alternative is a Label control with BackColor = Info, albeit not quite the same, because you cannot easily make it a top-level window.
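If you need deterministic behavior, explicitly calling Show() sidesteps the automatic display logic. A small sketch (the form and control names are illustrative):

    using System.Windows.Forms;

    public class DemoForm : Form
    {
        private readonly ToolTip _tip = new ToolTip();
        private readonly Button _button = new Button { Text = "Hover me" };

        public DemoForm()
        {
            Controls.Add(_button);
            // Show the tip on demand for 3 seconds at a fixed offset,
            // instead of relying on the automatic hover behavior.
            _button.MouseEnter += delegate
            {
                _tip.Show("Forced tooltip text", _button, 0, _button.Height, 3000);
            };
            _button.MouseLeave += delegate { _tip.Hide(_button); };
        }
    }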

WPF Architecture and Direct3D graphics acceleration

After reading the Wikipedia article on WPF architecture, I am a bit confused about the benefits WPF will offer me (Wikipedia is not a good research reference, but I found it useful). I have some questions:
1) WPF uses D3D surfaces to render. However, the scene graph is rendered into the D3D surface by the media integration layer, which runs on the CPU. Is this true?
2) I just found out by asking a question here that bitmaps don't use native resources. Does this mean that if I use a lot of images, the MIL will copy each one when rendering, rather than storing the bitmaps on the video card as textures?
3) The article mentions that WPF uses the painter's algorithm, which is back-to-front. That's painfully slow. Is there any rationale for why WPF omits Z-buffering and front-to-back rendering? I'm guessing it's the simplest way to handle transparency, but it seems weak.
The reason I ask is that I suspect it won't be wise to put hundreds of buttons on a screen, even though my colleagues say it's DirectX-accelerated. I don't quite believe the whole 'DirectX accelerated' bit about WPF. I used to work on video games, and my memory of writing D3D and OpenGL code tells me to be cautious.
For questions #1 and #3 you might want to check out the section of the SDK that discusses the Visual class and how its rendering instructions are exchanged between the higher-level framework and the media integration layer (MIL). It also discusses why the painter's algorithm is used.
For #2, no, that is most definitely not the case. The bitmap data will be moved to the hardware and cached there.
I tested this: I wrote two programs that show 1,000 buttons on screen, one in WinForms and one in WPF. Both worked just fine.
I then pushed it up to 10,000 buttons. At that point the WPF app took a few seconds to start but ran just fine, while the WinForms app didn't start at all. A sketch of such a test is below.
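The original test code wasn't posted; a minimal WPF version of such a stress test might look like this (the panel choice and sizes are illustrative):

    using System.Windows;
    using System.Windows.Controls;

    public class ButtonStressWindow : Window
    {
        public ButtonStressWindow()
        {
            var panel = new WrapPanel();
            for (int i = 0; i < 10000; i++)
            {
                panel.Children.Add(new Button
                {
                    Content = "Button " + i,
                    Width = 80,
                    Height = 24
                });
            }
            // A ScrollViewer keeps layout manageable for this many controls.
            Content = new ScrollViewer { Content = panel };
        }
    }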
Win32 itself (and WinForms) isn't built for applications with hundreds of controls (believe me, I wrote such an app); at some point it just stops working. WPF, on the other hand, keeps working, even if it slows down a bit at some point.
So, if you do need to put a lot of controls on screen, WPF is your best bet (unless you want to roll your own UI framework - and you think you can do better than the entire MS perf team).
Also, WPF has many advantages other than graphics acceleration: richer graphics, a drawing model that is easier to work with, animations, 3D, and my personal favorite - amazing data binding.
This will let you develop richer UIs faster - and I think that will make a much bigger difference than the painting algorithm used.
BTW, if you do need to put hundreds of buttons on the screen, that is likely to be a bad user experience, and you may want to reconsider your UI design.
