How should I implement non-event-based actions in WPF?

I come from a couple years' background in looped game programming. I'm very used to having a constant loop in my application which continually calls functions like Update and Draw, allowing me to perform actions like animations over time by incrementing values a bit each frame.
Now that I've got a job involving WPF, though, I find that I was too reliant on that system. Maybe I've got a limited feel for WPF, but it seems like everything is event-based. User clicks a button, you inform the code, the code manipulates values. The values change, code informs UI, UI updates layout. It works well for GUI-based application programming but I find that when I encounter situations which would be trivial in loop-based game programming I am stuck, unable to find a good way to implement simple behaviors.
At the risk of being too vague I'll provide my current problem as an example. After Windows 8 was unveiled I became very enamored with the idea of Semantic Zoom. After playing around with the Start Screen extensively I began working on a port of Semantic Zoom to WPF4.0 for Microsoft Surface (I work with the Surface at my job). I just want a trivial example of it which would allow me to use pinch gestures to navigate up and down in a stack of views.
After many hours spent trying to understand manipulation events (I won't go into that... bleh), I've finally got my view scaling based on a pinch gesture. If it scales past a certain point I jump back to the 'zoomed out' view. Pretty cool. But, the problem is, if the user doesn't complete the gesture and decides not to zoom out, I'm left with a smaller view. I want to animate the scale of the view to constantly 'rebound' from user pinching and restore to a scale of 1. I know if this were loop based I'd just Lerp toward 1 each frame. But since WPF is all based on events, I'm a little lost.
There's probably an answer to this specific problem using inertia or different manipulation events (and I'd be happy to hear it), but in addition I'd just like to know how I can re-orient my mindset to work more effectively in WPF. Is it just about knowing which events to subscribe to? Are there clever ways to use Animations to do what I want? Should I use threads to accomplish these kinds of tasks, or is that cheating (it seems unreliable, plus I'm shaky on threads in WPF)?
This issue is one of my biggest barriers to being effective in WPF, I think (well, this and not quite knowing MVVM yet, working on that). I'd like to see it torn down and be able to be effective in more than just loop-based games programming.

Although I'm pretty sure that most of what you want to do can be done in an event-based manner, you might want to take a look at the MSDN article How to: Render on a Per-Frame Interval Using CompositionTarget. Please also note the final section, Per-Frame Animation: Bypass the Animation and Timing System, in the Property Animation Techniques Overview.
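As a minimal sketch of that per-frame approach: CompositionTarget.Rendering fires once per frame on the UI thread, so you can lerp the view's scale back toward 1 exactly as you would in a game loop. The ScaleTransform name and the lerp factor here are illustrative, not from the original post.

```csharp
using System;
using System.Windows.Media;

public partial class SemanticZoomView
{
    // Assumed to be the transform the pinch manipulation scales.
    private readonly ScaleTransform viewScale = new ScaleTransform(1.0, 1.0);

    // Call this when the user releases an incomplete pinch gesture.
    private void StartRebound()
    {
        CompositionTarget.Rendering += OnRendering;
    }

    private void OnRendering(object sender, EventArgs e)
    {
        // Move a fraction of the remaining distance toward 1 every frame.
        const double factor = 0.2;
        double s = viewScale.ScaleX + (1.0 - viewScale.ScaleX) * factor;

        if (Math.Abs(1.0 - s) < 0.001)
        {
            s = 1.0;
            CompositionTarget.Rendering -= OnRendering; // done rebounding
        }

        viewScale.ScaleX = s;
        viewScale.ScaleY = s;
    }
}
```

Alternatively, starting a DoubleAnimation on ScaleX/ScaleY from the ManipulationCompleted handler (with an easing function such as ElasticEase) gives the same rebound without any per-frame callback, and is the more idiomatic WPF route.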

Related

Is dynamically attaching/detaching WPF behaviors a valid approach to my problem?

Short Version:
Is dynamically attaching/detaching WPF behaviors to/from a control at runtime a feasible practice in WPF or should I be looking for something different to solve my problem?
Long version:
I have a WPF control which represents a drawing surface. The user can use any one of a number of mouse "tools". One tool draws a line, one draws a polyline, one merely selects existing items, and so on.
I handle the mouse events for this in code-behind. Unfortunately, no matter how much I try to generalize it, there is a lot of switching on tool type because the tools do very different things. Consequently, it is difficult to maintain: adding a new tool requires too many edits and too much testing, and the handlers keep growing larger.
I need to make this control relatively easy for the next developer on this project to understand it and to add a new tool without worrying about breaking anything.
It would seem that WPF behaviors would provide a natural way to simplify this and make it more modular: you set a particular "tool behavior" on the control and it handles only the mouse events it needs, altering the control properties as necessary. The code for different tools is no longer mixed together.
But this approach differs from the way I've used WPF behaviors in the past. It would require me to be able to dynamically attach/detach behaviors at run time with the push of a button. That is new to me. Usually I just have a simple behavior that I declare on a control once in XAML and it stays that way for the lifetime of the control. In fact, all of the WPF behavior examples I've ever seen have been the sort where you set up the behavior one time in XAML and forget about it.
So I'm wondering has anyone out there done something like this? Is this an approach I should pursue or would it probably end up proving unwieldy? Or should I be looking for a different solution?
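For what it's worth, the runtime attach/detach the question describes is mechanically simple, since the Interaction attached property from System.Windows.Interactivity exposes a mutable collection. This is a sketch under that assumption; the tool behavior classes would be your own Behavior subclasses.

```csharp
using System.Windows;
using System.Windows.Interactivity;

public static class ToolSwitcher
{
    // Replaces whatever tool behavior is currently attached to the
    // drawing surface with a new one. Detaching happens automatically:
    // removing a behavior from the collection calls its OnDetached,
    // and adding one calls its OnAttached.
    public static void SetTool(UIElement surface, Behavior newTool)
    {
        BehaviorCollection behaviors = Interaction.GetBehaviors(surface);
        behaviors.Clear();
        behaviors.Add(newTool);
    }
}
```

Because each tool's event subscriptions live and die with OnAttached/OnDetached, a button handler that calls SetTool(drawingSurface, new DrawLineBehavior()) cleanly swaps the active tool without any switching on tool type.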

When using a DirectX-based API with WPF (i.e. SlimDX, SharpDX, etc.) can you do sub-window-sized controls?

In our WPF application, we need to display about 64 real-time level meters for an audio application. The tests we've thrown at WPF, even when rendering basic primitives as efficiently as we can, still show it to be nowhere near where our application needs to be, often bogging down the main thread so much that it becomes unresponsive to input.
As such, we have to go with something more optimized for graphics performance, such as DirectX (via SlimDX or SharpDX) or OpenGL/ES (via Atlas, which converts it to DirectX calls).
My question is whether it's possible to create multiple small DirectX-based areas, each representing an individual meter, or, for that matter, whether that is even the right approach. My understanding was that DirectX content has to occupy, at a minimum, the entire window, not just a portion of it.
The issue I see with the latter is airspace: you can't have WPF content in front of DirectX content in the same window, and we really don't want to redo all of our controls in DirectX, since for the other, non-meter 95% of our UI, WPF is great!
I have read that you can render DirectX to a brush, then use that inside WPF, or using the WriteableBitmap class which gives you direct access to the buffers WPF then uses in its Render thread, both of which don't seem to suffer from the Airspace issues, but that seems we'd be right back at the same place with WPF being the bottleneck since it still has to do the rendering.
We are of course going to dedicate a few weeks to sample applications testing all of the above, but I'm wondering if I'm even headed in the right direction, and/or if there are any caveats we can avoid by talking to people with experience doing something like this to avoid common pitfalls, etc. As such, any comments will be appreciated.
I'm hoping we can perhaps even start a wiki somewhere to discuss this topic as it seems to be a popular one, albeit spread all over the place making it hard for new entrants to get the information they seek.
With WPF/D3D interop, you should always try to minimize the number of interop calls. So you should prefer rendering all 64 level meters into a single render target (this also allows you to batch your primitive rendering and draw everything in the smallest number of GPU calls).
You should try to use the D3DImage API, which allows you to share your own D3D texture with the WPF renderer.
If WPF can't really handle these 64 moving bars, you could go with a single D3DImage and use Direct3D9 to render all the bars at once directly to it. For your specific scenario, you shouldn't have any performance problem.
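A rough sketch of the D3DImage wiring this answer describes. Here meterSurface stands in for a pointer to a shared IDirect3DSurface9 you would obtain through SlimDX or SharpDX; the class and member names are illustrative assumptions.

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Interop;

public class MeterHost
{
    private readonly D3DImage d3dImage = new D3DImage();

    // Point a WPF Image control at the shared Direct3D9 surface.
    public void Attach(Image target, IntPtr meterSurface)
    {
        target.Source = d3dImage;
        d3dImage.Lock();
        d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, meterSurface);
        d3dImage.Unlock();
    }

    // Call after Direct3D has rendered all 64 meters into the surface:
    // marking the region dirty tells WPF to recompose it on its render
    // thread, with no airspace problems.
    public void PresentFrame()
    {
        d3dImage.Lock();
        d3dImage.AddDirtyRect(new Int32Rect(0, 0, d3dImage.PixelWidth, d3dImage.PixelHeight));
        d3dImage.Unlock();
    }
}
```

Since the whole bank of meters lives in one surface, WPF sees a single image update per frame rather than 64 separate interop boundaries.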

Gtk+/X11: semitransparent windows with and without composite managers?

I need some code for making my window (and preferably all widgets on it) semitransparent.
I know I can play around with gtk_window_set_opacity(), but it only works when a compositing manager is running. What if one isn't?
I've googled a lot and found lots of code that mostly doesn't even compile, doesn't work, or is just a proof of concept. No satisfying solution. I don't want to mess with the awful Xlib API (I just don't have time to learn it).
Where can I get such a library or code snippet?
There's no good answer to this, which is a large part of why compositing managers were invented: if you could already do this without one, nobody would have needed the whole compositing mechanism.
The only sort-of answer, used in old "transparent terminals" and the like, is based on taking a screenshot of the stuff underneath the window and then painting that screenshot in your own window. This is an Xlib-involving mess: hard to get mostly right, impossible to get completely correct, and inefficient. Still, you could perhaps do it. Look at old revisions of terminals that supported transparency; I think VTE used to have this code, the ZVT widget certainly did, and so did the Enlightenment terminal, for example.
But really the way to go is to just fall back to no transparency for users without a CM.
While modern X11 servers do support RGBA visuals, this doesn't mean they'll do alpha blending. X11 operates on the model that a window is a mask on a single shared framebuffer. Z ordering may clip parts of a window, so those areas are not drawn to at all.
To enable transparency a compositing manager must redirect the windows to off-screen rendering, then compose the final image you see on the screen from those off-screen rendered parts. The XDamage extension is used to keep track of which windows need re-compositing.

Is there a way to make .NET WinForms tooltips behave less haphazardly?

I find that WinForms tooltips behave very erratically. They seem to randomly decide to do nothing, show up, or disappear when I perform the same hovering/clicking/etc. actions.
Is there some pattern that I'm missing? Do I just not understand the UI technique to bring up a tooltip? Is this a common problem? Do people actually expect tool tips to work this way?
Tooltips display automatically. That's a bit of a problem: the native Windows control has counter-measures in place to avoid displaying tips too often, so as not to wear out the user with info that has already been shown often enough. I'm not exactly sure how that rate limiting is implemented; accumulated display time is a factor (something like 60 seconds), possibly also the number of times the tip was displayed.
The SDK docs do not document the implementation details, and there is no message available to forcibly reset the rate limiter. I do think that passing another control in the Show() method resets it.
All in all, it means the ToolTip control is really only suitable to act as a traditional tooltip. It doesn't work well as a 'dynamic label'. Your alternative is a Label control with BackColor = Info, albeit not quite the same, because you cannot easily make it a top-level window.
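If the automatic display heuristics are the problem, one workaround is to drive the tip manually with the explicit Show() overload, which bypasses the hover logic entirely. A small sketch; the offsets and duration here are arbitrary choices, not values from the answer.

```csharp
using System.Windows.Forms;

public class MeterForm : Form
{
    private readonly ToolTip tip = new ToolTip();

    // Shows the tip just below the target control for two seconds,
    // regardless of the native control's rate limiting on auto-display.
    public void ShowTip(Control target, string text)
    {
        tip.Show(text, target, target.Width / 2, target.Height, 2000);
    }
}
```

Calling Hide(target) dismisses it explicitly, which makes the behavior deterministic at the cost of having to decide yourself when tips appear.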

Where to learn proper way to use Silverlight (or WPF)

Approaching Silverlight development is a rather daunting task, as it seems to require a rather different mindset from the work I have done in the past.
I have been working with it for several months, and we have already released an application that presents form-based pages. So I have the basics of XAML for layout, but what I need to do now is move into graphically representing data: for example, transforming a list of objects representing vehicle speed recordings into a line graph of speed. I am at a loss as to the best way to approach this.
Can anyone point me to articles or tutorials that present this kind of thing?
Your first port of call for Silverlight learning should be the official site: http://silverlight.net/Learn/
If you want to do any data visualization/charting, first try the Silverlight Toolkit on CodePlex. It's fantastic if you want to get something up and running quickly.
Also check out Delay's Blog on charting and the chartbuilder code
Bang your head against it for 3-6 months. That's how I did it and it's worked out pretty well so far.
But seriously, the learning curve sucks.
There are charting libraries for Silverlight out there; you could grab one of those, but I wouldn't waste money on it. It's relatively easy to write this kind of code yourself.
All you really need is a DrawingVisual. Once you have that, you can render what you need onto its surface. The trick is to make sure you have sufficient layout information when you render. Because this is vector graphics, you can use a ScaleTransform to match your content bounds instead of repainting on every size change. Other than that, you'll want to host your DrawingVisual in a FrameworkElement and let the dimensions of that object govern where and how you draw your data. This will give you all the layout goodness of WPF/Silverlight.
For drawing there are plenty of Geometry classes you can rely on, but there's one thing you'll want to do: adjust the level of detail of your data points with respect to your drawing surface. This is the number one trick for making sure you don't hog the CPU.
Avoid drawing more than one data point per pixel. If you have a lot of data points and a small drawing surface, you can use a rolling average to smooth the result.
If you approach this with the above things in mind you should be able to write a flexible graph UI element that you can visualize data with, in no time at all.
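Putting the points above together, a minimal WPF host element might look like the following sketch. It assumes samples are already normalized to the 0..1 range; the class and method names are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Windows;
using System.Windows.Media;

public class LineGraph : FrameworkElement
{
    private readonly DrawingVisual visual = new DrawingVisual();

    public LineGraph()
    {
        AddVisualChild(visual);
    }

    // Expose the DrawingVisual to WPF's visual tree so it gets rendered.
    protected override int VisualChildrenCount { get { return 1; } }
    protected override Visual GetVisualChild(int index) { return visual; }

    public void Render(IList<double> samples)
    {
        if (samples.Count < 2 || ActualWidth < 1) return;

        using (DrawingContext dc = visual.RenderOpen())
        {
            // Level of detail: draw at most roughly one point per pixel.
            int stride = Math.Max(1, (int)(samples.Count / ActualWidth));
            Pen pen = new Pen(Brushes.SteelBlue, 1.0);
            Point prev = new Point(0, ActualHeight * (1 - samples[0]));

            for (int i = stride; i < samples.Count; i += stride)
            {
                double x = ActualWidth * i / samples.Count;
                Point next = new Point(x, ActualHeight * (1 - samples[i]));
                dc.DrawLine(pen, prev, next);
                prev = next;
            }
        }
    }
}
```

The element's ActualWidth/ActualHeight come from normal layout, so the graph participates in panels and resizing like any other control, and the stride keeps the drawing cost bounded by the pixel width rather than the data size.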
I did this in a WPF application; I'm pretty much assuming that you can do the exact same thing with Silverlight 2.0. You'll just yell at me if you can't?
