While I love developing user interfaces in WPF and XAML, I've tried to embrace the scalability aspect by also creating my icons as vector images... but it's really hard! I very rarely get the same kind of crispness that I can get with raster graphics, and it almost always takes me longer to produce the icons.
Am I wasting my time? Is there no benefit to making scalable icons? Or is there a setting somewhere in Windows that scales the UI for accessibility, thus making scalability important?
Would welcome your advice. :)
There are some advantages to using vector/scalable graphics in WPF. Off the top of my head:
You can build a high-fidelity UI that adapts to the user's DPI settings - see this blog post for more information
You can scale the images in the UI (e.g. use a Viewbox to stretch the icon), allowing for "zoomable" interfaces
The file size is greatly reduced, especially for larger images
You don't have to juggle different image sizes and resolutions
You can edit the images directly in Blend
One problem with this approach is that it can put more load on the CPU if the vector icons are not cached (to cache, set UIElement.CacheMode to a BitmapCache).
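As a rough sketch (the icon geometry and sizes below are made up), a vector icon wrapped in a Viewbox scales freely, and setting CacheMode stops WPF from re-rasterizing the geometry on every redraw:

```csharp
// Requires System.Windows.Shapes, System.Windows.Media, System.Windows.Controls.
// Hypothetical vector icon: a simple triangle path, scaled by a Viewbox
// and cached as a bitmap so it isn't re-tessellated on every frame.
var icon = new Path
{
    Data = Geometry.Parse("M 0,0 L 16,0 L 8,16 Z"),
    Fill = Brushes.SteelBlue,
    Stretch = Stretch.Uniform
};

var scalableIcon = new Viewbox
{
    Width = 32,   // any size works; the geometry scales with it
    Height = 32,
    Child = icon
};

// Cache the rendered output; raise RenderAtScale if the icon will be zoomed.
scalableIcon.CacheMode = new BitmapCache { RenderAtScale = 1.0 };
```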
If you're 100% sure the icons will stay the same size, you can go with raster images safely - just do whatever you think is more productive in your case.
In our WPF application, we need to display about 64 real-time level meters for an audio application. The tests we've thrown at WPF, even when rendering basic primitives as efficiently as we can, show it to be nowhere near where our application needs to be, often bogging down the main thread to the point that it's unresponsive to input.
As such, we have to go with something more optimized for graphics performance, such as DirectX (via SlimDX or SharpDX) or OpenGL/ES (via Atlas, which converts it to DirectX calls).
My question is whether it's possible to create multiple, small DirectX-based areas, each representing an individual meter, or, for that matter, whether that's even the right approach. I was under the impression that it has to cover, at a minimum, the entire window, not just a portion of it.
The issue I see with the latter is the airspace problem, where you can't have WPF content in front of DirectX content in the same window, and we really don't want to have to redo all of our controls in DirectX, since WPF is great for the other 95% of our UI that isn't meters!
I have read that you can render DirectX to a brush and then use that inside WPF, or use the WriteableBitmap class, which gives you direct access to the buffers WPF then uses on its render thread. Neither seems to suffer from the airspace issues, but it seems we'd be right back in the same place, with WPF being the bottleneck since it still has to do the rendering.
We are of course going to dedicate a few weeks to sample applications testing all of the above, but I'm wondering if I'm even headed in the right direction, and whether there are common pitfalls we can avoid by talking to people with experience doing something like this. As such, any comments will be appreciated.
I'm hoping we can perhaps even start a wiki somewhere to discuss this topic, as it seems to be a popular one, albeit spread all over the place, making it hard for new entrants to find the information they seek.
With WPF/D3D interop, you should always try to make the smallest number of interop calls, so prefer rendering all 64 level meters into a single render target (this also lets you batch your primitive rendering and draw everything in the smallest number of GPU calls).
You should use the D3DImage API, which allows you to share your own D3D surface with the WPF renderer.
If WPF can't really handle these 64 moving bars, you could go with a single D3DImage and use Direct3D 9 to render all the bars into it at once. For your specific scenario, you shouldn't have any performance problems.
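A minimal sketch of the D3DImage handshake, assuming backBufferPtr is an IDirect3DSurface9 pointer obtained from your own SlimDX/SharpDX renderer (the method name and parameters are illustrative):

```csharp
using System;
using System.Windows;
using System.Windows.Interop;

// d3dImage is exposed as the Source of an Image element in the WPF tree;
// backBufferPtr comes from your own D3D9 rendering code (hypothetical).
static void PresentFrame(D3DImage d3dImage, IntPtr backBufferPtr)
{
    d3dImage.Lock();

    // Hand WPF the shared surface; only needed when the surface changes.
    d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, backBufferPtr);

    // Tell WPF which region was redrawn this frame (here: all 64 meters at once).
    d3dImage.AddDirtyRect(new Int32Rect(0, 0, d3dImage.PixelWidth, d3dImage.PixelHeight));

    d3dImage.Unlock();
}
```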
When should we use images (jpg, png) and when should we use XAML in an application?
Image
+ "easy" for the designer to create what he wants
+ are displayed the same on every computer
- fixed resolution
XAML
+ vector format (resolution independent, resizable, ...)
+ can be animated
+/- rendered by the client
- fewer effects available than for images, or they are really complex to create
- complex visual tree
I could not find any source that compares the resource usage (CPU, RAM) of images and XAML.
I personally think everything should be XAML, but I don't want to have an application that is slow as hell. Are there any good performance guidelines for using XAML drawings?
Researching this, I've read that you should have everything in XAML and then use RenderTargetBitmap to create static images on demand, but according to this article it will cause the window to be rendered without hardware acceleration. So I'm wondering whether it is really an improvement for performance, ignoring the fact that it is much more work for the coder.
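For reference, a minimal sketch of that RenderTargetBitmap approach, with an assumed 64×64 target and a vectorElement that has already been measured and arranged:

```csharp
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

// Rasterize a vector-drawn element into a static bitmap on demand.
// vectorElement is whatever XAML drawing you want to freeze into pixels.
var bitmap = new RenderTargetBitmap(
    64, 64,                 // target size in device pixels (assumed)
    96, 96,                 // DPI
    PixelFormats.Pbgra32);
bitmap.Render(vectorElement);
bitmap.Freeze();            // make it immutable and shareable

// Display the cached raster instead of the live vector tree.
var image = new Image { Source = bitmap };
```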
From your comment:
I am only talking about the cases where image and xaml is interchangeable
Use a PNG, period. Only use XAML-based imagery when you actually need the advantages it provides. There may be some edge-case exceptions, for example a large image that can be composed from a couple of simple paths in XAML. However, you would also need a good reason to believe that any performance difference is appreciable and worth eliminating. Ultimately, favor simplicity over complexity when the same results are achievable from both.
If your artist/designer can create vector graphics and there are no complex gradients, then I would prefer vector graphics. You get all advantages and no disadvantages.
And if you are concerned about complex visual trees, WPF offers bitmap caching specifically for this kind of case.
I created a simple web browser WPF test application with pictures and text inside a canvas, with Windows set to 96 DPI.
Then I switched to 120 DPI and :-((( the display is messy, the image size changed, and part of the canvas is out of view...
When I used WinForms, I set the AutoScaleMode property to None and the window kept its size, as did the controls; the controls that inherit their font are displayed properly, not blurry and not too big...
What can I do to mimic this (good) behavior in WPF?
I'm not clear on what you mean by "web browser WPF ... application". WPF doesn't run in a Web browser, unless you're talking about an XBAP. Or are you doing Silverlight? Or is it just a WPF navigation application and not browser-based at all? You'll need to clarify.
WPF automatically scales your content when you run in high-DPI modes. This is intended behavior: if the user explicitly says they want everything to be bigger on the screen, then WPF will respect the user's wishes. The old WinForms hacks of "pretend high-DPI doesn't exist, just show everything at the normal small size and hope it doesn't piss the user off too much" aren't available in WPF; you could probably emulate them if you worked at it, but you're steered very strongly toward doing the Right Thing.
WPF scales everything, so your statement that "part of the canvas is out of view" doesn't make sense. It should be scaling the canvas, its parent window, and its child elements all by the same amount, so if everything fits at 96dpi, it should also fit at 120dpi and 144dpi. If not, then you're doing something strange and you'll have to provide a code sample that reproduces the problem.
You seem to be claiming that fonts are blurry when you run in a high-DPI mode, which sounds very strange. Fonts are rendered as vectors, so they should scale cleanly, and render crisply even in high-DPI modes. I've never seen the blurry fonts you describe, so again, you'll have to provide a repro case.
The only thing that I would expect to be blurry are images. If you're using raster (bitmap) images (BMP / GIF / JPG / PNG) in your UI -- for example, for the icons on a toolbar -- then yes, those will look pretty bad when they're scaled. It pretty much always looks bad when you take a small bitmap and make it larger. You might try working around this by using larger images and sizing them down for display -- for example, if you want your toolbar images to be 16x16 (when in standard 96-dpi mode), then you could try putting a 32x32 bitmap in your project, setting the Image element's Width="16" and Height="16" in your XAML, and seeing if that looks any better. It would actually be 20x20 physical pixels in 120dpi mode, and 24x24 in 144dpi mode, both of which would still be scaled down from the 32x32 resource and would therefore have a better shot of looking good than a 16x16 source image that's had to be scaled up. (I haven't tried this technique in a WPF toolbar, though, so I don't know how well it would really work in practice with typical toolbar images.)
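Expressed in code, that workaround is just a larger source displayed at a smaller logical size (the pack URI and file name below are placeholders):

```csharp
using System;
using System.Windows.Controls;
using System.Windows.Media.Imaging;

// Ship a 32x32 PNG but display it at 16x16 device-independent pixels, so at
// 120 or 144 DPI WPF scales it *down* from 32x32 instead of up from 16x16.
var toolbarIcon = new Image
{
    Source = new BitmapImage(
        new Uri("pack://application:,,,/Icons/save32.png")),  // hypothetical resource
    Width = 16,    // logical (96-DPI) size
    Height = 16
};
```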
The very best way to get around the problems with scaling images would be to use vector images instead of raster. Unfortunately, it's hard to find libraries of vector images. They're few, far between, typically less comprehensive than what you can find for bitmap images, and often expensive.
Presumably you are using fixed-length units (px). Try reworking your layout with the WPF layout rules in mind. This page has some best practices for that.
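As one hedged illustration of those layout rules, proportional (star) sizing instead of hard-coded pixel widths survives DPI changes; the 1:2 split below is arbitrary:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

// Prefer star (proportional) sizing and alignment over fixed pixel widths.
var grid = new Grid();
grid.ColumnDefinitions.Add(new ColumnDefinition { Width = new GridLength(1, GridUnitType.Star) });
grid.ColumnDefinitions.Add(new ColumnDefinition { Width = new GridLength(2, GridUnitType.Star) });

var picture = new Image { Stretch = Stretch.Uniform };  // let the image fill its cell
Grid.SetColumn(picture, 1);
grid.Children.Add(picture);
```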
I just found a bug with MaxHeight in WPF on .NET 4: when set in a Style that is inherited by another Style and used as a StaticResource, it wasn't affected by the DPI set by the user. When I changed MaxHeight to Height, it was affected by the DPI. I suspect a bug in .NET 4 (and possibly other framework versions) here.
I'm looking for an Image control for WPF that can rapidly change images. The built-in WPF one is quite slow for the image sizes I'm using (scaled). I only need about ~3 FPS. I have considered dropping to WinForms and even D3D, but I'm not sure that's the best way.
Can anyone suggest something?
WPF's Image control uses the native "Windows Imaging" and Direct3D subsystems of Windows to do all its dirty work, so if used with the right parameters it will be pretty much as fast as anything you will find.
I suspect the problem is that your settings are causing Windows Imaging to load the image at full resolution and then having Direct3D scale it. The solution is to do the scaling as you load the image, by setting DecodePixelHeight and DecodePixelWidth on the BitmapImage you are using as an ImageSource.
Another technique that many graphics apps use to speed things up is to preload the images in the background. For example, the Windows picture viewer automatically starts loading the next image as soon as the current image is shown.
If you are preloading images, consider doing it in a separate thread. Also make sure you use BitmapCacheOption.OnLoad when you create the BitmapImage or the preloading won't actually occur (the default is OnDemand).
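A sketch that combines both suggestions; the file path, decode width, and helper name are made-up values:

```csharp
using System;
using System.Windows.Media.Imaging;

// Load and decode on a background thread, at display resolution, then hand
// the frozen bitmap to the UI thread.
static BitmapImage LoadScaled(string path, int displayWidth)
{
    var bmp = new BitmapImage();
    bmp.BeginInit();
    bmp.UriSource = new Uri(path);                 // e.g. a local file path (assumed)
    bmp.DecodePixelWidth = displayWidth;           // decode scaled, not at full resolution
    bmp.CacheOption = BitmapCacheOption.OnLoad;    // decode now so preloading actually happens
    bmp.EndInit();
    bmp.Freeze();                                  // allows use from the UI thread later
    return bmp;
}

// Usage: kick off the next image while the current one is on screen, e.g.
// var next = await Task.Run(() => LoadScaled(@"C:\images\next.jpg", 800));
// imageControl.Source = next;
```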
After reading the Wikipedia article on WPF architecture, I am a bit confused about the benefits that WPF will offer me. (Wikipedia is not a good research reference, but I found it useful.) I have some questions:
1) WPF uses D3D surfaces to render. However, the scene graph is rendered into the D3D surface by the media integration layer, which runs on the CPU. Is this true?
2) I just found out by asking a question here that bitmaps don't use native resources. Does this mean that if I use a lot of images, the MIL will copy each one when rendering, rather than storing the bitmaps on the video card as textures?
3) The article mentions that WPF uses the painter's algorithm, which renders back to front. That's painfully slow. Is there any rationale for why WPF skips Z-buffering and front-to-back rendering? I am guessing it's because that's the simplest way to handle transparency, but it seems weak.
The reason I ask is that I don't think it would be wise for me to put hundreds of buttons on a screen, even though my colleagues are saying it's DirectX-accelerated. I don't quite believe that whole DirectX-accelerated bit about WPF. I used to work on video games, and my memory of writing D3D and OpenGL code tells me to be cautious.
For questions #1 and #3 you might want to check out this section of the SDK that discusses the Visual class and how its rendering instructions are exchanged between the higher-level framework and the media integration layer (MIL). It also discusses why the painter's algorithm is used.
For #2, no, that is most definitely not the case. The bitmap data will be moved to the hardware and cached there.
I tested that: I wrote two programs that show 1,000 buttons on screen, one in WinForms and one in WPF, and both worked just fine.
I then pushed that up to 10,000 buttons; at that point the WPF app took a few seconds to start but ran just fine, while the WinForms app didn't start at all.
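For the curious, the WPF side of such a test is only a few lines; the WrapPanel and the exact setup below are assumptions about how the test might look, not necessarily the code the poster used:

```csharp
using System.Windows;
using System.Windows.Controls;

// Throw N buttons into a scrollable WrapPanel and see how the app behaves.
static UIElement BuildButtonStressTest(int count)
{
    var panel = new WrapPanel();
    for (int i = 0; i < count; i++)
    {
        panel.Children.Add(new Button { Content = $"Button {i}", Margin = new Thickness(2) });
    }
    return new ScrollViewer { Content = panel };
}

// e.g. window.Content = BuildButtonStressTest(10000);
```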
Win32 itself (and WinForms) isn't built for applications with hundreds of controls (believe me, I wrote such an app); at some point it just stops working. WPF, on the other hand, keeps working, even if it slows down a bit at some point.
So, if you do need to put a lot of controls on screen WPF is your best bet (unless you want to roll your own UI framework - and you think you can do better than the entire MS perf team).
Also, WPF has many advantages other than graphics acceleration: richer graphics, a drawing model that is easier to work with, animations, 3D and, my personal favorite, amazing data binding.
This will let you develop richer UIs faster - and I think that will make a much bigger difference than the painting algorithm used.
BTW, if you need to put hundreds of buttons on the screen, that is likely to be a bad user experience, and you may want to reconsider your UI design.