I have a ListView with various complex ListViewItem containers consisting of images, shadow effects, blur effects, etc. Rendering these containers in large numbers heavily reduces performance, especially since I'm using a blur overlay frame on top of the ListView. That is why, in this case, I'm setting CacheMode to BitmapCache (which improves performance by up to 15x the FPS).
<Border.CacheMode>
    <BitmapCache />
</Border.CacheMode>
The problem is that I use a WrapPanel and a ValueConverter to dynamically resize these containers so that they completely fill the WrapPanel in either horizontal or tile view. Apparently, that doesn't work well with caching: it produces severe lag/stalls (the frame rate drops to 0).
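For context, the converter is along these lines (a simplified sketch with illustrative names, not my exact code): it divides the panel's available width by a target tile width and stretches the items to fill each row.

using System;
using System.Globalization;
using System.Windows.Data;

// Hypothetical width converter: bound to the WrapPanel's ActualWidth,
// it returns an item width so that whole tiles exactly fill each row.
public class ItemWidthConverter : IValueConverter
{
    // Target minimum tile width; purely illustrative.
    public double MinItemWidth { get; set; } = 200;

    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value is double availableWidth && availableWidth > 0)
        {
            int columns = Math.Max(1, (int)(availableWidth / MinItemWidth));
            return availableWidth / columns; // stretch tiles to fill the row
        }
        return double.NaN; // fall back to auto sizing
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        => throw new NotSupportedException();
}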
Currently my 3 options are:
Disable caching (and live with roughly 15 FPS)
Disable resizing (looks unacceptable)
Disable caching and resizing only while the window is being resized (still performs badly, but it's the best option I have)
My questions:
Why do I get these huge performance drops while resizing with caching, compared to without caching?
Am I misusing caching or applying it incorrectly?
Is there a better way to fix this mess without compromises?
OK. After a lot of reading and experimenting, I figured out two things:
First, caching should not be used on elements that resize frequently, especially when there are many of them (I couldn't find out exactly why). So I basically cached fixed-size child elements instead.
Second, that reminded me of virtualization, which was exactly what I was missing but didn't know was supported by WPF list controls.
With some more optimizations, I can now resize the window buttery smooth.
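For anyone hitting the same wall, the gist of what I ended up with looks roughly like this (a sketch, not my exact code; note that the stock WrapPanel does not virtualize, so this assumes the default VirtualizingStackPanel or a third-party virtualizing wrap panel):

using System.Windows.Controls;
using System.Windows.Media;

static class ListSetup
{
    public static void Configure(ListView listView, Image fixedSizeThumbnail)
    {
        // Realize only the visible item containers and recycle them while scrolling.
        VirtualizingStackPanel.SetIsVirtualizing(listView, true);
        VirtualizingStackPanel.SetVirtualizationMode(listView, VirtualizationMode.Recycling);
        ScrollViewer.SetCanContentScroll(listView, true); // item scrolling is required for virtualization

        // Cache a child whose size never changes instead of the resizable container,
        // so the cached bitmap is not regenerated on every window resize.
        fixedSizeThumbnail.CacheMode = new BitmapCache();
    }
}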
Related
I'm working on a form that shows many of our company's products in a FlowLayout, but in some categories that hold many products, scrolling performance is noticeably affected. I switched to a List so I could leverage the performance benefits of using a renderer, but now I'm not happy with the layout, since there's a lot of wasted space, especially if the device is in landscape mode.
My next thought was to use a Table, which I believe also uses renderers to optimize the display of its data; but to mimic a FlowLayout, I'd need to get the preferred width of some placeholder component, then divide the container's width by that to get the number of columns, and then fill the model with that number of columns in mind. I'd also need to change all that if the device changes orientation.
Before I go down that rabbit hole, I'm wondering if I'm making things unnecessarily complicated for myself and if there's already something that I can use to achieve that goal. So to summarize, what would be the most efficient way to display data (that would be shown as buttons) sequentially from left to right, and top to bottom?
I wouldn't use FlowLayout for anything serious, although I doubt it's the reason for your performance issues; those probably relate to something else. There is a "how do I" video on performance which is a bit old but mostly still relevant: http://www.codenameone.com/how-do-i---improve-application-performance-or-track-down-performance-issues.html
In design terms, FlowLayout is hugely problematic since the elements are not aligned properly, which produces a UI that doesn't look good when spanning multiple rows. I suggest using GridLayout, which has an auto-fit mode. By calling setAutoFit(true) on a grid of even 1x1, all the elements will take up the available space uniformly based on screen size and will adapt to orientation changes.
I have a WPF application showing a large image (1024x1024) inside a ZoomableCanvas. On top of the image, I have some 400 ellipses. I'm handling touch events by setting IsManipulationEnabled="True" and handling the ManipulationDelta event. This works perfectly when zooming slowly. But as soon as I start zooming quickly, there is a sudden frame-rate drop resulting in a very jumpy zoom. I can hear the CPU fan speeding up when this occurs. Here are some screenshots from the WPF Performance Suite at the time when the frame-rate drops:
(Screenshots: Perforator and Visual Profiler)
Software renderer kicks in?
I'm not sure how to interpret these values, but I would guess that the graphics driver is overwhelmed by the amount of graphics to render, causing the CPU to take over some of the job. When zooming using touch, the scale changes by small fractions. Maybe this has something to do with it?
So far, I have tried a number of optimization tricks, but none seem to work. The only thing that seems to work is to lower the number of ellipses to around 100. That gives acceptable performance.
Certainly this is a common problem for zoomable applications. What can I do to avoid this sudden frame-rate drop?
UPDATE
I discovered that e.DeltaManipulation.Scale.X is set to 3.0.. in the ManipulationDelta event. Usually, it is around 1.01... Why this sudden jump?
UPDATE 2
This problem is definitely linked to multi-touch. As soon as I use more than one finger, there is a huge performance hit. My guess is that the touch events flood the message queue. See this and this report at Microsoft Connect. It seems the sequence Touch event -> Update bound value -> Render yields this performance problem. Obviously, this is a common problem and a solution is nowhere to be found.
WPF gurus out there, can you please show how to write a high-performance multi-touch WPF application!
Well, I think you've just reached the limits of WPF. The problem with WPF is that it tessellates vector graphics (on the CPU) every time they are rendered, probably to reduce video memory usage. So you can imagine what happens when you place 400 such ellipses.
If the ellipses are not resized, then you could try the BitmapCache option. That way the ellipses will be rendered just once at the beginning and then stored as textures. This will increase memory usage, but it should be OK, I think.
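In code that would be roughly (assuming the ellipses are reachable as a collection; names are illustrative):

using System.Collections.Generic;
using System.Windows.Media;
using System.Windows.Shapes;

static class EllipseCaching
{
    // Rasterize each ellipse once and keep it as a texture on the GPU.
    public static void CacheEllipses(IEnumerable<Ellipse> ellipses)
    {
        foreach (var ellipse in ellipses)
            ellipse.CacheMode = new BitmapCache();
    }
}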
If your ellipses are resized, the previous technique won't work, as each ellipse will be re-rendered whenever it is resized, and it will be even slower because the textures get rewritten (HW IRTs in Perforator).
Another possibility is to design a special control that uses RenderTargetBitmap to render the ellipses to bitmaps and then displays them through an Image control. That way you control when the ellipses are rendered; you could even render them on parallel threads (don't forget about STA). For example, you could update the ellipse bitmaps only when the user interaction ends.
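A minimal sketch of that idea (illustrative names only; DPI and error handling omitted):

using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class EllipseRasterizer
{
    // Draw all ellipses into one bitmap, then show that bitmap via an Image control.
    public static Image RenderEllipsesToImage(IEnumerable<Point> centers, double radius, Size size)
    {
        var visual = new DrawingVisual();
        using (DrawingContext dc = visual.RenderOpen())
        {
            foreach (Point center in centers)
                dc.DrawEllipse(Brushes.OrangeRed, null, center, radius, radius);
        }

        var bitmap = new RenderTargetBitmap(
            (int)size.Width, (int)size.Height, 96, 96, PixelFormats.Pbgra32);
        bitmap.Render(visual);
        bitmap.Freeze(); // safe to hand over from a background STA thread

        return new Image { Source = bitmap, Width = size.Width, Height = size.Height };
    }
}

The bitmap would then be regenerated only when the user interaction ends, as suggested above.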
You can read this article about WPF rendering. I don't agree with the author's comparison of WPF with iOS and Android (both rely mainly on bitmaps, unlike WPF), but it gives a good explanation of how WPF performs rendering.
I'm currently working on a customer assignment related to performance problems in a WPF rich-client LOB application.
The problem is that the application runs very slowly and sluggishly. Data table handling in particular (scrolling, sorting, selection) is extremely slow and leaves the application unusable.
I analyzed the system state when a single tab containing a few textboxes, comboboxes and labels is opened and left idle (waiting for user input).
These are my findings:
All the rendering is calculated on the GPU
There are no performance heavy features such as animations, bitmap effects, transparency, etc.
When the tab is idle (only the cursor is blinking in the focused textbox; the rest of the tab is static and does not even contain any data), GPU usage runs up to 90%
GPU usage drops to 0 whenever the tab loses focus
GPU usage directly relates to the window size: a small window brings it down to a few percent, while full screen pushes it up to almost 100%
WPF Perforator tells me that WPF calculates the dirty region for the entire tab instead of only the blinking cursor
WPF Perforator reports dirty rect update rates larger than 20/sec on the idle tab and they directly correlate to GPU usage
My conclusion:
During development, a lot of custom code (layout, event handling, etc.) was introduced in order to fit WPF to the backend-driven architecture of the system as a whole. My guess is that some of this custom code has broken WPF's dirty-rect mechanism. That leads to far too much drawing activity and thus very high GPU usage, and these unnecessary activities cause the problems described above.
Now I am looking for advice on where to start my investigation. In other words: what are typical mistakes a developer can make that break the WPF dirty-rect update algorithm? Any input is highly appreciated.
Many thanks and best regards!
Manuel
Thanks for the input. Let me clarify "backend-driven": the UI is highly dynamic. A message from the backend defines the structure of the UI and the data to be displayed. Therefore, we do not have any XAML for the structure of the tabs, only C#.
In the meantime, I was able to solve the problem. I used Snoop and collapsed every element one by one while monitoring GPU usage. I found that there was a tiny pixel-shader effect (a DropShadowEffect) on one of the borders. As soon as I removed the effect, GPU usage dropped from 80% to 1%, and WPF drew correct dirty rectangles over small portions of the UI. Problem solved, case closed.
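For reference, a rough programmatic equivalent of that Snoop exercise (a sketch of mine, not part of the application) is to walk the visual tree and list every element with a pixel-shader Effect applied:

using System.Collections.Generic;
using System.Windows;
using System.Windows.Media;

static class EffectHunter
{
    // Yields every element in the visual tree that has an Effect (e.g. DropShadowEffect) set.
    public static IEnumerable<UIElement> FindElementsWithEffects(DependencyObject root)
    {
        int count = VisualTreeHelper.GetChildrenCount(root);
        for (int i = 0; i < count; i++)
        {
            DependencyObject child = VisualTreeHelper.GetChild(root, i);
            if (child is UIElement element && element.Effect != null)
                yield return element;
            foreach (UIElement nested in FindElementsWithEffects(child))
                yield return nested;
        }
    }
}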
Things that seem interesting to me:
1. The tremendous impact that this small effect has on the GPU usage.
2. That it breaks the dirty-rect calculation.
3. Since it was not a BitmapEffect but a pixel-shader Effect, I could not reveal it by disabling BitmapEffects in Perforator.
Thanks!
MM
I've written a simple game-like app in WPF. The number of objects drawn is well within WPF's capabilities: something like a few hundred ellipses and lines with simple fills. I have a DispatcherTimer that adjusts the positions of the objects at a fixed interval (every 1/60th of a second).
The code to compute the new positions can be quite intensive when there are lots of objects, and can fully load a processor. Whenever this occurs, WPF starts skipping frames, presumably trying to compensate for the "slowness" of my application.
What I would much rather have happen is for all the frames to be drawn anyway, only more slowly. Dropping frames does not gain any speed, because the visual updates were quick anyway.
Can I somehow force WPF to have my changes to the visuals be reflected on the screen regardless of whether WPF thinks it's a good idea?
Unfortunately, I don't think there's anything you can do about this, although I will happily be corrected! WPF is designed to be an application-creation framework, not a games library, so it prioritises application performance and "usability" over frame rate. This actually works very well when producing applications, as it allows you to use quite rich animations and effects while maintaining perceived performance on lower-end systems.
The only thing I think you might be able to try is to push your movement code's Dispatcher priority down slightly, to below Render (Loaded is the next one down), using something like:
this.Dispatcher.BeginInvoke(DispatcherPriority.Loaded, new Action(MoveMyStuff));
I don't have any kind of test harness to verify if that will help though.
This issue was fixed by using a Canvas with an OnRender override instead of creating and moving UIElements. This does mean that everything needs to be drawn by hand in OnRender, but it can now run at any FPS consistently, without skipping any frames.
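For completeness, the shape of that solution is roughly this (a sketch; the real drawing code is obviously more involved):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public class GameSurface : Canvas
{
    // Positions are updated by the game loop elsewhere; illustrative model only.
    public Point[] Objects { get; set; } = new Point[0];

    // Called once per DispatcherTimer tick after the positions have been updated.
    public void Redraw() => InvalidateVisual();

    protected override void OnRender(DrawingContext dc)
    {
        base.OnRender(dc);
        foreach (Point p in Objects)
            dc.DrawEllipse(Brushes.SteelBlue, null, p, 5, 5);
    }
}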
I always use a Canvas when laying out my visuals, usually because I will need to adjust a RenderTransform.TranslateTransform to animate something. A colleague recently told me that unless I explicitly need to animate, I should always use a StackPanel, because it is faster than positioning with a RenderTransform.TranslateTransform when laying out objects in the visual tree.
Is this true?
Anyone have any data either way?
I don't have any data on this, but if we're just talking about stacking, then using a TranslateTransform to achieve the exact positioning of each item seems extremely fragile. The items could be of different heights/widths, which could also change dynamically at runtime; and if the designer changes one by hand, they have to redo the translate transform for N other UI elements. Using a StackPanel means the Measure/Arrange phases will run, and no matter what size the items are, they will be laid out precisely.
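To make the trade-off concrete, here is a purely illustrative sketch of the two approaches for simple vertical stacking:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

static class StackingSketch
{
    // Canvas + manual TranslateTransform: the offsets must be recomputed
    // whenever any item's size changes, by the code or by the designer.
    public static Panel StackWithTransforms(FrameworkElement[] items)
    {
        var canvas = new Canvas();
        double y = 0;
        foreach (FrameworkElement item in items)
        {
            item.RenderTransform = new TranslateTransform(0, y);
            canvas.Children.Add(item);
            y += item.Height; // fragile: breaks if Height is auto (NaN)
        }
        return canvas;
    }

    // StackPanel: Measure/Arrange positions the items correctly whatever their size.
    public static Panel StackWithPanel(FrameworkElement[] items)
    {
        var panel = new StackPanel();
        foreach (FrameworkElement item in items)
            panel.Children.Add(item);
        return panel;
    }
}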