Efficiency of multiple canvas size increases in Silverlight

I'm writing a Silverlight app which does some real-time charting. Basically, I just have some polylines overlaid on a canvas. The user can record data for arbitrary amounts of time, so the width of the canvas increases as necessary. Since the canvas is wrapped inside a ScrollViewer, it can get quite large. I haven't seen any problems so far, but I haven't implemented the more computationally cumbersome pieces yet, so I'm trying to assess whether this approach will eventually cause problems.
Can anyone comment on the efficiency of this approach?
What are some tools/methods that I can use to assess efficiency?
Any other relevant information (I'm not an SL guru)?
Thanks

The canvas is just a coordinate-space for the elements that it contains -- there's no underlying bitmap, so increasing its size has no effect on performance or memory consumption.
You need only be concerned with the number of elements in the canvas, and in particular the number (and complexity) of elements in the visible area.

Related

What is the most efficient method of displaying many components using the FlowLayout?

I'm working on a form that shows many of our company's products in a FlowLayout, but for categories that hold many products, scrolling performance is noticeably affected. I switched to a List so I could leverage the performance benefits of using a renderer, but now I'm not happy with the layout, since there's a lot of wasted space, especially if the device is in landscape mode.
My next thought was to use a Table, which I believe also uses renderers to optimize the display of its data; but to mimic a FlowLayout, I'd need to get the preferred width of some placeholder component, then divide the container's width by that to get the number of columns, and then fill the model with that number of columns in mind. I'd also need to change all that if the device changes orientation.
Before I go down that rabbit hole, I'm wondering if I'm making things unnecessarily complicated for myself and if there's already something that I can use to achieve that goal. So to summarize, what would be the most efficient way to display data (that would be shown as buttons) sequentially from left to right, and top to bottom?
I wouldn't use FlowLayout for anything serious, although I doubt it's the reason for your performance issues; those probably relate to something else. There is a "How Do I?" video on improving performance which is a bit old but mostly still relevant: http://www.codenameone.com/how-do-i---improve-application-performance-or-track-down-performance-issues.html
In design terms, FlowLayout is hugely problematic, since the elements are not aligned properly, producing a UI that doesn't look good when it spans multiple rows. I suggest using GridLayout, which has a mode called auto-fit. By calling setAutoFit(true) on a grid of even 1x1, all the elements will take up the available space uniformly based on the screen size and adapt to orientation changes.

2D CAD application in WPF

I'm trying to write a CAD-like application in WPF (.NET 4.0) that needs to be able to display a lot of 2D points/lines. It will be used to display CAD plans of entire cities, with zoom, pan, rotate, and point snapping on mouseover.
Right now I use WPF exclusively. I read the objects from the CAD file, draw them into a StreamGeometry, use it as the geometry of a new Path, and add it to a Canvas, with several transforms.
My problem is that this solution doesn't scale well enough. It works fine with small CAD files, but when I want to display something like half a city (with houses and land boundaries), it becomes very slow.
I also tried to convert my CAD file to an image, but
- a resolution of 32000x32000 is sometimes not enough, and
- when zooming out, the lines are too thin.
In the end I need to be able to place this on a Canvas (2D/3D) as a background.
What are my best options here?
Thanks,
Niklas
WPF is not good for large 3D models; I'm afraid it is too slow. Your best bet is Direct3D or OpenGL.
However, even with the speed of Direct3D/OpenGL, you will still need to work out how to cull as many polygons/vertices as possible before rendering the scene if you are trying to show an entire city.
There is a large amount of information on this (generally under game development).
There are a few techniques, including frustum culling and near- and far-plane culling.
Also, since you probably have a static scene, you may be able to use binary space partitioning.
As I understand it, the subject is a 2D CAD system in WPF.
Great! I use it...
OpenGL and DirectX are always in an infinite OnDraw loop, so the CPU works all the time. WPF/Silverlight's retained 2D model is smarter.
Yes, the total number of elements (for example, primitives derived from Shape) must not be too large. But how many?
I tested my own app (Silverlight); WPF should be a bit faster, I hope...
Here are my 2D CAD results. Performance is still great. Each beam consists of multiple primitives.
Use a VirtualCanvas like this one from Chris Lovett.

Is it better to use bitmaps or XAML for graphic elements?

I am looking to style my application with some graphic elements: icons and other things.
Is it better, from a performance and best-practice point of view, to use vector graphics (XAML) or to turn my graphics into PNGs?
And Why?
I am aware of the fact that vector graphics are scalable... this is just a performance question about large XAML-based apps.
You have to weigh your own needs. If it's solely performance, then that depends on the number of images: with a large number, XAML would indeed be more performant; otherwise the difference is negligible.
But I have to say for sheer maintainability, especially since you're talking about icons and such, you're far better off with bitmaps and I'll tell you why. Anyone and their brother can edit an icon. You can't say the same with vector graphics. If you want to replace your icons at some point, you simply replace the image. You don't have to go through the hassle of either creating and/or finding vector images and then (most likely) having to convert them to XAML through an export filter. Additionally, there are literally millions of CC licensed icons in bitmap form that you could use for nothing more than attribution.
Yes, there are some hassles with bitmaps (such as some quirks dealing with the ActualWidth/ActualHeight) from time-to-time, but those are minor, in my opinion.
ADDED: Yi-Lun Luo from Microsoft stated back in 2008 that vectors are faster. With the release of version 3 in 2009, Silverlight can take advantage of the GPU, which makes vectors even faster if you enable GPU acceleration and also use BitmapCache. So from a purely performance standpoint, vectors would theoretically be faster.
Advantages of XAML over PNG:
Scaling - XAML drawings are made of vectors, so they can scale. Scaling beyond a factor of 2 can still cause issues (rounding errors when scaling down and too little detail when scaling up).
Dynamic coloring/animations - You can easily manipulate the colors and points or even curves in the drawing using animations.
Advantages of PNG over XAML:
Speed in loading/caching - PNG can be cached on the GPU, and it never takes more than about 4 bytes per pixel on disk (plus some overhead).
Pixel perfect - what the designer draws is shown in the app. This is a lot harder when using vectors.
Pick depending on your needs and on measurements of performance, load, and file sizes.

Determining if a polygon is inside the viewing frustum

Here are my questions. I heard that OpenGL ignores vertices which are outside the viewing frustum and doesn't consider them in the rendering pipeline. Recently I ran into a post that said you should check this yourself: if a point is not inside, it is your duty to find that out, not OpenGL's! Now:
Is this true of OpenGL? Does it detect that a point is not inside and avoid rendering it?
I am developing a grass scene which has about 4000 grasses drawn on rectangles. I get awful FPS, and the only solution I came up with was to decide which grasses are inside the viewport and render only those. My question is: what is the best way to find out which rectangles are inside the frustum and which are not?
Please consider that my question is not mainly about points but about rectangles. Also, I need to sort the grasses based on their distance, so it is better if the data stays in client-side memory.
Please let me know if there are any effective and real-time ways to find out if any given mesh is inside or outside the frustum. Thanks.
Even if it is true that OpenGL does not show polygons outside the frustum (like any other 3D engine), it still has to consider them to check whether they are inside or not, and the FPS drops. Usually some smart optimization algorithm is needed to avoid flooding the scene with invisible objects. Check, for example, BSP trees + PVS or portals as a starting point.
To check whether there is a bottleneck in the application, you can try gDEBugger. If nothing else is obviously wrong, optimizing in order to draw just the PVS (potentially visible set) is the way to go.
OpenGL won't render pixels ("fragments") outside your screen, so it has to clip somehow...
More precisely:
You submit your geometry
You make a draw call (glDrawArrays or glDrawElements)
Each vertex goes through the vertex shader, which computes the final position of the vertex in clip space. If you didn't write a vertex shader (= old OpenGL), the driver creates one for you.
The perspective division transforms these coordinates into Normalized Device Coordinates: each clip-space (x, y, z) is divided by w. Roughly, this means that the frustum of your camera is deformed to fit into a [-1,1]x[-1,1]x[-1,1] box.
Everything outside this box is clipped. This can mean completely discarding a triangle, or subdividing it if it crosses a clipping plane.
Each remaining triangle is rasterized into fragments
Each fragment goes through the fragment shader
So basically, OpenGL knows how to clip, but each vertex still has to go through the vertex shader. So submitting your entire world will work, of course, but if you can find a way not to submit everything, your GPU will be happier.
This is a tradeoff, of course. If you spend 10ms checking each and every patch of grass on the CPU so that the GPU has only the minimal amount of data to draw, it's not a good solution either.
If you want to optimize grass, I suggest culling big patches (5m x 5m or so). It's standard AABB-frustum testing; a minimal sketch follows.
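For reference, here is a sketch of that AABB-frustum test (the Vec3/Plane/AABB types and the inward-facing plane convention are assumptions; the six planes are typically extracted from the view-projection matrix):

```cpp
#include <array>

// Hypothetical minimal types; in a real app these come from your math library.
struct Vec3  { float x, y, z; };
struct Plane { float a, b, c, d; };   // ax + by + cz + d = 0, normal points inside
struct AABB  { Vec3 min, max; };

// Classic "positive vertex" test: for each frustum plane, take the box corner
// farthest along the plane normal. If even that corner is behind the plane,
// the whole box is outside and can be culled.
bool aabbInFrustum(const AABB& box, const std::array<Plane, 6>& frustum)
{
    for (const Plane& p : frustum) {
        Vec3 v;                                  // the corner most "inside" this plane
        v.x = (p.a >= 0.0f) ? box.max.x : box.min.x;
        v.y = (p.b >= 0.0f) ? box.max.y : box.min.y;
        v.z = (p.c >= 0.0f) ? box.max.z : box.min.z;
        if (p.a * v.x + p.b * v.y + p.c * v.z + p.d < 0.0f)
            return false;                        // fully outside this plane: cull
    }
    return true;                                 // inside or intersecting: draw it
}
```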
If you want to optimize a more generic model, you can investigate quadtrees for "flat" models, and octrees and BSP trees for more complex objects.
Yes, OpenGL does not rasterize triangles outside the viewing frustum. But this doesn't mean that relying on it is optimal for applications: the OpenGL implementation still has to transform the vertex coordinates (using the fixed pipeline or vertex shaders); only once it has the normalized coordinates does it know whether the triangle lies inside the viewing frustum.
This means that no pixels are rasterized in those cases, but the vertex data is processed all the same; it simply doesn't produce any fragments for a non-visible triangle!
The OpenGL extension ARB_occlusion_query may help you, but its discussion section makes this clear:
Do occlusion queries make other visibility algorithms obsolete?
No.
Occlusion queries are helpful, but they are not a cure-all. They
should be only one of many items in your bag of tricks to decide
whether objects are visible or invisible. They are not an excuse
to skip frustum culling, or precomputing visibility using portals
for static environments, or other standard visibility techniques.
For the question about sorting the meshes by depth: use the depth buffer. Essentially, a mesh fragment is rendered only if its distance from the viewpoint is less than that of the previously rendered fragment at the same position, which relieves you of sorting the meshes yourself. The depth buffer is essentially free, and it improves performance since it discards the farther fragments.
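In code that is just two pieces of standard GL state (assuming the context was created with a depth buffer):

```cpp
#include <GL/gl.h>

// One-time setup: with depth testing enabled, opaque meshes can be
// submitted in any order; the depth buffer keeps the nearest fragment.
void setupDepthTest()
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);    // keep fragments closer than what is already stored
}

// Per frame: clear depth together with color before drawing.
void beginFrame()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
```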
Yes. Like others have pointed out, OpenGL has to perform a lot of per-vertex operations to determine if it is in the frustum. It must do this for every vertex you send it. In addition to the processing overhead that must take place, keep in mind that there is also additional overhead in the transmission of those vertices from the CPU to the GPU. You want to avoid sending information to the GPU that it isn't going to use. Though the bandwidth between the CPU and GPU is quite good on modern hardware, there's still a limit.
What you want is a Scene Graph. Scene graphs are frequently implemented with some kind of spatial partitioning scheme, e.g., Quadtrees, Octrees, BSPTrees, etc etc. Spatial partitioning allows you to intelligently determine what geometries are visible. Instead of doing this on a per-vertex basis (like OpenGL is forced to do) it can eliminate huge spatial subsets of geometry at a time. When rendering a complex scene, the performance savings can be enormous.
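To illustrate where the per-subtree savings come from, here is a minimal sketch of the query side of a quadtree for a flat scene (the Rect type, integer object IDs, and the ground-plane footprint query are our assumptions; insertion and rebalancing are omitted):

```cpp
#include <vector>

// Hypothetical 2D types for a flat (e.g. terrain/grass) scene.
struct Rect { float minX, minZ, maxX, maxZ; };

inline bool overlaps(const Rect& a, const Rect& b)
{
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

// A quadtree node covering a square region of the ground plane. Objects are
// stored in the deepest node that fully contains them.
struct QuadNode
{
    Rect bounds;
    std::vector<int> objectIds;   // whatever payload your scene uses
    QuadNode* child[4] = {};      // null for leaves

    // Collect everything that may intersect 'region' (e.g. the frustum's
    // footprint on the ground). Whole subtrees are rejected in one test --
    // that is the saving over per-vertex culling.
    void query(const Rect& region, std::vector<int>& out) const
    {
        if (!overlaps(bounds, region)) return;
        out.insert(out.end(), objectIds.begin(), objectIds.end());
        for (const QuadNode* c : child)
            if (c) c->query(region, out);
    }
};
```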

How can I manage a cache texture in OpenGL?

I am writing a text renderer for an OpenGL application. Size, colour, font face, and anti-aliasing can be twiddled at run time (and so multiple font faces can appear on the screen at once). There are too many combinations to allocate one texture to each combination of string and attributes. However, only a small subset of the entire database of strings will be on the screen at any given time.
This leads me into the opportunity to create a cache for the strings that are being printed frame after frame. It has been mandated that I use only one texture for the entire operation, as creating a cache of many textures would incur a texture swapping penalty for every different string printed from the cache.
So I have before me a 2048x2048 texture, into which I can place whatever strings I can fit as they are being requested by the application for caching purposes. I have quickly realized that tracking the free space available in a two dimensional space is not trivial.
I have been looking at algorithms like Best Fit and Next Fit, but those seem to be suited to 1D spaces.
How can I manage this cache texture in OpenGL?
Edit: I have since learned that this is an instance of the "2D packing problem".
What you have is the bin-packing problem.
Bad news first: it's NP-hard, so it's not worth trying to find the optimal solution.
I've done this kind of texture caching for fonts as well. I didn't cache entire words, just the glyph images. That makes things a lot easier, because all the images are roughly square. A simple grid-based approach to keeping track of the texture memory worked pretty well.
When a glyph was larger than one of my grid boxes, I just allocated two or more boxes using a brute-force search (it didn't happen that often). When I couldn't find any suitable block, I just randomly removed some glyphs from the cache to free up space.
That was much easier than maintaining a least-recently-used cache, and it performed nearly as well.
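A minimal sketch of that grid bookkeeping (box size, texture size, and the eviction details are assumptions):

```cpp
#include <cstdlib>
#include <vector>

// Toy version of the grid approach: the texture is cut into fixed-size boxes;
// each box is either free or holds one glyph.
struct GlyphGridCache
{
    static const int kTexSize = 2048;
    static const int kBox     = 64;              // one glyph per 64x64 box
    static const int kCols    = kTexSize / kBox;
    std::vector<bool> used = std::vector<bool>(kCols * kCols, false);

    // Returns the box index to upload into, evicting a random glyph if full.
    int allocate()
    {
        for (int i = 0; i < (int)used.size(); ++i)
            if (!used[i]) { used[i] = true; return i; }
        int victim = std::rand() % (int)used.size();  // random eviction, as above
        return victim;                 // caller re-uploads over the old glyph
    }

    // Pixel origin of a box, for glTexSubImage2D-style uploads.
    void origin(int box, int& x, int& y) const
    {
        x = (box % kCols) * kBox;
        y = (box / kCols) * kBox;
    }
};
```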
By the way, you will always waste some texture memory with such a cache. Unless you're very tight on memory, that shouldn't be a problem. You should use a small texture format (8-bit alpha works well for fonts).
Also: if you make your grid blocks a multiple of 8 pixels and you can drop your antialiasing to 4 bits, you can compress the glyphs into one of the compressed DXT/S3TC formats on the fly. That way the wasted texture space becomes a non-issue.
If you are short on texture memory, you could take a look at the "distance field" or "signed distance field" font-rendering technique. You could use a single 512x512 texture per font family and render perfectly antialiased text at any size.
For that algorithm you need to generate a special texture which stores, for each texel, the distance to the nearest glyph edge. Take a look at the original paper by the Valve guys: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf . There are some frameworks which use this; for instance, recent versions of Qt use signed distance fields for text rendering.
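The run-time half of the technique fits in a few shader lines: sample the distance texture and threshold around the 0.5 edge value. A sketch, with the GLSL embedded as a C++ string (the uniform and varying names are our assumptions):

```cpp
// The texture stores distance-to-edge; smoothstep around 0.5 gives an
// antialiased edge at any magnification, which is the point of the paper.
const char* kSdfFragmentShader = R"(
    #version 330 core
    uniform sampler2D uDistanceField;
    uniform vec4 uColor;
    in  vec2 vUV;
    out vec4 fragColor;
    void main() {
        float d = texture(uDistanceField, vUV).r;   // 0.5 = glyph edge
        float w = fwidth(d);                        // ~1 pixel in distance units
        float alpha = smoothstep(0.5 - w, 0.5 + w, d);
        fragColor = vec4(uColor.rgb, uColor.a * alpha);
    }
)";
```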
I have opted for a simple approach. Divide the texture into variable-height rows. The first texture placed in a row decides the height of that row. If a new texture fits an existing row by height, check whether there is enough width remaining and, if so, place it there; otherwise start a new row. If a new row cannot be started, do not cache the string.
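A minimal sketch of that row-based allocator (the bookkeeping names are ours; the 2048x2048 size comes from the question):

```cpp
#include <vector>

// One "shelf" is a variable-height row; its height is fixed by the first
// rectangle placed in it, exactly as described above.
struct Shelf { int y, height, widthUsed; };

class ShelfPacker
{
public:
    explicit ShelfPacker(int size = 2048) : size_(size), nextY_(0) {}

    // Try to place a w x h rectangle; returns false if it can't be cached.
    bool place(int w, int h, int& outX, int& outY)
    {
        for (Shelf& s : shelves_) {                  // existing row that fits?
            if (h <= s.height && s.widthUsed + w <= size_) {
                outX = s.widthUsed; outY = s.y;
                s.widthUsed += w;
                return true;
            }
        }
        if (nextY_ + h <= size_ && w <= size_) {     // open a new row
            shelves_.push_back({nextY_, h, w});
            outX = 0; outY = nextY_;
            nextY_ += h;                             // first entry fixes row height
            return true;
        }
        return false;                                // texture full: don't cache
    }

private:
    int size_, nextY_;
    std::vector<Shelf> shelves_;
};
```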
