Do empty transforms take up much CPU power? - mobile

So, long story short: I'm developing a mobile game set in a city, and there are a lot of objects being brought in and out of play as the player moves about.
I have tried numerous methods for seamlessly loading in/out objects.
First I tried instantiating and destroying objects for loading/unloading. This caused noticeable spikes even when loading something as simple as a generic 3D box.
Second, I moved the Instantiate/Destroy calls into their own coroutines; this made the spikes less severe, but they were still noticeable.
Third, I pre-instantiated all the objects I'd need and kept the ones not in play deactivated (SetActive(false)). It turned out that setting active to true (even inside a coroutine) performed worse than instantiating the objects.
So I finally arrived at my last idea for loading. I preload all the needed objects, then manually go through each one's children and disable every component that could use CPU: scripts, renderers, colliders, audio sources, particle systems, rigidbodies (set to isKinematic = true), and so on, leaving only an object with child transforms. Now I can finally enable an object (re-enable all its components) and the game has no performance spikes.
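For illustration, a minimal sketch of that idea, assuming the objects are pre-instantiated elsewhere (PooledObject and SetInPlay are hypothetical names, not from the original post):

using UnityEngine;

// Toggles the CPU-using components instead of the whole GameObject,
// so the object "unloads" down to bare transforms.
public class PooledObject : MonoBehaviour
{
    Behaviour[] behaviours;   // scripts, audio sources, animators, ...
    Renderer[] renderers;
    Collider[] colliders;
    Rigidbody[] rigidbodies;

    void Awake()
    {
        behaviours  = GetComponentsInChildren<Behaviour>(true);
        renderers   = GetComponentsInChildren<Renderer>(true);
        colliders   = GetComponentsInChildren<Collider>(true);
        rigidbodies = GetComponentsInChildren<Rigidbody>(true);
        SetInPlay(false); // start "unloaded": only the transforms remain
    }

    public void SetInPlay(bool inPlay)
    {
        foreach (var b in behaviours)
            if (b != this) b.enabled = inPlay; // keep this controller itself enabled
        foreach (var r in renderers)  r.enabled = inPlay;
        foreach (var c in colliders)  c.enabled = inPlay;
        foreach (var rb in rigidbodies) rb.isKinematic = !inPlay;
    }
}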
However, this last solution has its own drawback: if I preload too many objects, the game's FPS is significantly impacted, even though nothing is enabled inside the objects besides their transforms.
So my question is: does having many (non-moving) transforms in the scene cause a significant hit to CPU usage? And if so, what is the best way to do continuous loading/unloading of game objects on mobile?

Use a component called LOD Group. You can assign your low-poly models to it, and it will enable and disable the mesh renderers according to the distance from your camera.
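For reference, a LODGroup can also be set up from code; a minimal sketch (the renderer arrays are assumptions, to be filled in the Inspector):

using UnityEngine;

public class LodSetup : MonoBehaviour
{
    public Renderer[] highPolyRenderers;
    public Renderer[] lowPolyRenderers;

    void Start()
    {
        var lodGroup = gameObject.AddComponent<LODGroup>();
        lodGroup.SetLODs(new LOD[]
        {
            new LOD(0.5f, highPolyRenderers), // used while the object covers more than 50% of screen height
            new LOD(0.1f, lowPolyRenderers)   // used down to 10%; culled entirely below that
        });
        lodGroup.RecalculateBounds();
    }
}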

Related

Artifacts and problems with shadows of directional light & point light

So I'm using React and Three Fiber, as well as the drei library and cannon for physics.
I am making an apartment viewer as a personal project that you can walk around in; so far everything works fine. To make it possible later on to load the apartment model from a database (and to make the creation process easier for multiple models), the transforms for the light switches, activatable point lights, and the apartment's collision boxes are copied from objects within the glTF file.
To prevent the collision boxes from rendering or otherwise affecting the rendering process, they are made invisible. (I also tried setting child.castShadow = false, with no effect.)
For some reason the shadows are corrupted: unwanted point light shadows.
I also tried changing some properties of the original child: Object3D objects in the Apartment component (the only place where the boxes could affect the shadows), without any change in the results.
Another thing is that there no longer seem to be any options for adjusting shadows. Properties like shadowBias, shadowMapWidth, etc. are deprecated; hovering over them I get something like "#deprecated — Use shadow.mapSize.width instead". At least I couldn't find a solution to that, partly because the Three Fiber documentation isn't that extensive. Simply using them doesn't work either.

Polygon limit for 3D objects used for Three.js

We are in the process of developing our first website made using Three.js. It of course uses a collection of 3D models, some of which are fairly busy cityscapes. We made them low poly, and are avoiding animation at this point, but would like to add some moving elements eventually.
My 3D designer is more used to working with objects used in Unity games, and he says that the industry standard is to keep each model below 100K polygons. Is there a similar limit that is typically used for Three.js?
In my mind the issue should rather be focused on file size, so we are of course trying to optimize that. I was just wondering whether anyone knows if there are other concerns to take into consideration in terms of poly count?

Fanned-out cards in WPF - performance issues

In my WPF app I have a control representing a pack of 20 cards (each about 150x80 px) that fan out in an arc, so they're all slightly overlapping in the centre of the arc. When the control is added there's an animation to fan them out.
After that, the fan/control can be moved around, and when the user hovers over a card it expands and then goes back to normal size when they move off it.
This all works fine but has a noticeable effect on performance: everything is very jerky, presumably because, whenever other things move, all the overlapping content and transforms in the control are constantly recalculated and redrawn.
Any suggestions for how I can improve performance while still keeping individual cards in the fan responsive?
To find the source of the slowdown, you need to profile.
Try to find out whether WPF is switching back to software rendering.
After that, try running on a different computer with better hardware and a better graphics card.
If it doesn't get any better, there might be errors in your app.
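As a quick first check before full profiling, you can ask WPF what level of hardware acceleration is available. A sketch (note this reports the machine's capability, not what a particular window is currently doing):

using System.Windows.Media;

int renderTier = RenderCapability.Tier >> 16;
// 0: no hardware acceleration (pure software rendering)
// 1: partial hardware acceleration
// 2: full hardware acceleration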

Prevent WPF stutter / dropped frames

I've written a simple game-like app in WPF. The number of objects drawn is well within WPF capabilities - something like a few hundred ellipses and lines with simple fills. I have a DispatcherTimer to adjust the positions of the objects from time to time (1/60th of a second).
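(For context, the update loop is presumably something like the following sketch, where MoveMyStuff is a hypothetical method that recomputes the positions:)

using System;
using System.Windows.Threading;

var timer = new DispatcherTimer
{
    Interval = TimeSpan.FromSeconds(1.0 / 60.0) // ~60 updates per second
};
timer.Tick += (s, e) => MoveMyStuff();
timer.Start();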
The code to compute the new positions can be quite intensive when there are lots of objects, and can fully load a processor. Whenever this occurs, WPF starts skipping frames, presumably trying to compensate for the "slowness" of my application.
What I would much rather happen is for all the frames to be drawn anyway, just more slowly. Dropping frames doesn't add any speed, because the visual updates themselves were pretty quick anyway.
Can I somehow force WPF to have my changes to the visuals be reflected on the screen regardless of whether WPF thinks it's a good idea?
Unfortunately I don't think there's anything you can do about this, although I will happily be corrected! WPF is designed to be an application creation framework, not a games library, so it prioritises application performance and "usability" over framerate. This actually works very well when producing applications, as it allows you to use quite rich animations and effects while maintaining perceived performance on lower-end systems.
The only thing I think you might be able to try is push your movement code's Dispatcher priority down slightly to below Render (Loaded is the next one down) using something like:
this.Dispatcher.BeginInvoke(DispatcherPriority.Loaded, new Action(MoveMyStuff)); // the method group must be wrapped in a concrete delegate
I don't have any kind of test harness to verify if that will help though.
This issue was fixed by using a Canvas with an OnRender override instead of creating and moving UIElements. This does mean that everything needs to be drawn by hand in OnRender, but it can now run at any FPS consistently, without skipping any frames.
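A minimal sketch of that approach (GameCanvas and Positions are hypothetical names):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public class GameCanvas : Canvas
{
    public Point[] Positions = new Point[0]; // game state, updated by the timer

    protected override void OnRender(DrawingContext dc)
    {
        base.OnRender(dc);
        // Everything is drawn by hand here instead of moving UIElements around.
        foreach (var p in Positions)
            dc.DrawEllipse(Brushes.OrangeRed, null, p, 5.0, 5.0);
    }
}

// After each position update, call InvalidateVisual() on the canvas to schedule a redraw.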

WPF render performance with BitmapSource

I've created a WPF control (inheriting from FrameworkElement) that displays a tiled graphic that can be panned. Each tile is 256x256 pixels at 24bpp. I've overridden OnRender. There, I load any new tiles (as BitmapFrame), then draw all visible tiles using drawingContext.DrawImage.
Now, whenever there are more than a handful new tiles per render cycle, the framerate drops from 60fps to zero for about a second. This is not caused by loading the images (which takes in the order of milliseconds), nor by DrawImage (which takes no time at all, as it merely fills some intermediate render data structure).
My guess is that the render thread itself chokes whenever it gets a large number (~20) of new BitmapSource instances (that is, ones it had not already cached). Either it spends a lot of time converting them to some internal DirectX-compatible format or it might be a caching issue. It cannot be running out of video RAM; Perforator shows peaks at below 60MB, I have 256MB. Also, Perforator says all render targets are hardware-accelerated, so that can't be it, either.
Any insights would be appreciated!
Thanks in advance
Daniel
@RandomEngy:
BitmapScalingMode.LowQuality reduced the problem a little, but did not get rid of it. I am already loading tiles at the intended resolution. And it can't be the graphics driver, which is up-to-date (Nvidia).
I'm a little surprised to learn that scaling takes that much time. The way I understood it, a bitmap (regardless of its size) is just loaded as a Direct3D texture and then hardware-scaled. As a matter of fact, once the bitmap has been rendered for the first time, I can change its rotation and scale without any further freezes.
It's not just with a large number of images. Just one large image is enough to hold up rendering until it has been loaded in, and that can be quite noticeable when your image dimensions get up into the thousands.
I do agree with you that it's probably the render thread: I did a test and the UI thread was still happily dispatching messages while this render delay was taking place from trying to display a fully pre-cached BitmapImage.
It must be doing some sort of conversion or preparation on the image, like you were speculating. I've tried to mitigate this in my app by "rendering" but hiding the image, then revealing it when I need to show it. However this is less than ideal because the rendering freezes happen anyway.
(Edit)
Some followup: After a discussion on the MS WPF alias I found what was causing the delays. On my Server 2008 machine it was a combination of old video drivers that don't support the new WDDM driver model and a delay for resizing the image.
If the source image size differs from the display size, the render thread is delayed before the image shows up. By default an image is set to the highest scaling quality, but you can change the scaling options for rendering by calling RenderOptions.SetBitmapScalingMode(uiImage, BitmapScalingMode.LowQuality);. Once I did that, the mysterious freeze before displaying an image went away. An alternative, if you don't like the quality drop from scaling, is to load the BitmapImage with DecodePixelWidth/Height equal to the size it will be displayed at. If you then load the BitmapImage on a background thread, you should have no delay in displaying it.
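A minimal sketch of that loading pattern, run on a background thread (tileUri and displayWidth are assumptions):

using System.Windows.Media.Imaging;

var bmp = new BitmapImage();
bmp.BeginInit();
bmp.UriSource = tileUri;
bmp.DecodePixelWidth = displayWidth;        // decode at the size it will be displayed
bmp.CacheOption = BitmapCacheOption.OnLoad; // decode fully now, not lazily on render
bmp.EndInit();
bmp.Freeze(); // required before handing the bitmap to the UI/render thread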
Also try these:

// iVis is declared in XAML:
// <Image x:Name="iVis" UseLayoutRounding="True" SnapsToDevicePixels="True" />
iVis.Stretch = Stretch.None;
RenderOptions.SetBitmapScalingMode(iVis, BitmapScalingMode.NearestNeighbor);
RenderOptions.SetEdgeMode(iVis, EdgeMode.Aliased);
VisualBitmapScalingMode = BitmapScalingMode.NearestNeighbor; // protected member, available when subclassing Visual
iVis.Source = /* your bitmap source */;
I was having some performance trouble when using a huge number of alpha ("A") channel colors; waiting until after the image had rendered before scaling it worked much better for me.
Also, as you said, you're using a tiled graphic?
You would usually use a TileBrush and simply set it as the Brush on your FrameworkElement. If you are animating the tiles or adding new ones dynamically, you can also generate your brushes and apply them to your objects manually as you go; be sure to Freeze them if you can. Also, VisualBitmapScalingMode is a property of any Visual.
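For example, something like this sketch (tileBitmap and myPanel are assumptions; ImageBrush is the concrete TileBrush for bitmaps):

using System.Windows;
using System.Windows.Media;

var brush = new ImageBrush(tileBitmap)
{
    TileMode = TileMode.Tile,
    Viewport = new Rect(0, 0, 256, 256),     // one 256x256 tile
    ViewportUnits = BrushMappingMode.Absolute
};
brush.Freeze(); // frozen brushes are cheaper for the render thread
myPanel.Background = brush; // myPanel: any Panel or Control with a Background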
