Too many sprites to load on mobile devices in Unity3D

My 2D game has 3 characters, and each character has 7 types of animation made up of many frames: around 1050 frames (sprites) in total across all 3 characters' animations. I pack these sprites into atlases, but right now that comes to around 60 2048x2048 atlases in a compressed format.
On some of the devices I've tested, the game doesn't load, but on others it does. I think it's because there are too many atlases and some devices can't fit them all into RAM.
I've tried making the frames as small as possible before packing them, but as I said, the atlas count is still 60, and I can't reduce the number of frames or their size any further.
What do you think, and what can I do?

There is a limit on how much image data a mobile device can hold in RAM. It's a bad idea to import your sprites at 2048x2048 and then have Unity compress them down to something like 512x512. Instead, reduce the size of those sprites in an image-editing program, then join 16 of the 512x512 images into a single 2048x2048 sheet. Now you load 1 image into mobile RAM instead of 16.
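To see why 60 atlases can fail to load, here is a rough back-of-envelope sketch in Python. The numbers are illustrative: the 4 bits/pixel figure assumes an ETC/PVRTC-style compressed format, and mipmaps are ignored.

```python
# Rough memory estimate for 60 atlases of 2048x2048 (illustrative numbers).
ATLASES = 60
SIDE = 2048

# Uncompressed RGBA32: 4 bytes per pixel.
rgba_bytes = SIDE * SIDE * 4          # 16 MiB per atlas
# Typical mobile compression (ETC/PVRTC-style): ~4 bits = 0.5 bytes per pixel.
compressed_bytes = SIDE * SIDE // 2   # 2 MiB per atlas

print(ATLASES * rgba_bytes // 2**20, "MiB uncompressed")      # 960 MiB
print(ATLASES * compressed_bytes // 2**20, "MiB compressed")  # 120 MiB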

Related

Huge array of libSDL textures

I am developing an app that presents the user with a potentially very large user-generated image gallery, 10 or so images at a time.
The app is to be implemented in C using libSDL and 2D textures for accelerated rendering.
The overall gist of it in pseudocode is:
while cycle < MAX_CYCLES
    while i < MAX_STEPS
        show a gallery of 10 image thumbnails
        while (poll events)
            if event == user has pushed next
                break
        i++
    scramble image galleries using a genetic algorithm
    cycle++
I could load every image from disk at initialization time, creating all the required textures, so image presentation is fast. But of course initialization would be slow, and it would potentially allocate a huge array of textures.
I will scale down the images for presentation, so this could mitigate the problem, but the total size of the collection depends on user preference. Surely I can cap the maximum value, but it cannot be small.
I was thinking about unloading every unused image at every step of every cycle, using SDL_FreeSurface and SDL_DestroyTexture. This would mean reloading the data from disk, recreating the surface and recreating the texture each time. Is this a viable approach?
Also I understand that SDL textures are stored in GPU memory, so the amount of available memory on the card should be my main concern. Am I right?
In summary, is there a recommended method to deal with this type of situation?
I would always keep 3 slides in memory:
Prev - Current - Next
While presenting the current slide, preload the next slide and unload slide number (Current - 2).
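A minimal sketch of that three-slide window, in Python for brevity; `load_texture` and `free_texture` are hypothetical stand-ins for your disk load + SDL texture creation and SDL_DestroyTexture/SDL_FreeSurface calls:

```python
# Sliding window of slides: only prev/current/next stay resident.
loaded = {}  # slide index -> "texture" (stand-in for an SDL texture)

def load_texture(i):
    return f"texture-{i}"  # stand-in for disk load + texture creation

def free_texture(tex):
    pass  # stand-in for SDL_DestroyTexture / SDL_FreeSurface

def show_slide(current, total):
    want = {i for i in (current - 1, current, current + 1) if 0 <= i < total}
    for i in want - loaded.keys():      # preload prev/next
        loaded[i] = load_texture(i)
    for i in loaded.keys() - want:      # unload everything else
        free_texture(loaded.pop(i))

show_slide(0, 10); show_slide(1, 10); show_slide(2, 10)
print(sorted(loaded))  # [1, 2, 3]
```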
Also I understand that SDL textures are stored in GPU memory, so the amount of available memory on the card should be my main concern. Am I right?
Not quite. If the GPU driver deems it necessary, it will swap unused texture data out to system RAM.
For example, if you're presenting 10 images and thus have 30 images present in memory, then at 2K with alpha (1920 x 1080 x 4 bytes per image) you will need approx. 250 MB.
As long as you don't run on an embedded system (or very old, outdated system), this shouldn't be a big concern.
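The ~250 MB figure checks out; a quick, purely illustrative calculation (assuming uncompressed RGBA at 4 bytes per pixel):

```python
# 30 resident images at 1920x1080, 4 bytes per pixel (RGBA).
per_image = 1920 * 1080 * 4   # 8,294,400 bytes, about 7.9 MiB each
total = 30 * per_image        # 248,832,000 bytes
print(round(total / 1e6))     # ~249 MB
```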

What do the channels do in CNN?

I am a newbie with CNNs and I want to ask: what do the channels do, in SSD for example? Why do they exist? For example, in 18x18x1024, what does the third number mean?
Thanks for any answer.
The dimensions of an image can be represented by 3 numbers. For example, a color image in the CIFAR-10 dataset has a height of 32 pixels and a width of 32 pixels and is represented as 32 x 32 x 3. Here 3 is the number of channels in your image. Color images have 3 channels (usually RGB), while a grayscale image has 1.
A CNN learns features of the images you feed it, with increasing levels of complexity, and these features are represented by the channels. The deeper you go into the network, the more channels you have representing these complex features. The network then uses these features to perform object detection.
In your example, 18x18x1024 means your input image is now represented with 1024 channels, where each channel encodes some complex feature/information about the image.
Since you are a beginner, I suggest you look into how CNNs work in general, before diving into object detection. A good start would be image classification using CNNs. I hope this answers your question. Happy learning!! :)
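To make the channel count concrete, here is a small illustrative sketch (the layer sizes are made up, not taken from SSD itself): each filter in a conv layer spans all input channels and produces exactly one output channel, so the output channel count equals the number of filters.

```python
# Illustrative: how the channel count of a conv layer's output relates
# to its number of filters. Sizes are hypothetical, not from SSD.
def conv_output_shape(h, w, c_in, num_filters, kernel=3, stride=1, pad=1):
    """Each filter spans all c_in input channels and yields ONE output
    channel, so the output channel count equals num_filters."""
    h_out = (h + 2 * pad - kernel) // stride + 1
    w_out = (w + 2 * pad - kernel) // stride + 1
    return (h_out, w_out, num_filters)

# A grayscale 32x32 image (1 channel) through 64 filters:
print(conv_output_shape(32, 32, 1, 64))      # (32, 32, 64)
# A deep layer growing toward something like 18x18x1024:
print(conv_output_shape(18, 18, 512, 1024))  # (18, 18, 1024)
```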

Big SCNGeometry SceneKit for iOS

I am working on a Cocoa/iOS project.
I have a common Swift class which manages a SceneKit scene.
I want to draw a big terrain (about 5000x5000 points).
I have 2 triangles per 4 points, and I created one SCNGeometry object for the whole terrain (is that a good idea?).
I decided to store the points in a 6-Float structure (x, y, z and r, g, b). I tried creating an empty array and also allocating a big array at the beginning: I got the same issue either way.
I use the Int datatype for the indices array.
The project works fine on Cocoa, but I get memory errors on iOS. I think this is because of the need for a big, contiguous vertex array.
I tried creating the geometry as several chunks, but SceneKit does not like it if you erase a previous buffer.
What is the best practice in this case?
Is there a way to store the vertices in mass storage instead of in-memory arrays/buffers?
Thanks
So...twice as many terrain points as there are pixels on a shiny new 5K display? That's a huge amount of memory to be using at once on iOS. And you won't be able to see that resolution on an iOS device.
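A quick estimate (illustrative Python) of why a single contiguous buffer for that terrain hurts on iOS, and why 64-bit Int indices make it worse:

```python
POINTS = 5000 * 5000                   # 25,000,000 vertices
vertex_bytes = POINTS * 6 * 4          # 6 floats (x,y,z,r,g,b) x 4 bytes
print(vertex_bytes)                    # 600,000,000 bytes, ~572 MiB

triangles = 4999 * 4999 * 2            # 2 triangles per grid quad
index_bytes_int64 = triangles * 3 * 8  # Swift Int is 8 bytes on 64-bit
index_bytes_int32 = triangles * 3 * 4  # UInt32 indices halve that
print(index_bytes_int64, index_bytes_int32)
```

That is roughly 1.8 GB in a couple of allocations with 64-bit indices; switching to UInt32 (or UInt16 per tile) and tiling the terrain brings each allocation down to something iOS will actually grant.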
So how about:
Break your 25-million-point terrain into smaller tiles, each in its own SCNNode. Loop through the tiles: create one SCNNode, throw away the 6-Float array for that tile, and move on to the next.
Use SCNLevelOfDetail to produce much simpler versions of those nodes, for display when they're very far away.
Do the construction work on OS X. Archive your scene (NSSecureCoding). Bundle that scene into the iOS app.
Consider using reference nodes in your main SCNScene, and archive each tile as a separate SCNScene file.
Hopefully you're already using triangle strips, not triangles, to build your geometry.

Libgdx texture atlas vs files

So in my game I have more than 32000 images (size: 32x32). How should I store this many images? Which is better: loading them as separate files, or as one big texture atlas?
When you load a texture (image) in your game, it gets uploaded to the GPU.
So let's say you have 32,000 images and you load them separately: that means 32,000 textures loaded on the GPU. That's harsh and will eat loads of memory.
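For scale, here is a Python back-of-envelope sketch; it assumes 2048x2048 atlases (a common max texture size) and no padding between sprites:

```python
import math

SPRITES = 32000
atlas_side, sprite_side = 2048, 32
per_atlas = (atlas_side // sprite_side) ** 2   # 64 x 64 = 4096 sprites
atlases = math.ceil(SPRITES / per_atlas)
print(per_atlas, atlases)  # 4096 sprites per atlas -> 8 atlases
```

Eight texture binds instead of 32,000 separate textures, which is the whole point of the atlas.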
Now let me explain what a TextureRegion is.
A TextureRegion takes an area from a Texture according to the dimensions you provide. The advantage is that you don't have to load textures again and again, and the bigger advantage is that you don't have to load each and every image onto the GPU: you load one big texture and extract sub-regions (TextureRegions) from it.
Because you want to use TextureRegions, it would be hard to know the dimensions of each and every sub-image in order to load them from the texture sheet. So what we do is pack the textures into one bigger texture using TexturePacker (an application). It packs every texture into one image AND creates a .pack file. When you load the .pack file, it is loaded using the TextureAtlas class.
For example imagine a pokemon pack file which has all the pokemons into it.
TextureAtlas pokemonFrontAtlas = new TextureAtlas(Gdx.files.internal("pokemon//pokemon.pack"));
Now suppose you've packed your files using TexturePacker and you want to load an image (Texture) whose file name is "SomePokemon".
Now to get a particular TextureRegion from it, you do
pokemonFrontAtlas.findRegion("SomePokemon")
findRegion(String name) returns the TextureRegion with that name from the TextureAtlas.
A TextureAtlas class contains a collection of AtlasRegion class which extends TextureRegion class.
See the TextureAtlas Javadocs for more details.

Most performant image format for SCNParticles?

I've been using 24-bit .png with alpha, exported from Photoshop, and just tried a .psd, which worked fine with OpenGL ES, but Metal didn't see the alpha channel.
What's the absolutely most performant texture format for particles within SceneKit?
Here's a sheet to test on, if needs be.
It looks white... right-click and save-as in the blank space. It's an alpha-heavy set of rings. You can probably barely make them out if you squint at the screen:
exaggerated example use case:
https://www.dropbox.com/s/vu4dvfl0aj3f50o/circless.mov?dl=0
// Additional points for anyone who can guess the difference between the left and right rings in the video.
Use a grayscale/alpha PNG, not an RGBA one. Since it uses 16 bits per pixel (8+8) instead of 32 (8+8+8+8), the initial texture load will be faster and it may (depending on the GPU) use less memory as well. At render time, though, you’re not going to see much of a speed difference, since whatever the texture format is it’s still being drawn to a full RGB(A) render buffer.
There's also PVRTC, which can get you down as low as 2-4 bits per pixel, but I tried Imagination's tool on your image and even the highest-quality settings caused a bunch of artifacts like the below:
Long story short: go with a grayscale+alpha PNG, which you can easily export from Photoshop. If your particle system is hurting your frame rate, reduce the number and/or size of the particles. In this case you might be able to get away with layering a couple of your particle images on top of each other in the source texture atlas, which may not be too noticeable if you pick ones that differ enough in size.
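For a concrete comparison of the formats mentioned above (illustrative Python; the 1024x1024 sheet size is an assumption, not from the question):

```python
# Bytes needed for one 1024x1024 particle sheet in each format.
side = 1024
px = side * side
print(px * 4)   # RGBA8, 32 bpp: 4,194,304 bytes (4 MiB)
print(px * 2)   # grayscale+alpha, 16 bpp: 2,097,152 bytes (2 MiB)
print(px // 2)  # PVRTC 4 bpp: 524,288 bytes (0.5 MiB)
print(px // 4)  # PVRTC 2 bpp: 262,144 bytes (0.25 MiB)
```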
