In our iPad app we need to load FBX files (of size 1 MB to 100 MB) as asset bundles. I started testing with an FBX file of size 26 MB, which produces an asset bundle of size 17 MB. When I run the test scene that loads this asset bundle on the iPad, it takes around 1 minute to load, and then I get a memory warning. I monitored the memory using Xcode: usage climbs from around 1 MB to 65 MB, and then, just before the model shows up, it jumps to around 175 MB at once. Below is my code.
I have seen similar issues posted by other users, but I didn't see a proper solution in any of those threads. From what I read there, I think the memory increases when the asset bundle is decompressed, but I don't understand why it goes up to around 170 MB.
What can we do to reduce the memory usage?
Thanks
public class CachingLoad : MonoBehaviour {
    public string BundleURL;
    public string AssetName;
    public int version;

    void Start() {
        StartCoroutine(DownloadAndCache());
    }

    IEnumerator DownloadAndCache() {
        // Wait for the Caching system to be ready
        while (!Caching.ready)
            yield return null;
        BundleURL = "http://10.30.3.228:8080/TestDownload/assetBundle1.unity3d";
        // Load the AssetBundle from the cache if a copy with the same version exists,
        // otherwise download it and store it in the cache
        using (WWW www = WWW.LoadFromCacheOrDownload(BundleURL, version)) {
            yield return www;
            if (www.error != null)
                throw new Exception("WWW download had an error: " + www.error);
            AssetBundle bundle = www.assetBundle;
            if (AssetName == "")
                Instantiate(bundle.mainAsset);
            else
                Instantiate(bundle.Load(AssetName));
            // Unload the AssetBundle's compressed contents to conserve memory
            bundle.Unload(false);
        } // memory is freed from the web stream (www.Dispose() gets called implicitly)
    }
}
I am very new to backend development. Basically, I want to create a robust and simple application that accepts a zip file URL in the params, downloads the zip file from that URL, and finally extracts the zip and returns the bin file inside it. Note: the zip file size can range from 5 MB to 150 MB. I have tried implementing the described operation in the following manner.
package la.sample

import io.ktor.application.Application
import io.ktor.application.call
import io.ktor.client.HttpClient
import io.ktor.client.request.get
import io.ktor.http.HttpStatusCode
import io.ktor.response.respond
import io.ktor.response.respondFile
import io.ktor.routing.get
import io.ktor.routing.routing
import java.io.*

fun Application.startServer() {
    routing {
        get("/get-bin") {
            // Get the AWS URL from the query parameters
            val awsUrl = call.request.queryParameters.get("url") ?: "Error"
            // Download the zip file from the AWS URL
            val client = HttpClient()
            val bytes = client.get<ByteArray>(awsUrl)
            // Create a temp file on the server & write the zip file bytes into it
            val file = File(".", "data.zip")
            file.writeBytes(bytes)
            // Call a method to unzip the file
            unzipAndReturnBinFile()?.let {
                call.respondFile(it) // respond with the bin file
            } ?: run {
                call.respond(HttpStatusCode.InternalServerError)
            }
        }
    }
}
fun unzipAndReturnBinFile(): File? {
    var exitVal = 0
    // Shell out to unzip the downloaded file
    Runtime.getRuntime().exec("unzip data.zip -d data").let {
        exitVal += it.waitFor()
    }
    // Check whether the command executed successfully
    if (exitVal == 0) {
        var binFile: File? = null
        // Check whether the extracted files contain a .bin file
        File("data").listFiles().forEach {
            if (it.name.contains(".bin")) {
                binFile = it
            }
        }
        // Return the bin file, or null otherwise
        return binFile
    } else {
        throw Exception("Command shell execution failed.")
    }
}
The above code works fine on my local machine, irrespective of the zip file size. But when it is deployed to AWS, the code breaks with a java.lang.OutOfMemoryError if the zip or the bin file is larger than 100 MB. I would be very thankful if someone could suggest a proper way of handling large file operations on the backend, with the ability to handle hundreds of such concurrent calls. Thank you.
The Java heap size of my remote machine is around 1 GB.
Your problem is not in the unzipping procedure. Runtime.exec runs the command in a separate process, which uses only a minimal amount of your heap (essentially bookkeeping for the forked process).
The lines causing the OutOfMemoryError are these:
val bytes = client.get<ByteArray>(awsUrl)
val file = File(".", "data.zip")
file.writeBytes(bytes)
Each request buffers the entire download in memory, so it only takes 6 concurrent requests of 150 MB to exhaust your whole heap.
Instead of waiting for the file to fully download before saving it to disk, you should use a stream: every time a chunk of data arrives, write it to disk. That way the full size of the downloaded file is never held in RAM at once.
Use Apache commons-io, for example:
FileUtils.copyURLToFile(URL, File)
Or, if you would like more control over the procedure, try Ben Noland's answer:
https://stackoverflow.com/a/921408/4267015
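The chunked-copy idea behind that advice can be sketched in plain Java (this is my own sketch; the 8 KB buffer size and the `copyStream` name are assumptions, not part of the answer):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    /**
     * Copies the input to the output in fixed-size chunks, so at most one
     * buffer's worth of the download is ever held in memory at a time.
     */
    public static long copyStream(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192]; // one small chunk, not the whole file
        long total = 0;
        int read;
        while ((read = in.read(buffer)) > 0) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }
}
```

With something like this, you would open the HTTP response body as an InputStream and a FileOutputStream for data.zip, and pass both to copyStream; the heap cost per request drops from the size of the file to the size of the buffer.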
Based on @Naor's comment, I have updated the code to accept a multipart file and write every small chunk (part) to another file as soon as I receive it, without storing the entire payload in memory. That has solved the issue. Below is the updated code snippet.
val file = File(".", Constant.FILE_PATH)
call.receiveMultipart().apply {
    forEachPart {
        if (it is PartData.FileItem) {
            it.streamProvider().use { input ->
                file.outputStream().buffered().use { output -> input.copyToSuspend(output) }
            }
        }
        it.dispose()
    }
}
I wrote the following method:
/**
 * Downloads an arbitrary file to the cache asynchronously, if the current
 * platform has a cache path, or to the app home otherwise; if the file was
 * previously downloaded and is still available in the cache, it calls the
 * onSuccess callback immediately. More info:
 * https://www.codenameone.com/blog/cache-sorted-properties-preferences-listener.html
 *
 * @param url The URL to download.
 * @param extension You can leave it empty or null; however, iOS cannot play
 * videos without an extension (https://stackoverflow.com/q/49919858)
 * @param onSuccess Callback invoked on successful completion (on the EDT by
 * callSerially).
 * @param onFail Callback invoked on failure (on the EDT by callSerially).
 */
public static void downloadFileToCache(String url, String extension, SuccessCallback<String> onSuccess, Runnable onFail) {
    FileSystemStorage fs = FileSystemStorage.getInstance();
    if (extension == null) {
        extension = "";
    }
    if (extension.startsWith(".")) {
        extension = extension.substring(1);
    }
    String name = "cache_" + HashUtilities.sha256hash(url);
    if (!extension.isEmpty()) {
        name += "." + extension;
    }
    String filePath;
    if (fs.hasCachesDir()) {
        // this is supported by Android, iPhone and Javascript
        filePath = fs.getCachesDir() + fs.getFileSystemSeparator() + name;
    } else {
        // The current platform doesn't have a cache path (for example the Simulator)
        String homePath = fs.getAppHomePath();
        filePath = homePath + fs.getFileSystemSeparator() + name;
    }
    // Was the file previously downloaded?
    if (fs.exists(filePath)) {
        CN.callSerially(() -> onSuccess.onSucess(filePath));
    } else {
        Util.downloadUrlToFileSystemInBackground(url, filePath, (evt) -> {
            if (fs.exists(filePath)) {
                CN.callSerially(() -> onSuccess.onSucess(filePath));
            } else {
                CN.callSerially(onFail);
            }
        });
    }
}
It works. It's similar to some methods provided by the Util class, but with two main differences: the first is that the Util class provides methods only to download images to the cache, while I want to download arbitrary files; the second is that I can assume that the same url always returns the same file, so I don't need to download it again if it's still in the cache (while the Util methods always download the files when invoked).
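For reference, the cache-key scheme used above (a hash of the URL plus an optional extension) can be reproduced in plain Java. HashUtilities is Codename One's helper; this sketch substitutes the standard-library java.security.MessageDigest, and the class name `CacheKey` is my own:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CacheKey {
    /** Builds a stable cache file name from a URL, optionally with an extension. */
    public static String forUrl(String url, String extension) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] hash = md.digest(url.getBytes(StandardCharsets.UTF_8));
        StringBuilder name = new StringBuilder("cache_");
        for (byte b : hash) {
            name.append(String.format("%02x", b)); // hex-encode each hash byte
        }
        if (extension != null && !extension.isEmpty()) {
            name.append('.').append(extension);
        }
        return name.toString();
    }
}
```

Because the name is a pure function of the URL, checking fs.exists(filePath) is enough to know whether the same URL was downloaded before.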
However, I have some doubts.
My first question is about how caching works: currently I'm using this method to download images and videos to the cache (in a chat app), assuming that I don't need to care about when the files are no longer necessary, because the OS will delete them automatically. Is that right? Is it possible for the OS to delete files while I'm using them (for example, immediately after storing them in the cache), or do Android and iOS delete only older files?
I wrote this method to store arbitrary files. Is there any reasonable limit, in MB, on the size of files that we can store in the cache?
Finally, I have a doubt about the callSerially that I used in the method. Previously I didn't use it, but I got odd results: my callbacks do UI manipulations, and frequently (but not always) something went wrong. I solved all my callback problems by adding the callSerially, so callSerially is the solution. But... why? The odd fact is that the ActionListener of Util.downloadUrlToFileSystemInBackground is called under the hood by the addResponseListener(callback) of a ConnectionRequest instance, so the callback is already invoked on the EDT (according to the Javadoc). To be sure, I tested CN.isEdt() in the callbacks without the callSerially, and it returned true; so in theory callSerially is not necessary, but in practice it is. What's wrong with my reasoning?
Thank you for the explanations.
As far as I know, the cache directory is just a directory that the OS is allowed to delete if it needs space. I don't think it will delete it for an active foreground application, but that might vary.
There are no limits other than storage. But you still need to consider that the OS won't just clean that directory for you: it will only flush it when storage is very low, and even then not always. So you still need to store data responsibly.
I think only the first callSerially has an impact. It defers the result to the next EDT loop cycle instead of continuing inline in the existing thread.
I need to serve files to authenticated users, and I recognise that using PHP for this carries a performance penalty; however, what I've experienced so far seems unworkable.
I have a very simple controller action which sends the file:
public function view($id = null) {
    $id = $id | $this->params->named['id'];
    if (!$this->Attachment->exists($id)) {
        throw new NotFoundException(__('Invalid attachment'));
    }
    $this->autoRender = false;
    $this->Attachment->recursive = -1;
    $file = $this->Attachment->findById($id);
    $this->response->file(APP . DS . $file['Attachment']['dir']);
    return $this->response;
}
A small (55 KB) PNG file takes 8 seconds to load using this method, whereas if I move the file to the webroot directory and load it directly, it takes less than 2.5 seconds. From looking at Chrome Dev Tools, the 'Receiving' part of the response takes > 7 s (compared with 1.5 s direct).
A medium-sized PDF file (2.5 MB) takes over 2 minutes through CakeResponse, compared to ~4 s directly. Surely I must be missing something in my controller action, as this would be unworkable for anyone?
EDIT: CakePHP version is 2.4.1.
Thanks to the suggestion to use Xdebug I was able to quickly track down the problem.
In CakeResponse, there is the following function:
/**
 * Flushes the contents of the output buffer
 *
 * @return void
 */
protected function _flushBuffer() {
    //@codingStandardsIgnoreStart
    @flush();
    @ob_flush();
    //@codingStandardsIgnoreEnd
}
Obviously, with the error suppression operator, the calls to flush and ob_flush would not normally cause a problem.
However, I also have Sentry installed as a remote debugging tool. It ignores the error suppression operator, reports that there is no buffer to flush (because ob_start has not been called), and in doing so outputs the contents of the file to the log file!
I'm building an application that uses an embedded H2 database. I used the tutorial to test it out and everything seemed to work fine:
import java.sql.*;

public class Test {
    public static void main(String[] a) throws Exception {
        Class.forName("org.h2.Driver");
        Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
        // add application code here
        conn.close();
    }
}
I am curious, however: in my home directory I now have a "test.h2" file along with a "test.lock" file. Why does an empty database end up being 2 MB? That seems kind of large; I would expect something in KB at most, given that all it would contain is some empty default structures and the bookkeeping needed to store data. Is 2 MB the default memory allocated?
The database file size of an empty (or almost empty) database is only 2 MB while the database is open. If it is closed, the file shrinks.
On some file systems, resizing a file is relatively slow. Because of that, H2 allocates more space than it needs, to reduce the number of resize operations.
The exact algorithm used to expand the file may change in the future. Currently, the minimum file size is about 2 MB, and the file grows by 35% of its current size at a time, but by at most 256 MB per step.
When you close the database, the file shrinks to the real size needed.
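The growth policy described above can be illustrated with a little arithmetic (this is a sketch of the stated policy, not H2's actual code; as noted, the real algorithm may differ or change):

```java
public class H2Growth {
    static final long MB = 1024 * 1024;

    /** Next file size under the stated policy: grow by 35%, capped at 256 MB per step. */
    public static long nextSize(long current) {
        long step = Math.min(current * 35 / 100, 256 * MB);
        return current + step;
    }
}
```

For example, a 100 MB file would grow to 135 MB in one step; only beyond roughly 731 MB (where 35% exceeds 256 MB) does the 256 MB cap take effect.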
Here is some background about my app:
I am developing an Android app that will display a random quote or verse to the user. For this I am using an SQLite database. The size of the DB would be approximately 5K to 10K records, possibly increasing to up to 1M records in later versions as new quotes and verses are added. Thus users will need to update the DB as and when newer versions of the app or DB are released.
After reading through some forums online, there seem to be two feasible ways I could provide the DB:
1. Bundle it along with the .APK file of the app, or
2. Upload it to my app's website from where users will have to download it
I want to know which method would be better (if there is yet another approach other than these, please do let me know).
After pondering this problem for some time, I have these thoughts regarding the above approaches:
Approach 1:
Users will obtain the DB along with the app and won't have to download it separately, so installation is easier. But users will have to reinstall the app every time there is a new version of the DB. Also, if the DB is large, it will make the installer cumbersome.
Approach 2:
Users will have to download the full DB from the website (although I can provide a small, sample version of the DB via Approach 1). But, the installer will be simpler and smaller in size. Also, I would be able to provide future versions of the DB easily for those who might not want newer versions of the app.
Could you please tell me from a technical and an administrative standpoint which approach would be the better one and why?
If there is a third or fourth approach better than either of these, please let me know.
Thank you!
Andruid
I built a similar app for Android which gets periodic updates with data from a government agency. It's fairly easy to build an Android-compatible db off the device using Perl or similar and download it to the phone from a website; this works rather well, and the user gets current data whenever they download the app. It's also supposed to be possible to put the data on the sdcard if you want to avoid using primary data storage space, which is a bigger concern for my app, which has a ~6 MB database.
In order to make Android happy with the DB, I believe you have to do the following (I build my DB using perl).
$st = $db->prepare( "CREATE TABLE \"android_metadata\" (\"locale\" TEXT DEFAULT 'en_US')");
$st->execute();
$st = $db->prepare( "INSERT INTO \"android_metadata\" VALUES ('en_US')");
$st->execute();
I have an update activity which checks whether updates are available and, if so, presents an "update now" screen. The download process looks like this and lives in a DatabaseHelperClass.
public void downloadUpdate(final Handler handler, final UpdateActivity updateActivity) {
    URL url;
    try {
        close();
        File f = new File(getDatabasePath());
        if (f.exists()) {
            f.delete();
        }
        getReadableDatabase();
        close();
        url = new URL("http://yourserver.com/" + currentDbVersion + ".sqlite");
        URLConnection urlconn = url.openConnection();
        final int contentLength = urlconn.getContentLength();
        Log.i(TAG, String.format("Download size %d", contentLength));
        handler.post(new Runnable() {
            public void run() {
                updateActivity.setProgressMax(contentLength);
            }
        });
        InputStream is = urlconn.getInputStream();
        // Open the empty db as the output stream
        OutputStream os = new FileOutputStream(f);
        // Transfer bytes from the input stream to the output file
        byte[] buffer = new byte[1024 * 1000];
        int written = 0;
        int length = 0;
        while (written < contentLength) {
            length = is.read(buffer);
            if (length <= 0) {
                break; // guard against a truncated connection (read returns -1 at EOF)
            }
            os.write(buffer, 0, length);
            written += length;
            final int currentprogress = written;
            handler.post(new Runnable() {
                public void run() {
                    Log.i(TAG, String.format("progress %d", currentprogress));
                    updateActivity.setCurrentProgress(currentprogress);
                }
            });
        }
        // Close the streams
        os.flush();
        os.close();
        is.close();
        Log.i(TAG, "Download complete");
        openDatabase();
    } catch (Exception e) {
        Log.e(TAG, "bad things", e);
    }
    handler.post(new Runnable() {
        public void run() {
            updateActivity.refreshState(true);
        }
    });
}
Also note that I keep a version number in the filename of the db files, and a pointer to the current one in a text file on the server.
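That version-pointer scheme might look like the following sketch (the version.txt layout, helper names, and version format here are my assumptions, not from the answer above):

```java
public class DbVersionCheck {
    /**
     * Decides whether the text-file pointer fetched from the server names a
     * newer db than the version recorded locally.
     */
    public static boolean updateAvailable(String serverPointer, int localVersion) {
        int serverVersion = Integer.parseInt(serverPointer.trim());
        return serverVersion > localVersion;
    }

    /** Builds the download name the code above expects: "<version>.sqlite". */
    public static String dbFileName(int version) {
        return version + ".sqlite";
    }
}
```

The update activity would fetch the pointer file, call updateAvailable against the locally stored version, and if it returns true, download dbFileName(serverVersion) exactly as downloadUpdate does.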
It sounds like your app and your db are tightly bound -- that is, the app is useless without the database and the database is useless without the app -- so I'd say go ahead and put them both in the same .apk.
That being said, if you expect the db to change very slowly over time, but the app to change quicker, and you don't want your users to have to download the db with each new app revision, then you might want to unbundle them. To make this work, you can do one of two things:
1. Install them as separate applications, but make sure they share the same userID using the sharedUserId tag in the AndroidManifest.xml file.
2. Install them as separate applications, and create a ContentProvider for the database. This way other apps could make use of your database as well (if that is useful).
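The sharedUserId route from the first option is a one-line manifest change in each APK; a sketch of what it could look like (the package and id values here are placeholders, and note that both APKs must also be signed with the same certificate for Android to honor the shared id):

```xml
<!-- Both the app's and the database package's manifests must declare the SAME value -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.quotesapp"
    android:sharedUserId="com.example.quotes.shared">
    ...
</manifest>
```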
If you are going to store the db on your website, then I would recommend that you just make RPC calls to your web server and get the data that way, so the device never has to deal with a local database. Using a cache manager to avoid repeated lookups will help as well, so pages won't have to re-fetch data each time they reload. Also, if you need to update the data, you don't have to ship a new app every time. Using HttpClient is pretty straightforward; if you need any examples, please let me know.