Hi, I'm trying to store an array filled with objects so that it doesn't disappear when the app is closed completely.
The Problem:
If I use Core Data and convert the array to NSData, it works, but the app freezes while it's processing the array.
I've also tried the transformable data type, but I can't get it to work.
And I can't use NSUserDefaults either, because it doesn't support images.
Does anyone have an idea how I might solve this?
I'm quite a newbie to programming, so this might be an entirely wrong approach.
First, save the images as individual files with unique file names in an images directory inside the Documents directory. Put the unique file names in the array, not the images.
Then, depending on your needs, either save the per-image information in Core Data if quick random access is required, or save the array in a file.
For 1500 strings of ~100 characters each, saving in a single file is probably fine. I would start there and only move up to Core Data if there is a performance problem that Core Data would resolve.
As Kent Beck says: "Do the simplest thing that could possibly work." I don't believe that keeping 750 images in the array would really work if they were of any substantial size.
Do not use NSUserDefaults.
Core Data is overkill for simply saving an array to disk.
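The pattern is language-agnostic; here is a minimal sketch in TypeScript, with Node-style file APIs standing in for the iOS equivalents (writing each image's NSData into the Documents directory and persisting the name array separately). All file and directory names are placeholders:

```typescript
// Sketch only: each image goes into its own uniquely named file,
// and the array persists just the names, not the image data.
import { randomUUID } from 'node:crypto';
import { mkdir, writeFile, readFile } from 'node:fs/promises';

const imageDir = 'images';            // placeholder per-image store
const indexFile = 'image-index.json'; // placeholder name-array file

async function addImage(imageBytes: Buffer, index: string[]): Promise<void> {
  await mkdir(imageDir, { recursive: true });
  const name = `${randomUUID()}.png`;                 // unique file name
  await writeFile(`${imageDir}/${name}`, imageBytes); // image -> its own file
  index.push(name);                                   // array holds names only
  await writeFile(indexFile, JSON.stringify(index));  // small, fast to save
}

async function loadIndex(): Promise<string[]> {
  try {
    return JSON.parse(await readFile(indexFile, 'utf8'));
  } catch {
    return []; // nothing saved yet
  }
}
```

Loading the name array is then cheap no matter how large the images are, which avoids the freeze described in the question.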
What kind of objects are you saving?
If you already have logic to convert the array to NSData, why not just save it as a file to either your Documents directory or your Caches directory? (Use Caches if the data can be recreated when purged, Documents if it is unique and contains user state.)
Edit:
zaph raises some good points in his answer. How big is this array (number of elements and total data size)? Is it reasonable to load it all into memory?
If you are looking for a random-access way to load one element at a time, then a database might be reasonable.
The specific solution depends on the particulars of your problem, so we need more info.
Related
This past year, I took a coding class on creating games using JavaScript. It is a basic game like Asteroids. When the game ends, it shows the top 5 high scores. The only problem is that when the program is restarted, the array that holds the scores is reset.
I want to store the high scores in a text file or spreadsheet, but I cannot find a way to get my program to pull information from an outside file and assign it to a variable or, preferably, an array.
The second part is that when the game ends, it would need to write the updated array back to the outside file. Everything I look up involves HTML and CSS, which we didn't learn.
Is there any viable way to do this in JavaScript?
It seems like the question is "How do I read and write files from JavaScript?" The answer largely depends on where you're running the JavaScript.
It doesn't sound like you're working in a browser, which makes it likely that you're using Node.js. If that is the case, you'll want to look at the File System API. Specifically, you'll want to create a file handle by using fsPromises.open(). Once you have this reference to a location on your hard drive, you'll use filehandle.writeFile() to write a string directly to the file, and filehandle.readFile() to read a string directly from the same file.
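For example, a minimal high-score round-trip along those lines (the scores.json file name and the array shape are assumptions):

```typescript
import { open } from 'node:fs/promises';

async function saveScores(scores: number[]): Promise<void> {
  const handle = await open('scores.json', 'w'); // create or truncate
  try {
    await handle.writeFile(JSON.stringify(scores)); // array -> JSON string
  } finally {
    await handle.close();
  }
}

async function loadScores(): Promise<number[]> {
  try {
    const handle = await open('scores.json', 'r');
    try {
      const text = await handle.readFile({ encoding: 'utf8' }); // string back
      return JSON.parse(text); // JSON string -> array
    } finally {
      await handle.close();
    }
  } catch {
    return []; // no saved scores yet
  }
}
```

JSON.stringify/JSON.parse handle the array-to-string conversion in both directions, so the scores come back as numbers rather than raw text.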
Is it possible to grab the size of a Cloud Files object without retrieving the whole object?
I have 500,000 files that I want to know the byte size of, but if I request the whole object on each one it will cost me $100 in bandwidth charges.
I know I can get a list of objects in a container, but that only seems to give me the names of the objects?
Thanks, any help appreciated.
Using the PHP SDK, you have a couple of options:
If you are looking for the size of each file, you can retrieve the list of files in a container first, then loop over them, calling the getContentLength() method on each file.
If you are looking for the total size taken up by all your 500,000 files, you can simply get the size of the container(s) they are in. Here is the code for that: https://github.com/rackspace/php-opencloud/blob/master/docs/userguide/ObjectStore/USERGUIDE.md#get-bytes-used.
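Independent of the SDK, the mechanism that makes per-object sizes cheap is an HTTP HEAD request: it returns the object's headers (including Content-Length) without transferring the body, so no download bandwidth is consumed. A rough TypeScript sketch of that mechanism, where the URL and auth token are placeholders for whatever your account provides:

```typescript
// Sketch only: HEAD fetches headers, never the object body.
async function objectSize(objectUrl: string, authToken: string): Promise<number> {
  const res = await fetch(objectUrl, {
    method: 'HEAD',                         // headers only, no body transfer
    headers: { 'X-Auth-Token': authToken }, // OpenStack-style auth header
  });
  return Number(res.headers.get('content-length'));
}
```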
I'm planning to use Leaflet Draw as part of a special wiki with an embedded map. Users should be able to draw geo-objects that are related to one or more pages in the wiki. Like the wiki pages, the objects are saved in a database and can be modified by every user.
Problems:
1. How can I limit the number of editable objects to only one at a time?
2. How can I keep the database consistent if two users are editing the same object at the same time?
3. How can I generate multi-objects / combine several objects (e.g. polygons) into a super-object (multi-polygon)?
Does anybody know of similar approaches to my idea?
Thanks.
1. You will have a single FeatureGroup for leaflet.draw's objects that can be edited. Simply figure out which objects should be editable and which shouldn't, and add them to separate FeatureGroups.
2. This can be handled in a few ways; have a look at general database concurrency control (optimistic locking with a version check is a common approach).
3. I'm not sure what you mean; maybe have a look at Well-Known Text (WKT), which supports multi-polygon geometries and might help you with storage here.
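A minimal sketch of point 1, assuming the leaflet and leaflet-draw packages are loaded (variable names are placeholders):

```typescript
// Sketch only: requires leaflet and leaflet-draw
// (plus their type definitions when using TypeScript).
import * as L from 'leaflet';
import 'leaflet-draw';

const map = L.map('map').setView([51.5, -0.09], 13);

const editable = new L.FeatureGroup(); // objects the user may edit
const readOnly = new L.FeatureGroup(); // everything else
map.addLayer(editable);
map.addLayer(readOnly);

// leaflet.draw only edits layers in the group you hand it:
const drawControl = new L.Control.Draw({
  edit: { featureGroup: editable },
});
map.addControl(drawControl);

// To limit editing to one object at a time, move a single layer
// into the editable group and keep the rest read-only:
function makeEditable(layer: L.Layer): void {
  readOnly.removeLayer(layer);
  editable.addLayer(layer);
}
```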
First, a bit of context:
I'm trying to implement a URL shortener on my own server (in C, if that matters). The aim is to avoid long URLs while still being able to restore a context from a shortened URL.
Currently I have an implementation that creates a session on the server, identified by a certain ID. This works, but consumes memory on the server (which is undesirable, since it's an embedded server with limited resources, and the device's main purpose isn't serving web pages but doing other cool stuff).
Another option would be to use cookies or HTML5 Web Storage to store the session information in the client.
But what I'm really searching for is a way to pack the URL parameters into a single parameter that I attach to the URL, and to be able to reconstruct the original parameters from that one.
My first thought was to use Base64 encoding to put all the parameters into one, but this produces an even larger URL.
Currently, I'm thinking of compressing the URL parameters (using some compression algorithm like zip, bz2, ...), Base64-encoding the compressed binary blob, and using that as the context. When I get the parameter back, I can Base64-decode it, decompress the result, and have the original URL in hand.
The question is: is there any other possibility that I'm overlooking that I could use to losslessly compress a large list of URL parameters into a single smaller one?
Update:
After the comments from home, I realized I had overlooked that compression itself adds overhead: the container format (for example zip's headers) can make the compressed data even larger than the original.
So (as home states in his comments), I'm starting to think that compressing the whole list of URL parameters is only really useful once the parameters exceed a certain length; otherwise I could end up with an even larger URL than before.
You can always roll your own compression. If you apply some Huffman coding, the result will usually be smaller for text like URL parameters (but Base64-encoding it afterwards grows it again by about a third, so the net effect may not be optimal).
I'm using a custom compression strategy on an embedded project I work with: first lzjb (a Lempel-Ziv derivative; follow the link for the source code, a really tight implementation from OpenSolaris), followed by Huffman coding of the compressed result.
The lzjb algorithm doesn't perform too well on very short inputs, though (around 16 bytes or less, in which case I leave the data uncompressed).
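If rolling your own lzjb + Huffman pipeline is more than you need, the same compress-then-encode idea can be sketched with an off-the-shelf codec; here in TypeScript using Node's raw deflate, which at least avoids the zip/gzip container overhead mentioned in the update:

```typescript
// Sketch only: compress, then use the URL-safe Base64 alphabet so the
// result can travel as a single URL parameter.
import { deflateRawSync, inflateRawSync } from 'node:zlib';

function shorten(query: string): string {
  return deflateRawSync(Buffer.from(query, 'utf8')).toString('base64url');
}

function restore(token: string): string {
  return inflateRawSync(Buffer.from(token, 'base64url')).toString('utf8');
}

// Only worth it past a certain input length: Base64 alone adds ~33%,
// so short parameter lists can come out longer than they went in.
const original = 'a=1&b=2&c=some-longer-value&d=another-longer-value';
const token = shorten(original);
console.log(token.length < original.length, restore(token) === original);
```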
I'm working on a basic editor application. It uses an array of varying size that I want to store to disk. This will eventually be in an AIR application, but for now it's just an AS3 project in Flex.
I want to store the array in a file. The application edits the data, so it doesn't need to be human readable. I want it to be in whatever format will be quickest to store and load back into the array when I need that data again.
Any recommendations?
Edit: It strikes me that importing/exporting in such a way that the result can be immediately cast as an Array would probably be the cheapest approach, rather than some sort of iteration, if that's possible. Another obvious option is storing the data as a simple comma-delimited string and using String.split() to get an array back. Though again, the question is what would be cheapest, and I'm not quite convinced that's it.
I'll also add that it needs to be in some sort of permanent file, so a shared object, while possibly the fastest, isn't really a long-term solution.
I think the fastest and easiest way is to use a shared object. It stores native objects, so there are no serialization/deserialization steps involved. Just assign the value and read it back.
Performance-wise, it is probably the fastest route as well. If you are dealing with a large dataset and are sure it will be an AIR app, you can use AIR's embedded SQLite database, but that will definitely take much more work.
First, take a look at this answer.
As for saving the contents of an Array, consider JSON using the export tools provided by Adobe.
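The JSON round-trip looks the same in AS3's native JSON class as in this TypeScript sketch: the whole array comes back in one call, with numbers and nested objects intact, which a comma-delimited string and String.split() cannot preserve.

```typescript
// Sketch only: serialize the whole array in one call, restore in one call.
const items = [{ x: 1, y: 2 }, { x: 3, y: 4 }];
const serialized = JSON.stringify(items);              // write this string to the file
const restored: typeof items = JSON.parse(serialized); // read it back as an array
```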