Some questions concerning HTML5 client-side storage:
How much data in Local Storage is considered too much?
Is there a limit on the size?
Since it's saved to files, will it have any effect on the browser's speed?
Why use database storage? Is it indexed?
Why not use localStorage, where the key is the (unique) index of the record and the value is the record, JSON-stringified?
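For illustration, a minimal sketch of that last idea; the StoredRecord shape is an assumption:

    // Store each record under its unique id, serialized as JSON.
    interface StoredRecord {
      id: string;   // hypothetical unique index of the record
      name: string; // hypothetical payload field
    }

    function saveRecord(rec: StoredRecord): void {
      localStorage.setItem(rec.id, JSON.stringify(rec));
    }

    function loadRecord(id: string): StoredRecord | null {
      const raw = localStorage.getItem(id);
      return raw === null ? null : (JSON.parse(raw) as StoredRecord);
    }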
EDIT
Just a follow-up to the answer: after the WebDatabase project was dropped, all browsers are proceeding to implement the soon-to-be standard IndexedDB.
Check this other question:
HTML5 localStorage size limit for subdomains
It depends on your application.
5 MB per origin is the typical maximum.
No impact.
Web SQL database storage is deprecated, so it will not receive further updates. Current browsers still support it, but their implementations may not be standard, so it is not a good idea to use it.
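One practical consequence of the size limit: writes past the quota throw an exception (commonly a DOMException named QuotaExceededError), so a sketch like this can detect it:

    // Write a value and report whether it fit within the storage quota.
    function trySetItem(key: string, value: string): boolean {
      try {
        localStorage.setItem(key, value);
        return true;
      } catch (e) {
        // Most browsers throw a QuotaExceededError DOMException here.
        console.warn("localStorage quota exceeded", e);
        return false;
      }
    }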
Related
I have medium-sized objects whose maximum size is 10 MB.
What is the maximum size for storing and persisting data in Redux using redux-persist?
Also, what is the best approach for data of this size: redux-persist, or a database like Realm?
Update
When this post was originally written, it was not possible to set the size of AsyncStorage. However, in May 2019 the following commit changed that.
You can read more about it here.
Current Async Storage's size is set to 6 MB. Going over this limit causes a "database or disk is full" error. This 6 MB limit is a sane default to protect the user from the app storing too much data in the database. It also protects the database from filling up the disk cache and becoming malformed (endTransaction() calls will throw an exception rather than roll back, leaving the db malformed). You have to be aware of that risk when increasing the database size. We recommend ensuring that your app does not write more data to AsyncStorage than space is left on disk. Since AsyncStorage is based on SQLite on Android, you also have to be aware of the SQLite limits.
If you still wish to increase the storage capacity, you can add the following to your android/gradle.properties:
AsyncStorage_db_size_in_MB=10
This will set the maximum size to 10 MB instead of the default 6 MB.
Original Answer
If you use the default storage settings for redux-persist in React Native, it will use AsyncStorage.
AsyncStorage has some limitations in the amount it can store depending on the operating system that you are using.
On Android, if we look at the native code behind AsyncStorage, we can see that the upper limit is just 6 MB.
On iOS there are no such limits on the amount of storage that can be used. You can see this SO answer for further discussion.
If you don't want to use AsyncStorage, there are alternatives like redux-persist-fs-storage or redux-persist-filesystem-storage, which get around the 6 MB limitation on Android.
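For reference, a minimal sketch of swapping the storage engine in redux-persist (the reducer here is a placeholder for illustration; package names are as published):

    import { combineReducers } from "redux";
    import { persistReducer } from "redux-persist";
    // File-backed engine that avoids the 6 MB Android cap:
    import FilesystemStorage from "redux-persist-filesystem-storage";

    // Placeholder reducer, for illustration only.
    const todos = (state: string[] = [], _action: { type: string }) => state;
    const rootReducer = combineReducers({ todos });

    const persistConfig = {
      key: "root",
      storage: FilesystemStorage,
    };

    export const persistedReducer = persistReducer(persistConfig, rootReducer);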
There are no expiration or storage details listed for image serving URLs on Google's Cloud platform:
https://cloud.google.com/appengine/docs/go/images/reference#ServingURL
This documentation is quite unclear. Is the image stored temporarily on a CDN or something, or is it stored in the project's Blobstore indefinitely, with us paying for many multiples of storage? Or does the URL expire after a set amount of time, with that size of the image then discarded?
The reason I'm asking is that I've heard calls to this function add latency, so I want to cache the response if possible. To do that, I need to know the cache expiration point, if there is one.
Any help or clarification would be appreciated.
Pricing and Caching
That is described a little better here:
You simply store a single copy of your original image in Blobstore, and then request a high-performance per-image URL.
As Paul said in his comment, you only pay for the storage space of 1 copy of the original image in the Blobstore, plus normal bandwidth charges when it is served. When you create URLs that serve the image at different sizes, it is up to Google whether they cache a copy of the image at that size or not; either way you will still only pay for the storage of the original image at the original size.
I have seen reports that serving URLs can work for days after deleting the original image, so obviously Google does some caching at least sometimes, but those details are not specified and could change case-by-case.
Expiration
The URL will never expire unless you explicitly delete it or the original image.
Whether you store your images in Cloud Storage or Blobstore, the right way to stop an image from being publicly accessible through the serving URL is to call the image.DeleteServingURL function.
Performance
I cannot comment on how much latency could be added by serving a resized copy of an image. I assume the answer is "not enough to care about", but again, I don't know. If you experiment and find the added latency unacceptable, you could try creating multiple versions of the image to store in Blobstore yourself, to serve at their natural sizes. I cannot say whether that would actually increase performance or not. You would of course pay for storing each copy in that case. I suggest not worrying about that unless you see it becomes a problem.
Images are served with low latency from a highly optimized, cookieless infrastructure.
So I doubt you could gain much benefit from trying to optimize it more yourself.
I am looking at building a LOB HTML5 web application. The main thing we are trying to accomplish is to make the application offline-capable. This will mean taking a large chunk of SQL data from the server and storing it in the browser. It will need to live in the browser for quite a while; we don't want to have to continuously refresh it every time the browser is closed and reopened.
We are looking at storing the data client-side in IndexedDB, but I have read that IndexedDB lives in temporary storage, so its lifetime cannot be relied on. Does anyone know of any strategies for prolonging its lifetime? Also, we will be pulling down massive chunks of data, so 1-5 MB of storage might not suffice for what we require.
My current thought is to store it in browser storage using the HTML5 storage APIs and hydrate it into IndexedDB as it's required. We just need to make sure we can grow the storage limit to whatever we need.
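For concreteness, a minimal sketch of that hydration step (the database name, store name, and record shape are assumptions):

    // Open (or create) a database with a single object store keyed by id.
    function openDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open("offline-cache", 1);
        req.onupgradeneeded = () => {
          req.result.createObjectStore("records", { keyPath: "id" });
        };
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    // Hydrate a batch of records into the store in one transaction.
    async function hydrate(records: { id: string }[]): Promise<void> {
      const db = await openDb();
      const tx = db.transaction("records", "readwrite");
      const store = tx.objectStore("records");
      records.forEach((r) => store.put(r));
      await new Promise<void>((resolve, reject) => {
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
    }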
Any advice on how we approach this?
We are looking at storing the data inside the client in indexedDB but I read that indexedDB is stored in temporary storage so the lifetime of it cannot be relied on.
That is technically true, but in practice I've never seen a browser actually delete data. More commonly, if you're storing a lot of data, you will hit quota limits, which are annoying and sometimes inconsistent/buggy.
Regardless, you shouldn't rely on data in IndexedDB always being there forever, because users can always delete data, have their computers break without backups, etc.
If you create a Chrome extension, you can enable unlimited storage. I've successfully stored several thousand large text documents persistently in indexedDB using this approach.
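For reference, the relevant manifest entry for that extension approach looks like this (the permission name is as documented by Chrome; the other fields are illustrative):

    {
      "name": "My Offline App",
      "version": "1.0",
      "manifest_version": 2,
      "permissions": ["unlimitedStorage"]
    }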
This might be a silly question to add, but can you access the storage area outside of the browser? For instance, if I did not want to have a huge lag when my app starts up and loads a bunch of data, could I make an external app to "refresh" the local data so that when the browser starts, it is ready to rock and roll?
I assume the answer here will be no, but I had to ask.
Has anyone worked around this for large data sets? For instance, loading in one tab and working in another? A Chrome extension to load the data, but access via the app?
I am about to port a Windows Forms app to WPF or Silverlight. The current application uses a cache to store SQL responses temporarily, as well as for later use, so the queries don't have to be run again. The local cache should be able to handle 1 to 4 GB.
1) Is the Internal Storage capable of handling this amount of data? A search has not given me a clear answer so far; many mention a 1 MB limit, while some say the storage size is a long.
2) SQLite has a managed C# port, but I am not sure whether that is stable enough to use in a professional application. Any experience or opinions?
3) Is it possible to use the SQLite ADO.NET provider with Isolated Storage, or would it be an idea to run a local server that is responsible for the cache only? Or is there any way to achieve this through COM access?
4) Is there any file-based DB system that you can recommend as a substitute for SQLite, in case nothing else works?
Any other ideas are welcome; I need the local cache. Otherwise I would need to build the application in both Silverlight AND WPF, and I would like to avoid that.
Thanks!
Regarding your first question:
Is the Internal Storage capable of handling this amount of data? A search has not given me a clear answer so far; many mention a 1 MB limit, while some say the storage size is a long.
Basically, by default Silverlight apps are granted 1 MB of storage, but they can request an increase in their storage quota (see here and here for more details).
Hope this helps
I see that memcached is used in many cases. Can you give examples of when to avoid memcached, other than for large files? What file sizes are appropriate for memcached?
Thanks for your answer
If you know when to bust the caches to prevent out-of-date things from being cached, there's not really a reason to avoid memcache for anything small, unless it's so trivial to compute that hitting memcache would take about as long as just computing it.
I have seen Memcached used to store session data. In my view, storing sessions in Memcached is not recommended: if a session disappears, the user is often logged out, whereas if a portion of a cache disappears, whether due to a hardware crash or otherwise, it should not cause your users noticeable pain. According to the memcached site, "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." So while developing your application, remember that you must have a fall-back mechanism to retrieve the data once it is not found in the Memcached server (see the sketch after these tips). Here are some tips:
Avoid storing frequently updated data in Memcached.
Memcached does not ship with built-in security mechanisms, so it is your responsibility to handle security issues.
Try to maintain a predefined, fixed number of connections in the connection pool, because each set/get operation is a new atomic connection to the Memcached server.
Avoid storing raw data coming straight from the database; store processed data instead.
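As a hedged illustration of the fall-back mechanism mentioned above, here is a minimal cache-aside sketch; the CacheClient interface and fetchUserFromDb are hypothetical stand-ins for your memcached client and database layer:

    // Hypothetical cache client interface; adapt to your memcached library.
    interface CacheClient {
      get(key: string): Promise<string | null>;
      set(key: string, value: string, ttlSeconds: number): Promise<void>;
    }

    // Cache-aside: try the cache first, fall back to the database on a miss.
    async function getUser(cache: CacheClient, id: string): Promise<unknown> {
      const cached = await cache.get(`user:${id}`);
      if (cached !== null) {
        return JSON.parse(cached);
      }
      const user = await fetchUserFromDb(id); // hypothetical DB call
      await cache.set(`user:${id}`, JSON.stringify(user), 300); // short TTL
      return user;
    }

    // Hypothetical database accessor, for illustration only.
    async function fetchUserFromDb(id: string): Promise<unknown> {
      return { id, name: "example" };
    }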