JSONStore Worklight - Size Limit - mobile

JSONStore provides us with a great way to sync data with a server and track changes a user makes while offline. Is there a limit to how much information can be saved in JSONStore? I found that the WebKit database has a limit of 5 MB, whereas a SQLite database has no limit. I'm also wondering whether JSONStore uses a WebKit database or SQLite to store its underlying information.

JSONStore ultimately stores information on the file system. The only bounds would be the space remaining on the device or any file size limits imposed by the device's operating system. We have easily created JSONStore instances that were hundreds and hundreds of MB on disk.
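For illustration, here is a minimal sketch of a collection using the Worklight JavaScript API (WL.JSONStore); the collection name and search fields are made up. Note that init takes no size or quota parameter; the collection simply grows on the file system:
declare const WL: any; // Worklight client runtime global

// Define a collection; only search fields are declared, never a size limit.
const collections = {
  people: {
    searchFields: { name: 'string', age: 'integer' }
  }
};

WL.JSONStore.init(collections)
  .then(() => {
    // Add as many documents as the device's file system can hold.
    return WL.JSONStore.get('people').add([{ name: 'carlos', age: 99 }]);
  })
  .then((numberOfDocumentsAdded: number) => {
    console.log('Documents added: ' + numberOfDocumentsAdded);
  })
  .fail((error: any) => {
    console.log('JSONStore error: ' + JSON.stringify(error));
  });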

Related

Infinispan data persistence sharding?

CapeDwarf uses Infinispan to store data. How can Infinispan be configured to persist data on node machines that have a limited amount of disk space? For example, if each server hosting CapeDwarf only has 1TB of mounted block storage, how do you configure Infinispan so that if the "overall" data exceeds 1TB it is "sharded" across different servers?
Running CapeDwarf, it stores Infinispan data in: $\CapeDwarf_WildFly_2.0.0.Final\standalone\data\infinispan\capedwarf
When using local storage (single file store, soft index store or RocksDB store) in combination with a distributed cache, the data is already "sharded" based on ownership: each node will store approximately TOTAL_DATA / NUM_NODES * NUM_OWNERS.
For example, if you store 1GB of data on a 5-node cluster in a distributed cache with 2 owners (the default), each node would require approximately 400MB. As the data is not perfectly balanced, allow for a certain margin of difference (typically 10-15%).
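To make the arithmetic concrete, here is a small illustrative helper (not Infinispan API, just the formula above) that estimates per-node disk usage including a balancing margin:
function perNodeStorageGB(totalDataGB: number, numNodes: number, numOwners: number, marginPct: number = 15): number {
  // TOTAL_DATA / NUM_NODES * NUM_OWNERS, plus a margin because the data
  // is never perfectly evenly balanced across nodes.
  const base = (totalDataGB / numNodes) * numOwners;
  return base * (1 + marginPct / 100);
}

// 1GB of data, 5 nodes, 2 owners: ~0.4GB per node, ~0.46GB with a 15% margin
console.log(perNodeStorageGB(1, 5, 2));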
Alternatively you can use a shared store (jdbc, cloud) which would store data externally.

What is the maximum size for redux persist?

I have medium-sized objects whose maximum size is 10MB.
What is the maximum size of data that can be stored and persisted in Redux using redux-persist?
Also, what is the best approach for data of this size: redux-persist, or a database like Realm?
Update
When this post was originally written it was not possible to set the size of AsyncStorage. However, in May 2019 the following commit changed that.
You can read more about it here
Current Async Storage's size is set to 6MB. Going over this limit causes database or disk is full error. This 6MB limit is a sane limit to protect the user from the app storing too much data in the database. This also protects the database from filling up the disk cache and becoming malformed (endTransaction() calls will throw an exception, not rollback, and leave the db malformed). You have to be aware of that risk when increasing the database size. We recommend to ensure that your app does not write more data to AsyncStorage than space is left on disk. Since AsyncStorage is based on SQLite on Android you also have to be aware of the SQLite limits.
If you still wish to increase the storage capacity, then you can add the following to your android/gradle.properties
AsyncStorage_db_size_in_MB=10
This will set the size of AsyncStorage to 10MB instead of the default 6MB.
Original Answer
If you use the default storage settings for redux-persist in React Native, it will use AsyncStorage.
AsyncStorage has some limitations in the amount it can store, depending on the operating system that you are using.
On Android, if we look at the native code behind AsyncStorage, we can see that the upper limit is just 6MB.
On iOS there are no such limits on the amount of storage that can be used. You can see this SO answer for a further discussion.
If you don't want to use AsyncStorage, there are alternatives like redux-persist-fs-storage or redux-persist-filesystem-storage which get around the 6MB limitation on Android.
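For reference, a minimal redux-persist setup looks roughly like this (a sketch assuming redux-persist v5+ and the community AsyncStorage package; './reducers' stands in for your own root reducer). Swapping the storage engine is the only change needed to move off AsyncStorage:
import { createStore } from 'redux';
import { persistStore, persistReducer } from 'redux-persist';
import AsyncStorage from '@react-native-async-storage/async-storage';
import rootReducer from './reducers'; // placeholder for your app's root reducer

const persistConfig = {
  key: 'root',
  // Swap in a filesystem-backed engine (like the alternatives above) to
  // avoid the 6MB AsyncStorage cap on Android.
  storage: AsyncStorage,
};

const persistedReducer = persistReducer(persistConfig, rootReducer);

export const store = createStore(persistedReducer);
export const persistor = persistStore(store);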

How are sparse files handled in Google Cloud Storage?

We have a 200GB sparse file which is about 80GB in actual size (VMware disk).
How does Google calculate the space for this file, 200GB or 80GB?
What would be the best practice to store it in the Google Cloud using gsutil (similar to rsync -S)?
Would it be solved by using tar cSf, and then upload via gsutil? How slow could it be?
We have a 200GB sparse file which is about 80GB in actual size (VMware disk).
How does Google calculate the space for this file, 200GB or 80GB?
Google Cloud Storage does not introspect your files to understand what they are, so it's the actual size (80GB) that it takes on disk that matters.
What would be the best practice to store it in the Google Cloud using gsutil (similar to rsync -S)?
There's gsutil rsync, but it does not support -S, so that won't be very efficient. Also, Google Cloud Storage does not store files as blocks which can be accessed and rewritten randomly, but as blobs keyed by the bucket name + object name, so you'll essentially be uploading the entire 80GB file every time.
One alternative you might consider is to use Persistent Disks which provide block-level access to your files with the following workflow:
One-time setup:
create a persistent disk and use it only for storage of your VM image
Pre-sync setup:
create a Linux VM instance with its own boot disk
attach the persistent disk in read-write mode to the instance
mount the attached disk as a file system
Synchronize:
use ssh+rsync to synchronize your VM image to the persistent disk on the VM
Post-sync teardown:
unmount the disk within the instance
detach the persistent disk from the instance
delete the VM instance
You can automate the setup and teardown steps with scripts so it should be very easy to run on a regular basis whenever you want to do the synchronization.
Would it be solved by using tar cSf, and then upload via gsutil? How slow could it be?
The method above will be limited by your network connection, and would be no different from ssh+rsync to any other server. You can test it by, say, artificially throttling your bandwidth to another server on your own network to match your external upload speed and running rsync over ssh.
Something not covered above is pricing, so I'll just leave these pointers for you, as that may be relevant to your analysis.
Using the Google Cloud Storage approach, you'll incur:
Google Cloud Storage pricing: currently $0.026 / GB / month
Network egress (ingress is free): varies by total amount of data
Using the Persistent Disk approach, you'll incur:
Persistent Disk pricing: currently $0.04 / GB / month
VM instance: needs to be up only while you're running the sync
Network egress (ingress is free): varies by total amount of data
The actual amount of data you will download should be small, since that's what rsync is supposed to minimize, so most of the data should be uploaded rather than downloaded, and hence your network cost should be low, but that is based on the actual rsync implementation which I cannot speak for.
Hope this helps.

Silverlight with Isolated Storage - Port from Windows Forms with SQLite

I am about to port a Windows Forms app to WPF or Silverlight. The current application uses a cache to store SQL responses temporarily, as well as for later use, in order not to have to run the queries again. The local cache should be able to handle 1 to 4 GB.
1) Is Isolated Storage capable of handling this amount of data? A search has not given me a clear answer so far; many talk about a 1MB limit, some say the storage quota is of type long.
2) SQLite has a C# managed-code port, but I am not sure if that is stable enough to use in a professional application. Any experience or opinion?
3) Is it possible to use the SQLite ADO.NET provider with Isolated Storage, or would it be an idea to run a local server that is responsible for the cache only? Or is there any way to achieve this through COM access?
4) Is there any file-based database system that you can recommend as a substitute for SQLite in case nothing else works?
Any other ideas are welcome; I need the local cache. Otherwise, I would need to build the application in both Silverlight AND WPF, and I would like to avoid that.
Thanks!
Regarding your first question:
Is Isolated Storage capable of handling this amount of data? A search has not given me a clear answer so far; many talk about a 1MB limit, some say the storage quota is of type long.
Basically, by default Silverlight apps are granted 1 MB of isolated storage, but they can request an increase in their storage quota (see here and here for more details).
Hope this helps

App Engine - how can I increase the datastore item size limit?

How can I increase the datastore item size limit, which is now only 1MB, in App Engine?
If I buy more storage, what will happen to this limit?
Thanks
Not exactly. Enabling billing won't remove the 1MB entity size limit; however, it will allow you to use the Blobstore API, where you can store blobs up to 2GB in size. These are typically uploaded files, but could just as easily be pickled Python objects.
While each entity is limited to 1MB, HTTP requests and responses can be up to 10MB, so you can handle slightly larger files by spanning the blob contents across multiple entities, splitting the file on upload and stitching it back together on download.
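The splitting and stitching idea is independent of any particular datastore client. Here is a rough sketch of the chunking logic, where saveEntity and loadEntity are hypothetical stand-ins for whatever datastore API you actually use:
const CHUNK_SIZE = 900 * 1024; // stay comfortably below the 1MB entity limit

// Hypothetical datastore helpers; replace with your actual client calls.
declare function saveEntity(key: string, data: Uint8Array): Promise<void>;
declare function loadEntity(key: string): Promise<Uint8Array>;

// Split a payload into sub-1MB chunks, one entity per chunk, on upload.
async function saveBlob(name: string, payload: Uint8Array): Promise<number> {
  const chunks = Math.ceil(payload.length / CHUNK_SIZE);
  for (let i = 0; i < chunks; i++) {
    await saveEntity(`${name}:${i}`, payload.subarray(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE));
  }
  return chunks; // keep the chunk count so the blob can be stitched back together
}

// Read the chunks back in order and reassemble the original payload on download.
async function loadBlob(name: string, chunks: number): Promise<Uint8Array> {
  const parts: Uint8Array[] = [];
  for (let i = 0; i < chunks; i++) {
    parts.push(await loadEntity(`${name}:${i}`));
  }
  const out = new Uint8Array(parts.reduce((n, p) => n + p.length, 0));
  let offset = 0;
  for (const p of parts) { out.set(p, offset); offset += p.length; }
  return out;
}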
You can use the Blobstore API to handle objects that are larger than 1MB.
I don't know exactly which 1MB limit you are talking about, but for GAE, if you want to do anything above the free quota, enable billing for your application :)
