Use a specific CacheProvider for a Dynamic Data Store - EPiServer

We have some problems due to caching in EPiServer dynamic data stores, because different servers keep old data around. We need to ensure that when we fetch and save certain items, they are not cached but are read from and written to the database directly.
The problem for us is: how do we configure this on a per-store basis? When following EPiServer's directions for configuring caching for stores (and here), the setting seems to apply to all stores, not just one.
We would like to set a NullCacheProvider for a store, not all stores.
This example shows how to disable the cache using code, but it sets the static CacheProvider.Instance, which I would assume is used by all EPiServer stores.

Related

Flutter: Shared Preference or Scoped Model for speed

I will be storing many small data strings in both scoped model and shared preferences. My question is: in order to retrieve this data back, are there any significant speed differences between these two sources?
Since I will be doing many "sets" and "gets" I would like to know if anybody has seen any differences in performance using one more than another.
I understand that shared preferences is persistent and scoped model is not; however, after the app is loaded the data is synced, and I would rather access the data from the fastest source.
Firstly, understand that they are not alternatives. You will likely want to back certain parts of your model with shared preferences, and this can be done behind scoped model (or BLoC etc). Note that simply updating a shared preference will not trigger a rebuild, which is why you should use one of the shared state patterns and have it persist the items it wants to keep to shared preferences.
Shared preferences is actually implemented as an in-memory map that triggers a background write to storage on each update. So 'reads' from shared preferences are inexpensive.

Send sensor data to a Hazelcast database and link it to development tools to design dashboards

Can we use the Hazelcast database to link the data and design dashboards (e.g. a tracker with a bar graph)? Below are the points I need to confirm to build the application for the hardware:
- I am using a temperature sensor interfaced with an Arduino Yun and want to upload the sensor readings to the Hazelcast server.
- Using the data uploaded to the single database on the Hazelcast server, read it back through an Arduino MKR1000.
- Link the data to different development tools to design different types of dashboards (pie chart, bar chart, line chart, etc.).
Please suggest the best way to create and link the database in the data grid.
How you want to use data on your dashboard will basically depend on how you have modelled your data - one map or multiple maps, etc. Then you can retrieve data through single key-based lookups or by running queries, and use that for your dashboard. You can define the lifetime of the data - be it a few minutes, hours, or days. See eviction: http://docs.hazelcast.org/docs/3.10.1/manual/html-single/index.html#map-eviction
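If the dashboard backend happens to be a Node.js/TypeScript process, a minimal sketch of that read path with the hazelcast-client npm package might look like the following; the map name 'temperatures' and the key format are assumptions for illustration, not something from the question.

    import { Client } from 'hazelcast-client';

    async function readForDashboard(): Promise<void> {
      // Connects to a cluster member on localhost:5701 by default
      const client = await Client.newHazelcastClient();
      const temps = await client.getMap<string, number>('temperatures');

      // Single key-based lookup, e.g. the latest reading pushed by the gateway
      console.log('sensor-1:', await temps.get('sensor-1'));

      // Or pull all entries and hand them to the charting library of your choice
      for (const [sensorId, value] of await temps.entrySet()) {
        console.log(sensorId, value);
      }

      await client.shutdown();
    }

    readForDashboard().catch(console.error);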
If you decide to use a visualisation tool for the dashboard that can use JMX, then you can latch on to the JMX beans Hazelcast exposes, which would give you information about data stored in the cluster and a lot more. Check out this: http://docs.hazelcast.org/docs/3.10.1/manual/html-single/index.html#monitoring-with-jmx
You can configure Hazelcast to use a MapLoader - MapStore to persist the cached data to any back-end persistence mechanism – relational or no-sql databases may be good choices.
On your first point, I wouldn’t expect anything running on the Arduino to update the database directly, but the MKR1000 is going to get you connectivity, so you can use Kafka/MQTT/… - take a look at https://blog.hazelcast.com/hazelcast-backbone-iot-internet-things/.
If you choose this route, you’d set up a database that is accessible to all cluster members, create the MapLoader/MapStore class (see the example code for help) and configure the cluster to read/write.
Once the data is in the cluster, access is easy and you can use a dashboard tool of your choice to present the data.
(edit) - to your question about presenting historical data on your dashboard:
Rahul’s blog post describes a very cool implementation of near/real-time data management in a Hazelcast RingBuffer. In that post, I think he mentioned collecting data every second and buffering two minutes worth.
The ring buffer has a configured capacity, but note that he is overwriting on add - this is kind of a given for real-time systems, since the choice is between losing older data and crashing.
For a generalized query-tool approach, I think you’d augment this. Off the top of my head, I could see using the ring buffer in conjunction with a distributed map. You could (but wouldn’t need to) populate the map, using a map-event interceptor to populate the ring buffer. That should leave the existing functionality intact. The map, though, would allow you to configure a map-store/map-loader, so that your data is saved in a backing store. The map would support queries - but keep in mind that IMDG queries do not read through to the backing store.
This would give you flexibility, at the cost of some complexity. The real-time data in the ring buffer would be always available, quickly and easily. Data returned from querying the map would be very quick, too. For ‘historical’ data, you can query your backing store - which is slower, but will probably have far greater storage capacity. The trick here is to know when to query each. The most recent data is a given, with its fixed capacity. You need to know how much is in the cluster - i.e. how far back your in-memory history goes. I think it best to configure the expiry to a useful limit and provision the storage so that data leaves the map by expiration - not eviction. In this way, you can know what the beginning of the in-memory history is. Monitoring eviction events would tell you that your cluster has a complete view of data back to a known time.
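If the ingest side were also a Node.js/TypeScript process, here is a rough client-side sketch of that ring-buffer-plus-map combination. Note this writes each reading to both structures directly rather than via the member-side interceptor described above, and all names ('temperatures', 'recent-temps') are made up.

    import { Client } from 'hazelcast-client';

    async function run(): Promise<void> {
      const client = await Client.newHazelcastClient();

      // Queryable history; assumed to be backed by a MapStore configured on the members
      const history = await client.getMap<string, number>('temperatures');
      // Fixed-capacity real-time window; add() overwrites the oldest entry when full
      const recent = await client.getRingbuffer<number>('recent-temps');

      // One incoming reading, keyed by sensor and timestamp so the map stays queryable
      const sensorId = 'sensor-1';
      const celsius = 23.5;
      await history.put(`${sensorId}:${Date.now()}`, celsius);
      await recent.add(celsius);

      await client.shutdown();
    }

    run().catch(console.error);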
 

Restricting data in PouchDB

I have an offline ready application that I am currently building in electron.
The core requirements are that all data is restricted (you have to be a user to read or write) and that within that data, some data is further restricted to a specific user (account information, messages, etc...).
Now, I do not want to replicate any data offline that a user should not have access to (all the data can be seen using the devtools regardless of restriction), so essentially I only want to sync data to PouchDB's offline store if that user has access to it, as well as all the data all users have access to.
Now I have read the following posts/guides but I am still a little confused.
https://pouchdb.com/2015/04/05/filtered-replication.html
https://www.joshmorony.com/creating-a-multiple-user-app-with-pouchdb-couchdb/
Restricting Access to local PouchDB
From my understanding, filtering is a bad choice performance-wise even though it could do what I want.
Setting up a proxy would work, but it then essentially becomes a REST API and the data synchronization falls apart.
And the final option, which I think is what I want, is to have a database for every user that would contain their private information, and then additional databases to hold the information that is available to every user.
The only real question I have with this approach is how data that is private but shared between two users (messages, etc...) is handled.
I am more after an overarching view of how the data should be stored as opposed to code examples, just really struggling with the conceptual architecture of the application.
There are many solutions to your problem. One solution looks very promising: IBM Cloudant has started work on Cloudant Envoy, a proxy simulating the CouchDB interface instead of a simple REST API. You can read more about it on the site for Envoy over at ibm.com. A custom replicator for PouchDB is also available on Github.
There's also a blog post on Medium.com on this.
The idea is the same as the much older Couchbase Sync Gateway. Although Couchbase has common roots with CouchDB, I have not tracked if they still support replication with CouchDB.
The easiest way to start would be to create a single database per user on the server, and a common database that you just pull the shared data from. Let me know if you need more info on this solution.
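A minimal sketch of that layout with PouchDB, where the URLs, database names and user id are placeholders and authentication is left out:

    import PouchDB from 'pouchdb';

    const userId = 'alice';

    // This user's private data: local copy, two-way sync with their own server-side database
    const privateDb = new PouchDB('private');
    privateDb.sync(`https://couch.example.com/userdb-${userId}`, { live: true, retry: true });

    // Data every user may read: pull-only replication into a local copy
    const sharedDb = new PouchDB('shared');
    sharedDb.replicate.from('https://couch.example.com/shared', { live: true, retry: true });

    // Data that is private but shared between two users (e.g. a message thread) can live in
    // a small dedicated database that only those two participants sync
    const conversationDb = new PouchDB('conv-alice-bob');
    conversationDb.sync('https://couch.example.com/conv-alice-bob', { live: true, retry: true });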

Any idea about keeping data locally with the Ionic framework

I am using the Ionic framework for my mobile app and I would like to add functionality where the user requests data (JSON) from the database (using a REST API) and it is kept in local storage on the device. Whenever the user comes back, the application will use the data from local storage instead.
I read there are many options to do this ($localstorage, sqlite) but I'm not sure which one is better in terms of performance and easy coding.
The data is text only and would be around 2,000 rows per query.
For performance, I would suggest going with SQLite; your data will also be stored more securely in your app.
You can use localStorage for temporary data that is not very important, as localStorage data can also be deleted due to activities of the device's browser.
With regard to performance, I suggested SQLite because it does not block the DOM or your view while a query is running, though getting data out of storage takes a few milliseconds more than localStorage. localStorage, on the other hand, completely blocks the DOM when being queried but is a little faster (very minor) than SQLite at fetching the data.
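To illustrate the difference, here is a small sketch assuming the cordova-sqlite-storage plugin is installed; the table, keys and values are made up. localStorage calls run synchronously on the UI thread, while the SQLite plugin hands you the result in a callback.

    // Synchronous: blocks until the read/write is done
    localStorage.setItem('user:42', JSON.stringify({ name: 'Ada' }));
    const user = JSON.parse(localStorage.getItem('user:42') ?? '{}');

    // Asynchronous: the query result arrives later in a callback
    const db = (window as any).sqlitePlugin.openDatabase({ name: 'app.db', location: 'default' });
    db.executeSql(
      'SELECT name FROM users WHERE id = ?',
      [42],
      (rs: any) => console.log('rows:', rs.rows.length),
      (err: any) => console.error('query failed', err)
    );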
Localstorage: its data is not stored permanently; data stored in localStorage is unreliable and can be wiped for various reasons. Besides, it has a storage limit of 5 to 10 MB! It is, though, better in terms of performance than the option below.
pouchDB: If you have installed the SQLite Cordova plugin then pouchDB will automatically use SQLite. It's a full database which overcomes both of the above limitations by storing data in a specific location on the device's storage. You get unlimited storage. Apart from that, it can handle complex queries easily, whereas localStorage is a simple key-value store. Since you are installing the Cordova plugin, it also ensures you have full cross-platform support.
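A small sketch of that setup, assuming the pouchdb-adapter-cordova-sqlite package is used alongside the SQLite Cordova plugin; the database name and document shape are invented:

    import PouchDB from 'pouchdb';
    import cordovaSqlite from 'pouchdb-adapter-cordova-sqlite';

    PouchDB.plugin(cordovaSqlite);
    const db = new PouchDB('appdata', { adapter: 'cordova-sqlite' });

    // Store the ~2,000 text rows from the question as documents in one bulk write
    async function cacheRows(rows: Array<{ _id: string; text: string }>): Promise<void> {
      await db.bulkDocs(rows);
    }

    // Read a single row back by id
    async function readRow(id: string): Promise<unknown> {
      return db.get(id);
    }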
The best option will be to store the data in a SQLite DB, because the data you want to store is quite large. The reason is quite simple: CRUD operations are easy to code and the performance is great. Moreover, when you plan your architecture you should think of all possible expansions, whereas local storage can only hold a limited amount of data.

How often to fetch data from the server

I'm pretty new to AngularJS and I'm writing my first application.
I'd like to know if there is a specific best practice about how often I should pull data from the server when I have to deal with a big dataset. Is it better to pull one big JSON dataset in a single call to the server, or is it advisable to fetch small batches of data with multiple requests?
Let me explain. My application currently fetches from the server all the JSON data required by the application when the main page loads. It's a lot of data (about 3 MB). Then it never fetches any other data; I can apply filters to the data and sort it, and it's all done client-side with no interaction with the server.
Now, is it worth fetching only a little data at the beginning and then, based on the applied filters, re-fetching data from the server?
Thanks!
It all depends on the specific requirements and usage patterns. If you are worried about quick load times, there are patterns similar to the ones used with jQuery.dataTables, which allow for super-quick loading of data by relying on server-side filtering.
If you have good cacheability (data is the same for all users) and no worries for long load times, go for the eager load (and use a filesystem-based cache with nginx serving the cached data).
In general, having local copies of the whole database is only useful when you can't do the work server-side, as an RDBMS is much better at data analysis than any JavaScript implementation.
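As a rough sketch of the server-side-filtering variant in AngularJS, where the endpoint, module name and parameter names are invented and the server is assumed to do the filtering, sorting and paging:

    // Assumes AngularJS is loaded globally (and @types/angular for the ng typings)
    angular.module('app').controller('ItemsCtrl', function ($scope: any, $http: ng.IHttpService) {
      // Ask the server for just the slice the user is currently looking at
      $scope.load = function (filter: string, page: number) {
        $http.get('/api/items', { params: { q: filter, page: page, pageSize: 50 } })
          .then(function (response) {
            $scope.items = response.data;
          });
      };

      $scope.load('', 1);
    });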
