My app uses PouchDB (and Ionic 1) to replicate its data from a local DB to a server DB as soon as the network is available (through a live & retry replication).
I would like to display on the screen the number of changes waiting for replication (including 0 when everything has been replicated).
Is there some way to do that with PouchDB?
(If this is not feasible, a fallback solution would be to have a "dirty" flag indicating whether everything has been replicated. Any idea for this?)
Thanks in advance!
Here is the way I did it (it's only the 'fallback' solution):
PouchDB.replicate(localDb, remoteDb, options)
  .on('paused', function (info) {
    if (info === undefined) {
      // the replication has finished and is waiting for other changes
      $rootScope.syncStatus = "pristine";
    } else {
      // there are some pending changes to be replicated remotely
      $rootScope.syncStatus = "dirty";
    }
  });
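To get closer to the original question (showing an actual count): some replication sources report the number of docs left to replicate on the 'change' event. A minimal sketch, assuming the adapter exposes info.pending (not all adapters do):

PouchDB.replicate(localDb, remoteDb, { live: true, retry: true })
  .on('change', function (info) {
    // info.pending is the number of docs left to replicate,
    // when the source database reports it (treat it as optional)
    if (typeof info.pending === 'number') {
      $rootScope.pendingChanges = info.pending;
    }
  })
  .on('paused', function (err) {
    if (!err) {
      $rootScope.pendingChanges = 0; // caught up, nothing left to replicate
    }
  });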
I'm building an Angular Shop-Frontend which consumes a REST-API with Restangular.
To get the articles from the API, I use Restangular.all("articles") and I setup Restangular to cache this request.
When I want to get one article from the API, for example on the article-detail page by its linkname and later somewhere else (on the cart summary) by its id, I would need 3 REST calls:
/api/articles
/api/articles?linkname=some_article
/api/articles/5
But actually, the data for the two latter calls is already available from the cached first call.
So instead I thought about using the cached articles and filtering them to save the additional REST calls.
I built these functions into my ArticleService and it works as expected:
function getOne(articleId) {
  var article = $q.defer();
  restangular.all("articles").getList().then(function (articles) {
    var filtered = $filter('filter')(articles, { id: articleId }, true);
    article.resolve((filtered.length === 1) ? filtered[0] : null);
  });
  return article.promise;
}

function getOneByLinkname(linkname) {
  var article = $q.defer();
  restangular.all("articles").getList().then(function (articles) {
    var filtered = $filter('filter')(articles, { linkname: linkname }, true);
    article.resolve((filtered.length === 1) ? filtered[0] : null);
  });
  return article.promise;
}
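For context, a call site might look like this (the controller scope here is just for illustration):

ArticleService.getOne(5).then(function (article) {
  // article is null if no unique match was found in the cached list
  $scope.article = article;
});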
My questions concerning this approach:
Are there any downsides I don't see right now? What would be the correct way to go? Is my approach of making as few REST calls as possible legitimate?
Thanks for your help.
Are there any downsides I don't see right now?
It depends on the functionality of your application. If it requires real-time data, then performing REST calls to obtain the latest data would be a requirement.
What would be the correct way to go? Is my approach of making as few REST calls as possible legitimate?
It still depends. If you want, you can explore push notifications, so that when your data on the server is changed or modified, you push that information to your client. That way, the REST operations happen based on conditions you have defined.
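As a rough sketch of that idea (the WebSocket endpoint and message shape are assumptions, and this presumes Restangular's cache is the default $http cache):

var socket = new WebSocket('wss://example.com/updates'); // hypothetical endpoint
socket.onmessage = function (event) {
  var message = JSON.parse(event.data);
  if (message.type === 'articles-changed') {
    // Drop the cached list so the next getList() refetches fresh data.
    $cacheFactory.get('$http').remove('/api/articles');
  }
};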
Current application: an Angular application with Breeze. The application has ~7 entity managers and different data domains (metadata). When the application runs, we try to load metadata for every entity manager, like:
app.run(['$rootScope', 'datacontext1', /* ... */ 'datacontext7',
  function ($rootScope, datacontext1 /* ... */, datacontext7) {
    datacontext1.loadMetadata();
    // ...
    datacontext7.loadMetadata();
  }
]);
Every datacontext has its own entity manager and loadMetadata is:
function loadMetadata() {
  manager.fetchMetadata().then(function (mdata) {
    if (mdata === 'already fetched') {
      return;
    }
    ...
    applyCustomMetadata(); // Do some custom job with metadata/entity types
  });
}
Metadata comes from the server asynchronously. A few modules have really big metadata, around 200 KB, which takes some time to load and apply to the entity manager. It is possible that the first Breeze data request executed in the same entity manager starts before this loadMetadata operation has finished, and as I understand it, Breeze then automatically fetches the metadata again. Usually this is not a problem, since the metadata endpoint is cached on the server, but sometimes it produces very strange behavior in Breeze: EntityManager.fetchMetadata resolves its promise as "already fetched", and in that case the applyCustomMetadata() operation is never executed.
As I understand it, the problem is inside Breeze and the approach it uses to resolve the metadata promise (it seems the http adapter is a singleton, so the second request overrides the metadata with the "already fetched" string and the applyCustomMetadata() operation never executes).
I need to figure out some way to resolve this issue without significant changes to the application.
Logically, I need to keep the whole application from using the entity managers until loadMetadata is done. I am looking for a way, at the Breeze level, to disable the automatic metadata fetch if one is already in progress (not interrupting the request, just waiting and retrying after some time). Any other ideas are fine as well.
Why are you allowing queries to execute before the metadata is loaded? Therein lies your problem.
I have an application bootstrapper that I expose through a global variable; none of my application activities depending on the Entity Manager are started until preliminary processes complete:
var bootstrapper = {
  pageReady: ko.observable(false)
};

initBootstrapper();
return bootstrapper;

function initBootstrapper() {
  window.MyApp.entityManagerProvider.initialize() // load metadata, lookups, etc.
    .then(function () {
      window.MyApp.router.initialize(); // setup page routes, home ViewModel, etc.
      bootstrapper.pageReady(true);     // show homepage
    });
}
Additionally, depending on the frequency of database changes in your organization, you may wish to deliver the metadata to the client synchronously on page load. See this documentation for further details:
http://breeze.github.io/doc-js/metadata-load-from-script.html
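If synchronous delivery isn't an option, the same gating idea can be approximated in the question's own setup. A minimal sketch, assuming each loadMetadata is changed to return its promise:

// Collect the metadata promises up front...
var metadataReady = $q.all([
  datacontext1.loadMetadata(),
  // ...
  datacontext7.loadMetadata()
]);

// ...and make every consumer wait on them before querying (hypothetical example):
function getCustomers() {
  return metadataReady.then(function () {
    return manager.executeQuery(breeze.EntityQuery.from('Customers'));
  });
}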
I am running Grails 1.3.7 and using the Grails database migration plugin, version database-migration-1.0.
The problem is that I have a migration changeset that pulls blobs out of a table and writes them to disk. When running through this migration I am running out of heap space. I was thinking I would need to flush and clear the session to free up some space, however I am having difficulty getting access to the session from within the migration. BTW, the reason this is done in a migration is that we are moving away from storing files in Oracle and putting them on disk instead.
I have tried
SessionFactoryUtils.getSession(sessionFactory, true)
I have also tried
SecurityRequestHolder.request.getSession(false) // request is null -> not surprising
changeSet(author: "userone", id: "saveFilesToDisk-1") {
    grailsChange {
        change {
            def fileIds = sql.rows("""SELECT id FROM erp_file""")
            for (row in fileIds) {
                def erpFile = ErpFile.get(row.id)
                erpFile.writeToDisk()
                session.flush()
                session.clear()
                propertyInstanceMap.get().clear()
            }
            ConfigurationHolder.config.erp.ErpFile.persistenceMode = previousMode
        }
    }
}
Any help would be greatly appreciated.
The application context will be automatically available in your migration as ctx. You can get the session like this:
def session = ctx.sessionFactory.currentSession
To access the session, you can use the withSession closure like this:
Book.withSession { session ->
    session.clear()
}
But this may not be the reason why your app runs out of heap space. If the data volume is large, then
def fileIds = sql.rows("""SELECT id FROM erp_file""")
for (row in fileIds) {
    ..........
}
will use up your heap space. Try to process the data with pagination, as sketched below; don't load all the data at once.
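A minimal sketch of such pagination, assuming the migration's sql instance and the ErpFile domain class from the question (the page size is an arbitrary choice):

int pageSize = 100
int offset = 1 // groovy.sql.Sql row offsets are 1-based
while (true) {
    def page = sql.rows("SELECT id FROM erp_file", offset, pageSize)
    if (!page) {
        break
    }
    page.each { row ->
        ErpFile.get(row.id)?.writeToDisk()
    }
    // Flush and clear between pages so the Hibernate session doesn't grow unbounded.
    ErpFile.withSession { session ->
        session.flush()
        session.clear()
    }
    offset += pageSize
}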
I use two different events for the callback to respond when the IndexedDB transaction finishes or is successful:
Let's say... db : IDBDatabase object, tr : IDBTransaction object, os : IDBObjectStore object
tr = db.transaction(os_name, 'readwrite');
os = tr.objectStore(os_name); // objectStore() requires the store name
Case 1:
r = os.openCursor();
r.onsuccess = function () {
    if (r.result) {
        callback_for_result_fetched();
        r.result.continue(); // continue is a method and must be called
    } else {
        callback_for_transaction_finish();
    }
};
Case 2:
tr.oncomplete = callback_for_transaction_finish; // assign the function, don't invoke it
It would be a waste if both of them worked similarly. So can you tell me: is there any difference between them?
Sorry for bumping quite an old thread, but its question is a good starting point...
I've looked for a similar question with a slightly different use case and actually found no good answers, and even some misleading ones.
Think of a use case where you need to make several writes into an objectStore, or even into several of them. You definitely don't want to manage each single write and its own success and error events. That is the meaning of a transaction, and this is the (proper) implementation of it for IndexedDB:
var trx = dbInstance.transaction([storeIdA, storeIdB], 'readwrite'),
    storeA = trx.objectStore(storeIdA),
    storeB = trx.objectStore(storeIdB);

trx.oncomplete = function (event) {
    // this code will run only when ALL of the following requests succeed
    // and only AFTER ALL of them were processed
};

trx.onerror = function (error) {
    // this code will run if ANY of the following requests fails
    // and only AFTER ALL of them were processed
};

storeA.put({ key: keyA, value: valueA });
storeA.put({ key: keyB, value: valueB });
storeB.put({ key: keyA, value: valueA });
storeB.put({ key: keyB, value: valueB });
The clue to this understanding is found in the following statement of the W3C spec:
To determine if a transaction has completed successfully, listen to the transaction’s complete event rather than the success event of a particular request, because the transaction may still fail after the success event fires.
While it's true these callbacks function similarly, they are not the same: the difference between onsuccess and oncomplete is that transactions complete, but requests, which are made on those transactions, succeed.
oncomplete is only defined in the spec as related to a transaction. A transaction doesn't have an onsuccess callback.
I would only caution that there is no guarantee that a successful trx.oncomplete means the data was written to the disk/database:
We are seeing a problem with trx.oncomplete where the data is not being written to the db on disk. Firefox has an explanation of what they did that is causing this problem here: https://developer.mozilla.org/en-US/docs/Web/API/IDBTransaction/oncomplete
It seems that Windows/Edge is also having the same issue. Basically, there is no guarantee that your app will have data written to the database if/when the user decides to kill or power down the device. We've even tried waiting up to 15 minutes before shutting down in some cases and haven't seen the data written. For me, I'd always want to ensure that a data write completes and is committed.
Are there other solutions for a real persistent database, or enhancements to the IndexedDB beyond FF experimental add...
I want to know the relation between Velocity and IIS. If a request is satisfied by Velocity, will it still use a worker process? Or what happens? I'm confused.
Also, I want to store some data like country, state and city for auto-suggest in Velocity. This database could be around 3 GB. Now how will Velocity work, and how will IIS work? Is this going to affect IIS? Basically, my requirement is that I want to keep all country, state and city data in Velocity, and I don't want to hit the database or make IIS busy. What is the solution?
Please help
Velocity was the codename for Microsoft's AppFabric distributed caching technology. Very similar to memcached, it is used for caching objects across multiple computers.
This has no real bearing on how IIS processes requests. All requests are satisfied by IIS, AppFabric is a mechanism for storing data, not processing requests.
In answer to your second question: you can use AppFabric as a first-call check for data. If the data does not exist in the cache, call the database to populate the cache, and then return the data.
var factory = new DataCacheFactory();
var cache = factory.GetCache("AutoSuggest");

List<Region> regions = cache.Get("Regions") as List<Region>;
if (regions == null) {
    regions = GetRegionsFromDatabase(); // hypothetical helper: get regions from the database
    cache.Add("Regions", regions);
}
return regions;
Checking the cache first enables the app to get a faster response, as the database is only hit on the first instance (ideally), and the result data is pushed back into the cache.
You could wrap this up a bit more:
public T Get<T>(string cacheName, string keyName, Func<T> itemFactory)
    where T : class
{
    // dataFactory is assumed to be a shared DataCacheFactory instance.
    var cache = dataFactory.GetCache(cacheName);
    T value = cache.Get(keyName) as T;
    if (value == null) {
        value = itemFactory();
        cache.Add(keyName, value);
    }
    return value;
}
That way you can change your lookup calls to something similar to:
var regions = Get<List<Region>>("AutoSuggest", "Regions", () => GetRegions());