Recursive timed XHR to fill a collection - backbone.js

For one of my Backbone projects (in which I cannot implement REST/sync), I need to refresh a Backbone collection (using Backbone-relational for the models, if it matters) every X seconds.
What I've been doing is implementing a function like this:
refresh: function () {
    var self = this;
    // clears timeout
    self.timeoutRefresh && clearTimeout(self.timeoutRefresh);
    // aborts request if running
    self.xhrRefresh && self.xhrRefresh.abort();
    // do request
    self.xhrRefresh = self.options.myfunction.call(self, {}, function (data) {
        self.mycollection.set(data);
        // call it again in 5 seconds
        self.timeoutRefresh = _.delay(function () {
            self.refresh.call(self);
        }, 5 * 1000);
    });
},
The problem is that this block of code seems to be guilty of a big memory leak in my application.
Could it be a closure problem with the self variable?
Should I then do the recursive call this way instead?
self.timeoutRefresh = _.delay(function (context) {
    context.refresh.call(context);
}, 5 * 1000, self);
If not, where does it come from ?

Well, after a lot of Chrome debugging, I figured out that Backbone-relational was never actually replacing my objects, because it does its comparison on the id attribute, which my objects didn't have (physically or logically).
I ended up calculating an md5 hash from the different meaningful properties of my object and using it as the id, so Backbone-relational would know it shouldn't consider the object a new one.
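For illustration, a minimal sketch of that workaround in a collection's parse, assuming a SparkMD5-style hash helper is available (the library and the property names are assumptions, not from my actual code):
parse: function (response) {
    return _.map(response, function (item) {
        // hash the meaningful properties so identical objects get identical ids,
        // letting Backbone-relational match them instead of treating them as new
        item.id = SparkMD5.hash(JSON.stringify(
            _.pick(item, 'name', 'status', 'updatedAt') // hypothetical properties
        ));
        return item;
    });
}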
Conclusion: it's not a closure problem, as Chrome's garbage collector does its thing well.
NB: Backbone debugger helped me a lot in finding where the problem came from.

Related

angularjs chain http post sequentially

In my application, I store data in local storage and trigger an async HTTP POST in the background. Once it is successfully posted, the posted data is removed from local storage. While a POST is in progress, more data may be added to local storage, so I need to queue up the posts and process them sequentially, because I need to wait for local storage to be cleared by the successful posts. The task should be called recursively as long as there is data in local storage.
taskQueue: function () {
    var deferred = $q.defer();
    queue.push(deferred);
    var promise = deferred.promise;
    if (!saveInProgress) {
        // get post data from storage
        var promises = queue.map(function () {
            $http.post(<post url>, <post data>).then(function (result) {
                // clear successful data
                deferred.resolve(result);
            }, function (error) {
                deferred.reject(error);
            })
        })
        return $q.all(promises);
    }
}
As an Angular newbie, I am having problems with the above code not being sequential. How can I achieve what I intend? The queue length is unknown, and it also grows while the process is running. Please point me to other answers if this is a duplicate.
Async.js sounds like a good idea, but if you don't want to use a library...
$q.all batches up an array of request promises that have all already been fired at the same time, and it resolves only when ALL promises in the array resolve - WHICH IS NOT WHAT YOU WANT.
To make $http calls SEQUENTIALLY from an array, wrap each call in a function (otherwise $http fires the request the moment the promise is created) and chain them like this...
var request0 = function () { return $http.get('/my/path0'); };
var request1 = function () { return $http.post('/my/path1', {data: 'fred'}); };
var request2 = function () { return $http.get('/my/path2'); };
var requestArray = [];
then ...
requestArray.push(request0);
requestArray.push(request1);
requestArray.push(request2);
then ...
requestArray[0]().then(function (response0) {
    // do something with response0, then fire the next request
    return requestArray[1]();
}).then(function (response1) {
    // do something with response1, then fire the next request
    return requestArray[2]();
}).then(function (response2) {
    // do something with response2
}).catch(function (failedResponse) {
    console.log("i will be displayed when a request fails (if ever)", failedResponse);
});
While having a library solution would be great (per @nstoitsev's answer), you can do this without it.
sequential requests of unknown length
Just to recap:
we do not know the number of requests
each response may enqueue another request
A few assumptions:
all requests will be working on a common data object (local storage in your case)
all requests are promises
running the queue
function postMyData (data){
    return $http.post(<url>, data)
}

var rqsts = []

function executeQueue (){
    if(!rqsts.length)
        //we're done
        return
    var rqst = rqsts.shift()
    rqst()
        .then(function(rsp){
            //based on how you're determining if you need to do another request...
            if(keepGoing)
                //queue a function, not a promise, so the next post doesn't fire yet
                rqsts.push(function(){ return postMyData(<more data>) })
            //process the next queued request only after this one resolves
            executeQueue()
        })
}
codepen - http://codepen.io/jusopi/pen/VaYRXR?editors=1010
I intentionally left this vague because I don't understand what the conditions for failure are, and if you wanted to vary the requests beyond the same $http.post call, you could pass that in some way.
and just a suggestion
As angular newbie...
Many things are progressing towards this whole functional, reactive programming paradigm. Since you're relatively new to Angular and NG2 already has some of this built in, it might be worthy of your attention. I think rxjs is already in many NG2 example bundles.
The easiest way to achieve this is by using Async.js. There you can find a method called mapSeries. You can run it over the queue and it will process all elements of the array sequentially, one by one, continuing to the next element only when the corresponding callback is called.
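For illustration, a rough sketch of what that could look like here, with <post url> left as the placeholder from the question:
async.mapSeries(queue, function (item, callback) {
    $http.post(<post url>, item).then(function (result) {
        // clear this item from local storage here, then move on to the next one
        callback(null, result);
    }, function (error) {
        // passing an error stops the series early
        callback(error);
    });
}, function (err, results) {
    // runs once after every item has been posted (or after the first error)
});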

Filter cached REST-Data vs multiple REST-calls

I'm building an Angular Shop-Frontend which consumes a REST-API with Restangular.
To get the articles from the API, I use Restangular.all("articles") and I setup Restangular to cache this request.
When I want to get one article from the API, for example on the article-detail page by its linkname, and later somewhere else (on the cart summary) by its id, I would need 3 REST calls:
/api/articles
/api/articles?linkname=some_article
/api/articles/5
But actually, the data for the two later calls is already available from the cached first call.
So instead, I thought about using the cached articles and filtering them to save the additional REST calls.
I built these functions into my ArticleService and it works as expected:
function getOne(articleId) {
    var article = $q.defer();
    restangular.all("articles").getList().then(function (articles) {
        var filtered = $filter('filter')(articles, {id: articleId}, true);
        article.resolve((filtered.length == 1) ? filtered[0] : null);
    });
    return article.promise;
}
function getOneByLinkname(linkname) {
    var article = $q.defer();
    restangular.all("articles").getList().then(function (articles) {
        var filtered = $filter('filter')(articles, {linkname: linkname}, true);
        article.resolve((filtered.length == 1) ? filtered[0] : null);
    });
    return article.promise;
}
My questions concerning this approach:
Are there any downsides I don't see right now? What would be the correct way to go? Is my approach legitimate, to have as little REST-calls as possible?
Thanks for your help.
Are there any downsides I don't see right now?
It depends on the functionality of your application. If it requires real-time data, then performing the REST calls to obtain the latest data would be a requirement.
What would be the correct way to go? Is my approach legitimate, to have as little REST-calls as possible?
It still depends. If you want, you can explore push notifications, such that when your data on the server is changed or modified, you push that info to your clients. That way, the REST operations happen based on conditions you have defined.
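If you do keep the client-side filtering, note that your two service functions differ only in the property they filter on, so they could share one helper. A sketch along the lines of your code (returning the promise from .then also saves the extra $q.defer()):
function getOneBy(property, value) {
    // build the filter criteria object for the requested property
    var criteria = {};
    criteria[property] = value;
    return restangular.all("articles").getList().then(function (articles) {
        var filtered = $filter('filter')(articles, criteria, true);
        return (filtered.length === 1) ? filtered[0] : null;
    });
}
// usage: getOneBy('id', articleId) or getOneBy('linkname', linkname)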

rx.js catchup subscription from two sources

I need to combine a catch-up feed and a subscription to a new feed. So first I query the database for all the records I've missed, then switch to a pub/sub for all new records that are coming in.
The first part is easy: do your query, perhaps in batches of 500; that gives you an array, and you can use Rx.Observable.fromArray on it.
The second part is easy: you just use Rx.Observable.fromEvent on the pub/sub.
But I need to do this sequentially, so I need to play all the old records before I start playing the new ones coming in.
I figure I can start the pub/sub subscription, put those events in an array, then start processing the old ones; when I'm done, either remove the dups (or, since I do a dup check, allow the few dups), play the accumulated records until they are gone, and then go one in, one out.
My question is: what is the best way to do this? Should I create a subscription to start building up new records in an array, then start processing the old ones, and then in the "then" of the old-record process subscribe to the other array?
OK, this is what I have so far. I need to build up the tests and finish some pseudo-code to find out if it even works, much less whether it is a good implementation. Feel free to stop me in my tracks before I bury myself.
var catchUpSubscription = function catchUpSubscription(startFrom) {
    EventEmitter.call(this);
    var subscription = this.getCurrentEventsSubscription();
    // calling map to start subscription and catch in an array.
    // not sure if this is right
    var events = rx.Observable.fromEvent(subscription, 'event').map(x => x);
    // getPastEvents gets batches of 500, iterates over and emits each
    // till no more are returned, then resolves a promise
    this.getPastEvents({count: 500, start: startFrom})
        .then(function () {
            rx.Observable.fromArray(events).forEach(x => emit('event', x));
        });
};
I don't know that this is the best way. Any thoughts?
thx
I would avoid mixing your different async strategies unnecessarily. You can use concat to join together the two sequences:
var catchUpSubscription = function catchUpSubscription(startFrom) {
    var subscription = this.getCurrentEventsSubscription();
    return Rx.Observable.fromPromise(this.getPastEvents({count: 500, start: startFrom}))
        .flatMap(x => x)
        .concat(Rx.Observable.fromEvent(subscription, 'event'));
};

/// Sometime later
catchUpSubscription(startTime).subscribe(x => { /* handle event */ });
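If the overlap duplicates you mentioned matter, one option in RxJS v4 is distinct with a key selector, assuming each event carries some unique id (the id field here is an assumption):
catchUpSubscription(startTime)
    // drop any pub/sub event already replayed from the database
    .distinct(function (x) { return x.id; })
    .subscribe(function (x) { /* handle event */ });
Keep in mind distinct remembers every key it has seen, so its memory use grows with the stream.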

I can't seem to find a way to fetch a specific record from Firebase

This is a simple service I've built for Firebase for an application. I've had to jury-rig some elements, and I've made absolutely sure I am using the latest versions of Firebase and AngularFire, since it seems to be changing pretty fast. These first few lines are pretty straightforward:
app.factory('Ship', function ($routeParams, $firebase, FIREBASE_URL) {
    var ref = new Firebase(FIREBASE_URL + 'ships');
The problems begin here. Depending on what I intend to do with the $firebase object, at times it needs $asObject(), at other times not. It depends on the tutorial, and the most recent ones would seem to indicate that:
    var shipsObj = $firebase(ref).$asObject(); // Is this necessary
    var ships = $firebase(ref); // in the most modern version?
    var Ship = {
        all: shipsObj, // This works fine
        create: function (ship) {
            return shipsObj.$add(ship); // This also works fine
        },
        find: function (shipId) {
            console.log($routeParams.shipId); // <-- this ID appears as the correct numerical ID
Then there are the next six lines, NONE of which work. They all produce an error indicating that they are undefined.
            console.log(shipsObj.$child(shipId));
            console.log(ships.$child(shipId));
            console.log(shipsObj.$getRecord(shipId));
            console.log(ships.$getRecord(shipId));
            console.log(ships.$keyAt(shipId));
            console.log(shipsObj.$keyAt(shipId));
        },
I won't bore you with repeating the next method multiple times as well, but $remove isn't working either.
        delete: function (shipId) {
            return ships.$remove(shipId);
        }
    };
    return Ship;
});
Assuming you're using v0.8 of AngularFire, you'll want to use $asObject() or $asArray() to get at the actual data. Here's the official blog post that discusses the changes in v0.8: https://www.firebase.com/blog/2014-07-30-introducing-angularfire-08.html
So to access a ship by its id you could do:
var shipsObj = $firebase(ref).$asObject();
console.log(shipsObj[shipId]);
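If you'd rather keep the list as a synchronized array, v0.8's $asArray exposes lookups and removal directly; a sketch based on the 0.8 API (shipId is your $routeParams value):
var ships = $firebase(ref).$asArray();
ships.$loaded().then(function () {
    var ship = ships.$getRecord(shipId); // null if no record has that key
    console.log(ship);
    // and for the delete method from the question:
    // ships.$remove(ship);
});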
You may also want to take a look at the API docs for AngularFire: https://www.firebase.com/docs/web/bindings/angular/api.html
A lot changed in v0.8, and it just came out (July 2014), so if you're basing your code on anything older than that, it probably won't work.

Indexeddb: Differences between onsuccess and oncomplete?

I use two different events for the callback that responds when an IndexedDB transaction finishes or is successful.
Let's say... db: an IDBDatabase object, tr: an IDBTransaction object, os: an IDBObjectStore object.
tr = db.transaction(os_name, 'readwrite');
os = tr.objectStore(os_name);
Case 1:
r = os.openCursor();
r.onsuccess = function () {
    if (r.result) {
        callback_for_result_fetched();
        r.result.continue();
    } else callback_for_transaction_finish();
};
Case 2:
tr.oncomplete = callback_for_transaction_finish;
It is a waste if both of them work similarly. So can you tell me, is there any difference between them?
Sorry for raising quite an old thread, but its question is a good starting point...
I've looked for a similar question in a slightly different use case and actually found no good answers, or even misleading ones.
Think of a use case where you need to make several writes into one objectStore, or even into several. You definitely don't want to manage each single write and its own success and error events. That is the point of a transaction, and this is the (proper) use of it for IndexedDB:
var trx = dbInstance.transaction([storeIdA, storeIdB], 'readwrite'),
    storeA = trx.objectStore(storeIdA),
    storeB = trx.objectStore(storeIdB);

trx.oncomplete = function (event) {
    // this code will run only when ALL of the following requests succeed
    // and only AFTER ALL of them were processed
};
trx.onerror = function (error) {
    // this code will run if ANY of the following requests fails
    // and only AFTER ALL of them were processed
};

storeA.put({ key: keyA, value: valueA });
storeA.put({ key: keyB, value: valueB });
storeB.put({ key: keyA, value: valueA });
storeB.put({ key: keyB, value: valueB });
The clue to this understanding is found in the following statement of the W3C spec:
To determine if a transaction has completed successfully, listen to the transaction’s complete event rather than the success event of a particular request, because the transaction may still fail after the success event fires.
While it's true these callbacks function similarly, they are not the same: the difference between onsuccess and oncomplete is that transactions complete, while requests made within those transactions succeed.
oncomplete is only defined in the spec as related to a transaction. A transaction doesn't have an onsuccess callback.
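A minimal sketch of that ordering, reusing db and os_name from the question and assuming the store uses in-line keys (a keyPath):
var tr = db.transaction(os_name, 'readwrite');
var r = tr.objectStore(os_name).put({ key: 'a', value: 1 });
r.onsuccess = function () {
    // the put request succeeded, but the transaction may still fail or abort
    console.log('request success');
};
tr.oncomplete = function () {
    // only now is the whole transaction committed
    console.log('transaction complete');
};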
I would only caution that there is no guarantee that a successful trx.oncomplete means the data was written to the disk/database:
We are seeing a problem with trx.oncomplete where the data is not being written to the db on disk. Firefox has an explanation of what they did that causes this problem here: https://developer.mozilla.org/en-US/docs/Web/API/IDBTransaction/oncomplete
It seems that Windows/Edge has the same issue. Basically, there is no guarantee that your app will have its data written to the database if/when the user decides to kill or power down the device. We've even tried waiting up to 15 minutes before shutting down in some cases and haven't seen the data written. For me, I'd always want to ensure that a data write completes and is committed.
Are there other solutions for a real persistent database, or enhancements to the IndexedDB beyond FF experimental add...
