Backbone: Many small requests for model changes vs one collection sync?

What is a generally good practice when some action changes multiple models in Backbone.js:
Trigger a separate PUT request for each model.save()
Send a single request to sync the entire collection

If the number of changed models is greater than one, it should definitely be the second option.

Usually, good REST API practice suggests that you should update, save, create, and delete single instances of persistent elements. In fact, you will find that Backbone.Collection does not implement these methods.
Also, if you use a standard URI scheme for your data access point, you will notice that a collection does not have a unique id.
GET /models //to read all elements,
GET /models/:id //to read one element,
PUT /models/:id //to update an element,
POST /models //to create an element,
DELETE /models/:id //to delete an element.
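To make the mapping concrete, here is a minimal sketch of how Backbone's model methods line up with these verbs (the Item model and its urlRoot are hypothetical):

var Item = Backbone.Model.extend({ urlRoot: "/models" });

var item = new Item({ name: "a" });
item.save();              // POST /models (model has no id yet, so Backbone creates)
// once the server responds with an id, e.g. {id: 1}:
item.save({ name: "b" }); // PUT /models/1 (model has an id, so Backbone updates)
item.fetch();             // GET /models/1 (re-reads the element)
item.destroy();           // DELETE /models/1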
If you need to update every model of a collection on the server at once, ask why; the model structure may need rethinking. Perhaps a separate model should hold that common information.
As suggested by Bart, you could implement a PATCH method to update only changed attributes of a particular element, thus saving bandwidth.

I like the first option, but I'd recommend you implement a PATCH behavior (only send updated attributes) to keep the requests as small as possible. This method gives you a more native "auto-save" feel like Google Docs. Of course, this all depends on your app and what you are doing.
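As a minimal sketch (assuming Backbone 0.9.9 or later, where save accepts a patch option; the note model and /notes endpoint are hypothetical), a PATCH-style save sends only the attributes you pass in:

var note = new Backbone.Model({ id: 42, title: "Old title", body: "..." });
note.urlRoot = "/notes";

// Sends PATCH /notes/42 with only {"title": "New title"} in the body.
note.save({ title: "New title" }, { patch: true });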

What's the preferred way to use Backbone with non-CRUD resources?

I'm new to Backbone/Marionette, but I believe I understand how to use Backbone for CRUD/REST; however, consider something like results from a search query. How should one model this? The results likely relate to some model(s), but they are not meant to be tied to those model(s).
Part of me thinks that I should use a collection whose model doesn't actually sync with a data store through the server, but instead just exists as a means of modeling a search result object.
Another solution could be to have a collection with no models and just override parse.
I assume that the former is preferred, but again I have no experience with the framework. If there's an alternative/better solution than those listed above, please advise.
I prefer having one object that is responsible for both the request and for parsing the response. It parses the response into the appropriate models and nothing more. If some of those parsed models are needed elsewhere on your page, whatever needs them keeps a reference to this wrapper object and pulls the models it requires through the wrapper's methods.
Another option is to use Radio (https://github.com/marionettejs/backbone.radio) in this wrapper - then you don't have to keep references to the wrapper object in different places, but can ask for the data via Radio.
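For comparison, a minimal sketch of the collection-that-overrides-parse option from the question (the /search endpoint and the response shape are hypothetical):

var SearchResults = Backbone.Collection.extend({
    url: "/search",
    // Assumes the server wraps the matching records in a "results" key.
    parse: function (response) {
        return response.results;
    }
});

var results = new SearchResults();
results.fetch({ data: { q: "backbone" } }); // GET /search?q=backbone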

SailsJS Models: Dynamic recentHistory attribute of a model, how to implement

I'm building a webapp and I have a model, let's call it "person", and "person" has an attribute "location". I want person to have another attribute, "recentHistory", that is an array with, say, the last 5 locations the user has had. What is the best way to implement this? I've skimmed the docs and I'm not really sure. The frontend is in AngularJS, if that matters (I don't think it should). Is the best implementation to use a beforeUpdate where, if the location is being updated, it adds the old location to that array (and drops the oldest entry)?
You can either store the data as an array or use associations. If you want to manage the array then the beforeUpdate lifecycle callback will be perfect.
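A minimal sketch of the array-managing approach, assuming Sails 0.x-style lifecycle callbacks and that the values passed to the update include the record's id (the model and attribute names follow the question):

// api/models/Person.js
module.exports = {
    attributes: {
        location: { type: 'string' },
        recentHistory: { type: 'array', defaultsTo: [] }
    },

    // Runs before every update; push the old location onto the
    // history and keep only the five most recent entries.
    beforeUpdate: function (values, cb) {
        if (!values.location || !values.id) return cb();
        Person.findOne(values.id).exec(function (err, person) {
            if (err) return cb(err);
            if (person && person.location) {
                var history = person.recentHistory || [];
                history.push(person.location);
                values.recentHistory = history.slice(-5);
            }
            cb();
        });
    }
};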

Clear all elements from Backbone Collection and remove them from the associated Lawnchair

I am using Backbone.js, Lawnchair, and backbone.lawnchair.js.
I was wondering what the correct way to "empty" a collection (both from the application and from localStorage) would be?
Currently, I am employing something along these lines:
Favorites.Collection = Backbone.Collection.extend({
    model: Favorites.Model,
    lawnchair: new Lawnchair({ name: "favorites" }, function (e) {}),

    // Wipe the backing store, then re-fetch so the in-memory models
    // are rebuilt from the (now empty) localStorage.
    empty: function () {
        this.lawnchair.nuke();
        this.fetch();
    }
});
This is essentially destroying the elements in localStorage (lawnchair provides the nuke method) and fetching from localStorage. This seems a bit clunky, and I was wondering if I am thinking about this right - or if there is a better way.
Cheers!
Backbone follows sort-of-RESTish principles, and sadly REST doesn't describe a bulk-delete procedure; resources can only be deleted one at a time. Typically APIs don't support HTTP DELETE on the entire collection URI like DELETE /favorites, only DELETE /favorites/42 - one at a time. Thus there's no single Backbone method that just does this, as the collection is mostly designed to fetch/GET and then delegate to the individual models for saves and deletes.
So yes, you're on your own for a bulk-delete approach. You could do something more RPC-like and pass a list of IDs to a delete procedure, but what you have above seems perfectly adequate to me, and deleting everything directly and then re-fetching the collection also seems very reasonable. So I think your solution is fine as is, but I'm also curious to see what others suggest.
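If you ever do need the one-DELETE-per-model route instead, a minimal sketch (note the copy via toArray(), since a successful destroy removes the model from the collection while you iterate):

empty: function () {
    // destroy() issues a DELETE per model and removes it from the collection.
    _.invoke(this.toArray(), 'destroy');
}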

Self Tracking Entities Traffic Optimization

I'm working on a personal project using WPF with Entity Framework and Self Tracking Entities. I have a WCF web service which exposes methods for the CRUD operations. Today I decided to run some tests to see what actually travels over this service, and even though I expected something like this, I was really disappointed. The problem is that for a simple update (or delete) operation on just one object - let's say a Category - I send the server the whole object graph, including all of its parent categories, their items, child categories and their items, etc. In my case it was a 170 KB XML payload on a really small database (2 main categories, about 20 categories total, and about 60 items). I can't imagine what will happen with a really big database.
I tried to google for some articles concerning traffic optimization with STE, but with no success, so I decided to ask here if somebody has done something similar, knows some good practices, etc.
One of the possible ways I came out with is to get the data I need per object with more service calls:
return context.Categories.ToList(); // only the categories
...
return context.Items.ToList(); // only the items
Instead of:
return context.Categories.Include("Items").ToList();
This way the categories and the items will be separated and when making changes or deleting some objects the data sent over the wire will be less.
Has any of you faced a similar problem, and if so, how did you solve it?
We've encountered similar challenges. First of all, as you already mentioned, keep the entities as small as possible (as dictated by the desired client functionality). Second, when sending entities back over the wire to be persisted: strip all navigation properties (nested objects) that haven't changed. This sounds very simple but is not at all trivial. What we do is recursively dig into the entities present in the trackable collections of, say, the "topmost" entity (and their trackable collections, and theirs, and so on) and remove them when their ChangeTracking state is "Unchanged". But be careful with this, because in some cases you still need these entities, namely when they have been removed from or added to a trackable collection of their parent entity (in which case you shouldn't remove them).
This approach, which we call "StripEntity", is also mentioned (though without a code sample) in Julie Lerman's Programming Entity Framework.
And although it might not be as efficient as a more purist approach, using STEs saves a lot of code for queries against the database. We don't need optimal performance in a high-traffic situation, so STEs suit our needs and take away a lot of the code needed to communicate with the database. You have to decide what the "best" solution is for your situation. Good luck!
You can find an Entity Framework project item at http://selftrackingentity.codeplex.com/. With version 0.9.8, I added a method called GetObjectGraphChanges() that returns an optimized entity object graph with only objects that have changes.
Also, there are two helper methods: EstimateObjectGraphSize() and EstimateObjectGraphChangeSize(). The first returns the estimated size of the whole entity object along with its object graph; the latter returns the estimated size of the optimized entity object graph containing only objects that have changes. With these two helper methods, you can decide whether it makes sense to call GetObjectGraphChanges() or not.

WPF and Active Objects

I have a collection of "active objects", that is, objects that need to periodically update themselves. In turn, these objects should be used to update a WPF-based GUI.
In the past I would just have each object include its own thread, but that only makes sense when working with a finite number of objects with well-defined life cycles. Now I'm using objects that only exist when needed by a form, so the life cycle is unpredictable. Also, I can have dozens of objects all making database and web service calls.
Under normal circumstances the update interval is 1 second, but it can stretch to 30 seconds due to timeouts.
So, what design would you recommend?
You could use one dispatcher (scheduler) for all of the active objects, or one per group of them. The dispatcher can process high-priority tasks first, then the others.
See this article about long-running active objects, with code, to find out how to do it. In addition, I recommend looking at the Half-Sync/Half-Async pattern.
If you have questions, feel free to ask.
I am not an expert, but I would just have the objects fire an event indicating when they've changed. The GUI can then refresh the necessary parts of itself (easy when using data binding and INotifyPropertyChanged) whenever it receives an event.
I'd probably try to generalize out some sort of data bus, if possible, and when objects are 'active' have them add themselves to a list of objects to be updated. I'd especially be tempted to use this pattern if the objects are backed by a database, as that way you can aggregate multiple queries instead of issuing a separate query for each object.
If there end up being no listeners for a specific object, no big deal, the data just goes nowhere.
The core updater code can then use a single timer (or multiple, or whatever is appropriate) to determine when to get updates. Doing this as more of a dataflow, and less of a 'state update' will probably save a lot of sanity in the end.
