The best way to extend a model - backbone.js

According to the Backbone.js documentation, Model.parse does the following:
parse is called whenever a model's data is returned by the server, in
fetch, and save.
To augment models I've already loaded, I use Model.parse(): I call fetch to make an additional request for data, then use that data to add properties to the existing model.
Example:
the fetched object is {age: 19}
after parse it will be {age: 19, isAdult: true}
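For illustration, a parse override along these lines produces that result (the age threshold is just an example):

var Person = Backbone.Model.extend({
    urlRoot: '/people',   // example endpoint
    parse: function (response) {
        // derive a local flag from the server data
        response.isAdult = response.age >= 18;
        return response;
    }
});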
When I save, the PUT request also includes the extra parameters the server doesn't need (for example isAdult). I would like the PUT request to contain only the original model, without the additional parameters.
What is the best way to achieve my goal in Backbone?

If I understand your question correctly ....
When Backbone talks to a server via save, it sends a complete representation of the model. As the docs put it:
The attributes hash (as in set) should contain the attributes you'd
like to change — keys that aren't mentioned won't be altered — but, a
complete representation of the resource will be sent to the server.
So the default behavior is to send the complete model. If you want to implement your own logic, you're going to have to override the sync method. Dig through the un-minified Backbone source a bit and you'll see this comment above sync:
// Override this function to change the manner in which Backbone persists
// models to the server. You will be passed the type of request, and the model in question.
I would use the default implementation of sync as my starting point.
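For example, a model-level sync override that strips the derived attribute before delegating to the default implementation might look roughly like this. It's a sketch: it relies on the attrs option that recent Backbone versions pass through as the request body, and isAdult is the computed attribute from the question:

var Person = Backbone.Model.extend({
    parse: function (response) {
        response.isAdult = response.age >= 18;
        return response;
    },
    sync: function (method, model, options) {
        if (method === 'create' || method === 'update') {
            // send everything except the locally computed attribute
            options = _.extend({}, options, {
                attrs: _.omit(model.toJSON(), 'isAdult')
            });
        }
        return Backbone.sync(method, model, options);
    }
});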

Related

What's the preferred way to go about using backbone with non-crud resources?

New to backbone/marionette, but I believe that I understand how to use backbone when dealing with CRUD/REST; however, consider something like results from a search query. How should one model this? Of course the results likely relate to some model(s), but they are not meant to be tied to said model(s).
Part of me thinks that I should use a collection using a model that doesn't actually sync with a data store through the server, but instead just exists as a means of a modeling a search result object.
Another solution could be to have a collection with no models and just override parse.
I assume that the former is preferred, but again I have no experience with the framework. If there's an alternative/better solution than those listed above, please advise.
I prefer having one object that is responsible for both making the request and parsing the response. It parses the response into the appropriate models and nothing more. If some of those parsed models are needed somewhere on your page, whatever needs them keeps a reference to this wrapper object and pulls the models it requires out of the response via the wrapper's methods.
Another option is to wire Radio (https://github.com/marionettejs/backbone.radio) into this wrapper - then you don't have to keep a reference to the wrapper object in different places, but can ask for the data via Radio.
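A minimal sketch of such a wrapper, assuming Backbone.Radio is loaded and a /search endpoint that returns a results array:

var searchChannel = Backbone.Radio.channel('search');

var SearchService = {
    results: null,

    search: function (query) {
        var self = this;
        return Backbone.ajax({ url: '/search', data: { q: query } })
            .then(function (response) {
                // parse the raw payload into plain, never-synced models
                self.results = new Backbone.Collection(response.results);
                return self.results;
            });
    }
};

// anything on the page can ask for the latest results without holding the wrapper
searchChannel.reply('results', function () { return SearchService.results; });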

Can/should a backbone model store (temporary) data that's not on the server?

With Backbone, I do a somewhat expensive calculation for each Model in my Collection and there can be a lot of Models. I'm thinking I'd like to store the result in each Model with set(), but I don't want to save it to the server. Is this generally a bad idea?
If that's not good, is the better practice to keep it in an array variable or a Model (a calculations results model separate from the cached server data model)?
Why do I think this might be a good idea?
I wouldn't ever have to give thought to the array variable's scope/context.
No looking up the array contents once I have the relevant Model.
Data is more encapsulated.
Why do I think this might be a bad idea?
Mixes cached server data with calculated local data.
Probably have to write sync code so that save() only saves the attributes the server should get.
Thanks!
EDIT
Found someone exploring a similar issue, with good discussion: Custom Model Property in Template.
This seems to have a pretty thorough answer that I am exploring: Backbone Computed Properties.
One solution might be to override the toJSON function of your Model.
This function is called by the save function to get the attributes to be sent back to the server.
Looking at the docs, the toJSON function basically says you could use it for exactly this purpose:
Return a copy of the model's attributes for JSON stringification. This can be used for
persistence, serialization, or for augmentation before being sent to the server.
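A rough sketch of that, assuming the computed value lives in an attribute called expensiveResult:

var Item = Backbone.Model.extend({
    toJSON: function (options) {
        // everything except the locally computed attribute goes to the server
        return _.omit(this.attributes, 'expensiveResult');
    }
});

Keep in mind that toJSON is also what most templates consume, so stripping attributes here affects rendering as well as persistence.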
I would personally not consider it bad practice, but it all depends on the number of models and the cost of the calculation itself, so it comes down to your specific use case.
Alternatively, you could store the calculated value not in your model's attributes but somewhere else on the model instance. That way it is kept out of the attributes you synchronize back and forth with your server.
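A quick sketch of that (the property name and urlRoot are arbitrary):

var Item = Backbone.Model.extend({
    urlRoot: '/items',
    initialize: function () {
        // plain instance property: never part of attributes,
        // so never serialized by toJSON() or sent by save()
        this.expensiveResult = null;
    }
});

var item = new Item({ value: 21 });
item.expensiveResult = item.get('value') * 2;   // stand-in for the real calculation
item.save();                                    // request body contains only { value: 21 }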

Backbone: Many small requests for model changes vs one collection sync?

What is generally good practice when some action changes multiple models in Backbone.js:
Trigger a separate PUT request for each model.save()
Send a single request to sync the entire collection
If more than one model has changed, it should definitely be the second option.
Usually, good REST API practice seems to suggest that you should update, save, create, and delete single instances of persistent elements. In fact, you will find that a Backbone.Collection object does not implement these methods.
Also, if you use a standard URI scheme for your data access point, you will notice that a collection does not have a unique id.
GET /models //to get all items,
GET /models/:id //to read an element,
PUT /models/:id //to update an element,
POST /models //to create an element,
DELETE /models/:id //to delete an element.
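For reference, a plain Backbone model maps its own CRUD calls onto that scheme from a single urlRoot (a sketch; '/models' is the assumed base path):

var Item = Backbone.Model.extend({ urlRoot: '/models' });

var fresh = new Item({ name: 'x' });
fresh.save();                    // POST   /models     (no id yet, so create)

var existing = new Item({ id: 7 });
existing.fetch();                // GET    /models/7
existing.save({ name: 'y' });    // PUT    /models/7
existing.destroy();              // DELETE /models/7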
If you need to update every model of a collection on the server at once, maybe you need to ask why, and some rethinking of the model structure might be in order. Maybe there should be a separate model holding that common information.
As suggested by Bart, you could implement a PATCH method to update only changed attributes of a particular element, thus saving bandwidth.
I like the first option, but I'd recommend you implement a PATCH behavior (only send updated attributes) to keep the requests as small as possible. This method gives you a more native "auto-save" feel like Google Docs. Of course, this all depends on your app and what you are doing.
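Recent versions of Backbone (0.9.9 and later) support this out of the box: pass {patch: true} to save and only the attributes you hand in are sent, as a PATCH request:

model.save({ title: 'New title' }, { patch: true });
// -> PATCH /models/:id with body { "title": "New title" }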

Dereferencing objects with angularJS' $resource

I'm new to AngularJS and I am currently building a webapp using a Django/Tastypie API. This webapp works with posts, and an API call (GET) looks like:
{
    title: "Bootstrap: wider input field - Stack Overflow",
    link: "http://stackoverflow.com/questions/978...",
    author: "/v1/users/1/",
    resource_uri: "/v1/posts/18/"
}
To request these objects, I created an Angular service which embeds resources declared like the following one:
Post: $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/')
Everything works like a charm but I would like to solve two problems :
How do I properly replace the author field with its value? In other words, how do I resolve every reference field as automatically as possible?
How do I cache this value to avoid several calls to the same endpoint?
Again, I'm new to AngularJS and I might have missed something in my understanding of the $resource object.
Thanks,
Maxime.
With regard to question one, I know of no trivial, out-of-the-box solution. I suppose you could use custom response transformers to launch subsidiary $resource requests, attaching promises from those requests as properties of the response object (in place of the URLs). Recent versions of the $resource factory allow you to specify such a transformer for $resource instances. You would delegate to the global default response transformer ($httpProvider.defaults.transformResponse) to get your actual JSON, then substitute properties and launch subsidiary requests from there. And remember, when delegating this way, to pass along the first TWO, not ONE, parameters your own response transformer receives when it is called, even though the documentation mentions only one (I learned this the hard way).
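A rough sketch of that idea, assuming a custom get action on the Post resource and a hypothetical User resource for /v1/users/:id/ (how the id is pulled out of the author URI is also an assumption about your URI layout):

// helper in the spirit of the $http docs: run the default transforms, then ours
function appendTransform(defaults, transform) {
    defaults = angular.isArray(defaults) ? defaults : [defaults];
    return defaults.concat(transform);
}

var Post = $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/', {}, {
    get: {
        method: 'GET',
        transformResponse: appendTransform($http.defaults.transformResponse, function (post) {
            if (post && post.author) {
                // replace the author URI with a promise-backed User resource
                post.author = User.get({ id: post.author.split('/')[3] });
            }
            return post;
        })
    }
});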
There's no way to know when every last promise has been fulfilled, but I presume you won't have any code that will depend on this knowledge (as it is common for your UI to just be bound to bits and pieces of the model object, itself a promise, returned by the original HTTP request).
As for question two, I'm not sure whether you're referring to your main object (in which case $scope should suffice as a means of retaining a reference) or these subsidiary objects that you propose to download as a means of assembling an aggregate on the client side. Presuming the latter, I guess you could do something like maintaining a hash relating URLs to objects in your $scope, say, and have the success functions on your subsidiary $resource requests update this dictionary. Then you could make the response transformer I described above check the hash first to see if it's already got the resource instance desired, getting the $resource from the back end only when such a local copy is absent.
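The cache could be as simple as a hash keyed by resource URI, consulted before firing the subsidiary request (a sketch; the names and the id extraction are arbitrary):

var userCache = {};

function getUser(uri) {
    if (!userCache[uri]) {
        // only hit the endpoint the first time this URI is seen
        userCache[uri] = User.get({ id: uri.split('/')[3] });
    }
    return userCache[uri];
}

Alternatively, $resource actions accept a cache option (passed through to $http), which caches GET responses by URL and may be enough on its own.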
This is all a bunch of work (and round trips) to assemble resources on the client side when it might be much easier just to assemble your aggregate in your application layer and serve it up pre-cooked. REST sets no expectations for data normalization.

extend Backbone.sync to batch sync?

What technique shall one use to implement batch insert/update for Backbone.sync?
I guess it depends on your usage scenarios, and how much you want to change the calling code. I think you have two options:
Option 1: No Change to client (calling) code
Oddly enough the annotated source for Backbone.sync gives 'batching' as a possible reason for overriding the sync method:
Use setTimeout to batch rapid-fire updates into a single request.
Instead of actually saving on sync, add the request to a queue, and only batch-save every so often. _.throttle or _.delay might help you here.
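A rough sketch of that queueing idea (the /batch endpoint and payload shape are assumptions about your server; note that queued saves no longer hand an xhr back to the caller):

var originalSync = Backbone.sync;
var queue = [];

// flush at most once per second, sending everything queued so far in one request
var flush = _.throttle(function () {
    var batch = queue.splice(0, queue.length);
    Backbone.ajax({
        url: '/batch',                      // assumed batch endpoint
        type: 'POST',
        contentType: 'application/json',
        data: JSON.stringify(batch)
    });
}, 1000, { leading: false });

Backbone.sync = function (method, model, options) {
    if (method === 'update') {
        queue.push(model.toJSON());
        flush();
        return;
    }
    return originalSync(method, model, options);
};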
Option 2: Change client code
Alternatively, instead of calling save on your models, you could add some sort of save method to collections. You'd have to track which models were actually modified and hence in need of update, since as far as I can tell, Backbone only knows whether they're new or not (but I could be wrong about that).
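A sketch of that second option, with deliberately naive dirty-tracking (the /batch URL and payload shape are assumptions):

var BatchCollection = Backbone.Collection.extend({
    initialize: function () {
        this._dirty = {};
        // remember every model that changes, forget it once it syncs
        this.on('change', function (model) { this._dirty[model.cid] = model; }, this);
        this.on('sync', function (model) { delete this._dirty[model.cid]; }, this);
    },
    saveChanged: function () {
        var changed = _.values(this._dirty);
        this._dirty = {};
        return Backbone.ajax({
            url: '/batch',
            type: 'PUT',
            contentType: 'application/json',
            data: JSON.stringify(_.invoke(changed, 'toJSON'))
        });
    }
});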
Here's how I did it
Backbone.originalSync = Backbone.sync;

Backbone.sync = function (method, model, options) {
    //
    // code to extend sync
    //
    // call the original sync and return its result (the jqXHR),
    // so success/error callbacks and promise chaining keep working
    return Backbone.originalSync(method, model, options);
};
It works fine for me, and I use it to control every Ajax request coming out of any model or collection.
