I'm trying to find a generic solution to commit/rollback a model alongside ng-resource.
When to commit/save model:
Successful http write methods (PUT, POST, PATCH).
When to rollback/reset model:
Failing http write methods.
User deciding to cancel their local changes (before PUT).
I've found pieces of this solution scattered, but no comprehensive strategy.
My initial strategy was to make use of the default ng-resource cache. The controller starts by GETting an item. If a subsequent PUT/POST/PATCH failed (caught either in the controller's promise, or in the $resource.<write_method>'s interceptor.responseError), I would re-run the GET that first fetched the object and replace my model with the result, as on initial load. However, this doesn't help if a successful PUT is followed by a failed one, because I'd be replacing my model with a stale object. The same problem occurs when a user tries to cancel their form changes before submitting... the last successful GET may be stale.
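For concreteness, here is a rough sketch of that initial strategy (controller code, assuming $resource and $q are injected; the resource name, URL, and id field are illustrative). Again, this is the approach that breaks down once a successful PUT has made the original GET stale:

var Location = $resource('/v1/locations/:id', { id: '@location_id' }, {
    update: {
        method: 'PUT',
        interceptor: {
            responseError: function (rejection) {
                // on a failed write, re-run the original GET and swap the result into scope
                $scope.location = Location.get({ id: $scope.location.location_id });
                return $q.reject(rejection);
            }
        }
    }
});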
I've got a model with ~10 properties, and multiple <form>s (for html/design's sake) that all interact with this one object that gets PUT upon any of the forms being submitted:
{
    'location_id': 1234,
    'address': {
        'line1': '123 Fake St',
        ...
    },
    'phone': '707-123-4567',
    ...
}
Models already have nice rollback features, but getting ng-resource to use them seems tricky, and I really want to reset the entire object/model. Using a custom cache alongside my $resource instance seems like a decent way to go, but it's equally tricky to give user interaction a way to roll back. Another idea was to store a separate object when loading into scope ($scope.location = respData; $scope._location = angular.copy(respData);) that I could restore from when rolling back, by way of $scope.location = $scope._location;, but I really don't want to clutter my controllers site-wide with these shadow objects/functions (DRY).
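One way to keep that shadow-copy idea DRY might be to push it into a small service. A minimal sketch, with illustrative names (SnapshotService, track, commit, rollback are my assumptions, not an existing API):

angular.module('app').factory('SnapshotService', function () {
    return {
        track: function (obj) {
            var snapshot = angular.copy(obj);
            return {
                commit: function () { snapshot = angular.copy(obj); },   // after a successful write
                rollback: function () { angular.copy(snapshot, obj); }   // on failure or user cancel
            };
        }
    };
});

// usage in a controller (Location.update is an assumed custom PUT action):
// $scope.location = respData;
// var tracker = SnapshotService.track($scope.location);
// Location.update($scope.location).$promise.then(tracker.commit, tracker.rollback);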
Related
In my never-ending quest to do things the "proper" angular way, I have been reading a lot about how to have controllers observe the changes in models held in angular services.
Some sites say using a $watch on a controller is categorically wrong:
DON'T use $watch in a controller. It's hard to test and completely unnecessary in almost every case. Use a method on the scope to update the value(s) the watch was changing instead.
Others seem fine with it as long as you clean up after yourself:
The $watch function itself returns a function which will unbind the $watch when called. So, when the $watch is no longer needed, we simply call the function returned by $watch.
There are SO questions and other reputable sites that seem to say right out that using a $watch in a controller is a great way to notice changes in an angular-service-maintained model.
The https://github.com/angular/angular.js/wiki/Best-Practices site, which I think we can give a bit more weight to, says outright that $scope.$watch should replace the need for events. However, for complex SPAs handling upwards of 100 models and REST endpoints, choosing $watch to avoid events with $broadcast/$emit could end up creating lots of watches. On the other hand, if we don't use $watch, non-trivial apps end up with tons of event spaghetti.
Is this a lose/lose situation? Is it a false choice between events and watches? I know you can use the 2-way binding for many situations, but sometimes you just need a way to listen for changes.
EDIT
Ilan Frumer's comment made me rethink what I'm asking, so perhaps instead of just asking whether it is subjectively good/bad to use a $watch in a controller, let me put the questions this way:
Which implementation is likely to create a performance bottleneck first? Having controllers listen for events (which had to have been broadcast/emitted), or setting up $watch-es in controllers. Remember, large-scale app.
Which implementation creates a maintenance headache first: $watch-es or events? Arguably there is coupling (tight/loose) either way... event listeners need to know what to listen for, and $watch-es on external values (like MyDataService.getAccountNumber()) likewise need to know about things happening outside their $scope.
EDIT over a year later
Angular has changed / improved a lot since I asked this question, but I still get +1's on it, so I thought I would mention that in looking at the angular team's code, I see a pattern when it comes to watchers in controllers (or directives where there is a scope that gets destroyed):
$scope.$on('$destroy', $scope.$watch('scopeVariable', functionIWantToCall));
What this does is take what the $watch function returns - a function that can be called to deregister the watcher - and hand it to the $destroy event handler, so the watcher is automatically cleaned up when the controller is destroyed.
Whether watches in controllers are a code smell or not, if you use them, I believe the angular team's use of this pattern should serve as a strong recommendation for how to use them.
Thanks!
I use both, because honestly, I view them as different tools for different problems.
I'll give an example from an application that I built. I had a complex WebSocket Service that received dynamic data models from a web-socket server. The service itself doesn't care what the model looks like, but, of course, the controller sure does.
When the controller was initiated, it set up a $watch on the service's data object so that it knew when its particular data object had arrived (like waiting for Service.data.foo to exist). As soon as that model came into existence, the controller bound to it and created a two-way data-binding; the watch became obsolete and was destroyed.
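A minimal sketch of that self-destroying watch, assuming a WebSocketService exposing a data object (the names are illustrative):

var unwatch = $scope.$watch(function () {
    return WebSocketService.data.foo;          // wait for the model to arrive
}, function (newVal) {
    if (newVal !== undefined) {
        $scope.foo = newVal;                   // bind it; two-way binding takes over from here
        unwatch();                             // the watch is now obsolete, so deregister it
    }
});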
On the other side, the service was responsible for broadcasting certain events as well, because sometimes the client would receive literal commands from the server. For instance, the server might request that the client send some metadata that was stored in the $rootScope throughout the application. An $on() listener was set up on the $rootScope during the module.run() step to listen for those commands from the server, gather needed information from other services, and call the WebSocket service back to send the data as requested. Alternatively, if I had done this using a $watch(), I would have needed to set up some arbitrary variable for it to watch, like metadataRequests, which I would need to increment every time I received a request. A broadcast achieves the same thing without having to live in permanent memory the way that variable would.
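A sketch of that run-block listener (the event name and service names are assumptions, not from the original app):

angular.module('app').run(function ($rootScope, MetadataService, WebSocketService) {
    $rootScope.$on('server:requestMetadata', function (event, request) {
        var meta = MetadataService.collect();      // gather what the server asked for
        WebSocketService.send('metadata', meta);   // call the WebSocket service back
    });
});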
Essentially, I use a $watch() when there is a specific value that I want to see change (especially if I need to know the before-and-after values), and I use events if there are more high-level conditions that have been met that the controllers need to know about.
With regards to performance, I couldn't tell you which one is going to bottleneck first, but I feel like thinking of it this way will let you use the strengths of each feature where they are strongest. For instance, if you use $on() instead of $watch() to look for changes in data, you will not have access to the values before and after the change, which could limit what you are trying to do.
All that two-way data-binding is, under the hood, is a $watch on whatever scope property you give to ng-model. ngModel has a controller that lets other directives, like input and form, sync the ng-model value and re-render the view on a change, which those directives detect by registering for DOM events.
In essence, ng-model's $watch compares the value in the model to the value it holds internally. The internal value is set by the supporting directives (input, form, etc.).
IMHO the only "events" you should react to in an angular application are user-created ones (i.e. DOM events). These are handled with directives on the DOM and ng-model linking to the model.
There is also, naturally, async work, for which angular provides $q, whose callbacks invoke a $digest.
As for performance, the angular docs say it really well: watch expressions run on every $digest, so make them fast. What's a $digest? Angular traverses all of your active scopes; each scope has watches, which it executes, performing comparisons in each. If there are diffs, it runs again (the next loop around). It's not quite that simple because it's optimized, but all of your "angular code" executes in this $digest loop. A lot of directives invoke a digest with scope.$apply(...), which causes watches on whatever value they changed to notice and do their thing.
So, your original question: is it an anti-pattern? Absolutely not, if you know how to use it. Though I'd just use ng-model, if only because it has had 1.2.10+ iterations of pretty smart people working on it... All of the other 'reactive-ness' of your app can be handled by $q, $timeout and the like.
I think they all have their proper place and, for me, it would be difficult to say stop using one for the others.
Data binding should always be used to keep your data model in sync with changes from the view. I think we can all agree on that.
I think using a watch on a controller to trigger some action based on a data change is useful: watching a complex data model to calculate a running total for an invoice, say, or watching a model to flag it as dirty.
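For instance, the invoice case might look like this (a sketch; the model names are illustrative):

$scope.$watch('invoice.items', function (items) {
    $scope.invoice.total = (items || []).reduce(function (sum, item) {
        return sum + item.qty * item.price;
    }, 0);
}, true);   // true = deep (by-value) comparison, needed to catch edits inside the array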
I have used broadcast/emit/on when sending a message or an indication of some change from one scope to another that may be several layers away. I have created a custom directive where a broadcast event has been used as a hook to take some action in a controller.
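A sketch of that directive-to-controller hook (the event name is illustrative):

// in the directive's link function:
scope.$emit('rowSelected', item);

// in a controller several layers up the scope chain:
$scope.$on('rowSelected', function (event, item) {
    $scope.selected = item;
});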
What is a good general practice when some action changes multiple models in Backbone.js:
Trigger multiple PUT requests, one for each model.save()
Single request to sync the entire collection
If more than one model has changed, it should definitely be the second option.
Usually, good REST API practice suggests that you should update, save, create, and delete single instances of persistent elements. In fact, you will find that a Backbone.Collection object does not implement these methods.
Also, if you use a standard URI scheme for your data access point, you will notice that a collection does not have a unique id.
GET /models //to get all items,
GET /models/:id //to read an element,
PUT /models/:id //to update an element,
POST /models //to create an element,
DELETE /models/:id //to delete an element.
If you need to update every model of a collection on the server at once, maybe you need to ask why and there might be some re-thinking of the model structure. Maybe there should be a separate model holding that common information.
As suggested by Bart, you could implement a PATCH method to update only changed attributes of a particular element, thus saving bandwidth.
I like the first option, but I'd recommend you implement a PATCH behavior (only send updated attributes) to keep the requests as small as possible. This method gives you a more native "auto-save" feel like Google Docs. Of course, this all depends on your app and what you are doing.
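Backbone (0.9.9+) supports this out of the box: passing {patch: true} to save() issues an HTTP PATCH request containing only the attributes you pass in. For example:

// sends PATCH /models/:id with body {"phone": "707-123-4567"}
model.save({ phone: '707-123-4567' }, { patch: true });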
I'm new to AngularJS and I am currently building a webapp using a Django/Tastypie API. This webapp works with posts and an API call (GET) looks like :
{
    title: "Bootstrap: wider input field - Stack Overflow",
    link: "http://stackoverflow.com/questions/978...",
    author: "/v1/users/1/",
    resource_uri: "/v1/posts/18/",
}
To request these objects, I created an angular's service which embed resources declared like the following one :
Post: $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/')
Everything works like a charm, but I would like to solve two problems:
How do I properly replace the author field with its value? In other words, how do I resolve every reference field as automatically as possible?
How do I cache these values to avoid several calls to the same endpoint?
Again, I'm new to AngularJS and I might have missed something in my comprehension of the $resource object.
Thanks,
Maxime.
With regard to question one, I know of no trivial, out-of-the-box solution. I suppose you could use custom response transformers to launch subsidiary $resource requests, attaching promises from those requests as properties of the response object (in place of the URLs). Recent versions of the $resource factory allow you to specify such a transformer for $resource instances. You would delegate to the global default response transformer ($httpProvider.defaults.transformResponse) to get your actual JSON, then substitute properties and launch subsidiary requests from there. And remember, when delegating this way, to pass along the first TWO, not ONE, parameters your own response transformer receives when it is called, even though the documentation mentions only one (I learned this the hard way).
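A sketch of what that delegation might look like, assuming a hypothetical Post resource (the URL, factory name, and author field are illustrative, taken from the question above):

angular.module('app').factory('Post', function ($resource, $http) {
    var defaults = $http.defaults.transformResponse;
    function applyDefaults(data, headers) {
        // the defaults may be a single function or an array of them
        angular.forEach(angular.isArray(defaults) ? defaults : [defaults], function (fn) {
            data = fn(data, headers);      // pass BOTH parameters along, not just one
        });
        return data;
    }
    return $resource('/v1/posts/:id/', { id: '@id' }, {
        get: {
            method: 'GET',
            transformResponse: function (data, headers) {
                var post = applyDefaults(data, headers);   // now plain parsed JSON
                // launch the subsidiary request, attaching its promise in place of the URL
                post.author = $http.get(post.author).then(function (resp) {
                    return resp.data;
                });
                return post;
            }
        }
    });
});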
There's no way to know when every last promise has been fulfilled, but I presume you won't have any code that will depend on this knowledge (as it is common for your UI to just be bound to bits and pieces of the model object, itself a promise, returned by the original HTTP request).
As for question two, I'm not sure whether you're referring to your main object (in which case $scope should suffice as a means of retaining a reference) or these subsidiary objects that you propose to download as a means of assembling an aggregate on the client side. Presuming the latter, I guess you could do something like maintaining a hash relating URLs to objects in your $scope, say, and have the success functions on your subsidiary $resource requests update this dictionary. Then you could make the response transformer I described above check the hash first to see if it's already got the resource instance desired, getting the $resource from the back end only when such a local copy is absent.
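A sketch of that hash, keyed by URL and storing promises, so that concurrent requests for the same resource also collapse into a single call (the names are illustrative):

var cache = {};
function getCached(url) {
    if (!cache[url]) {
        cache[url] = $http.get(url).then(function (resp) { return resp.data; });
    }
    return cache[url];    // later hits reuse the same promise: one request per URL
}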
This is all a bunch of work (and round trips) to assemble resources on the client side when it might be much easier just to assemble your aggregate in your application layer and serve it up pre-cooked. REST sets no expectations for data normalization.
What technique shall one use to implement batch insert/update for Backbone.sync?
I guess it depends on your usage scenarios, and how much you want to change the calling code. I think you have two options:
Option 1: No Change to client (calling) code
Oddly enough the annotated source for Backbone.sync gives 'batching' as a possible reason for overriding the sync method:
Use setTimeout to batch rapid-fire updates into a single request.
Instead of actually saving on sync, add the request to a queue, and only batch-save every so often. _.throttle or _.delay might help you here.
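A sketch of that queue-and-flush idea, using _.debounce (a close relative of the suggested _.throttle); the /models/batch endpoint is an assumption for illustration:

var queue = [];
var flush = _.debounce(function () {
    var batch = queue;
    queue = [];
    Backbone.ajax({
        url: '/models/batch',
        type: 'PUT',
        contentType: 'application/json',
        data: JSON.stringify(_.map(batch, function (m) { return m.toJSON(); }))
    });
}, 100);

var originalSync = Backbone.sync;
Backbone.sync = function (method, model, options) {
    if (method === 'update') {        // queue updates; pass everything else through
        queue.push(model);            // note: callers' success callbacks won't fire
        flush();                      // for queued saves in this sketch
        return;
    }
    return originalSync.apply(this, arguments);
};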
Option 2: Change client code
Alternatively, instead of calling save on your models, you could add some sort of save method to collections. You'd have to track which models were actually modified and hence in need of update, since as far as I can tell, Backbone only knows whether they're new or not (but I could be wrong about that).
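One way to track that yourself is to listen for 'change' events on the collection; a sketch (the saveChanged name is an assumption):

var TrackedCollection = Backbone.Collection.extend({
    initialize: function () {
        this._dirty = {};
        // collections proxy their models' events, so these fire once per model
        this.on('change', function (model) { this._dirty[model.cid] = model; }, this);
        this.on('sync', function (model) { delete this._dirty[model.cid]; }, this);
    },
    saveChanged: function () {
        _.invoke(_.values(this._dirty), 'save');   // save only the modified models
    }
});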
Here's how I did it
Backbone.originalSync = Backbone.sync;
Backbone.sync = function (method, model, options) {
    //
    // code to extend sync
    //
    // call the original sync, returning its result (the jqXHR)
    // so callers of save()/fetch() can still use it
    return Backbone.originalSync(method, model, options);
};
It works fine for me, and I use it to control every AJAX request coming out of any model or collection.
Caveat: I'm working with a backend that I don't have full control over, so I'm wrestling with a few considerations within Backbone that might be better addressed elsewhere...unfortunately, I have no choice but to handle them here!
So, my problem is that I'd really like to validate user input from a form (when I set values with it on Backbone models), but the models I receive from the API on newly created objects (via posts that ONLY accept a name, and ONLY return a name and object id) will fail my validation checks.
As example, when a new object is created in the database, two key fields are populated as empty strings (and so when Backbone hits the API and populates the models, it populates those keys with empty strings). When the user saves these objects back, post-edit, I'd like to force them to enter values for these two keys -- which is very easy, given Backbone's built in validation method.
The problem, of course, is that the validation is firing on both fetch->set (unwanted behavior) and set->save (desired behavior) -- and so newly created models won't load at all...Backbone collects them, validation fails, and errors fire.
So, my question is: is there a "Backbone-y" way to only validate the models on set->save, not on fetch->set? Could I use a specific trigger to work around this?
Any ideas would be greatly appreciated.
Backbone.Model.set won't perform validation if you pass in { silent: true }, and fetch will pass any options through to set, so you could either override fetch or write your own fetchSilent method that passes that in an options object.
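A sketch of that fetchSilent idea (the method name is an assumption):

var MyModel = Backbone.Model.extend({
    fetchSilent: function (options) {
        // fetch passes its options through to set, so set skips validation
        return this.fetch(_.extend({ silent: true }, options));
    }
});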
However, you might run into a slight gotcha with Backbone.Collection.fetch, because when it receives attributes from the server, it doesn't create the new models with set. Instead, it creates a new model with model = new this.model(attrs, {collection: this}); and then performs validation if there's a validate method on the object.
This is a little annoying. You can get around it by defining a parse method on your collection (if you're using one) that creates a model silently (using {silent: true}), because when Backbone.Collection.add receives a fully formed Backbone model, it won't run the validation. (see the _add and _prepareModel methods in the annotated source).
It's a little annoying that the collection works that way, but (for now at least) it is what it is.
Instead of overriding fetch, you can take another approach:
When you validate your model, check for the silent attribute and only validate when it isn't set.
So you do the following when you want to fetch a model:
var test = new MyModel({ id: '123', silent: true });

// in your Model's validate function
validate: function (attrs) {
    if (!attrs.silent) {
        // validate logic here
    }
}
Then you can fetch the model. After you get your model, you can unset silent so future validation runs.