I have read every article on $q out there but somehow I am unable to grasp it. For example take the best article I found. I get as far as:
var myFirstDeferred = $q.defer(); //Actually I don't even get this far
A deferred represents the result of an asynchronous operation. It exposes an interface that can be used for signaling the state and the result of the operation it represents. It also provides a way to get the associated promise instance.
What's a 'deferred'? How is it different from a promise? Then we get to this:
async(function(value) {
    myFirstDeferred.resolve(value);
}, function(errorReason) {
    myFirstDeferred.reject(errorReason);
});
I have NO idea what this is doing. I want to stress that I understand async. I understand the promise structure. For example I know exactly what the code below is doing:
$http.get(url)
    .then(function(result) { return result; },
          function(error) { return error; });
But what are we doing with the deferred in the object above? Why even have the deferred at all? Why not just the then block?
Edit: I want to stress that I went through a ton of replies here, as well as the articles. I thought the point of $q was to force the execution of an async call (kind of like "await" in C#) and then perform some code afterwards. I really don't understand how it does that. I do get how the .all command works when waiting for multiple async operations in this example, but not with one.
Edit: This edit is in response to the duplicate suggestion - I disagree with the notion. For one, this question is more focused and limited in scope. In addition, the answer accepted here clarifies far better (imo) than the wide-net answer in the other q.
Typically you don't have to use a deferred explicitly. If you find yourself using $q.defer you should question why you are not using the promise interface directly. See the bluebird documentation for a description of why. $http and many other libraries use deferreds internally and return promises from the methods they expose so using deferreds on top of this is unnecessary.
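As a rough sketch of what that antipattern looks like in practice (the getUser name and URL here are made up for illustration), compare wrapping an $http call in a deferred with simply chaining off the promise $http already returns:
// the "deferred antipattern": wrapping a promise-returning call in a new deferred
function getUser(id) {
    var deferred = $q.defer();
    $http.get('/api/users/' + id).then(function (response) {
        deferred.resolve(response.data);
    }, function (error) {
        deferred.reject(error);
    });
    return deferred.promise;
}

// equivalent, without the deferred: just return the chained promise
function getUser(id) {
    return $http.get('/api/users/' + id).then(function (response) {
        return response.data;
    });
}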
That being said, deferreds can be useful and understanding them is important. Typically, deferreds expose two methods, .reject and .resolve, which you use to mark the result of an asynchronous operation:
.reject -> the asynchronous operation failed or could not complete
.resolve -> the asynchronous operation completed successfully
When a deferred is completed, it triggers the promise callbacks.
You need to use $q.defer or deferreds when dealing with operations that are asynchronous but do not have a built-in promise interface, such as timers:
var dfd = $q.defer();

$timeout(function () {
    // this is asynchronous
    // it completed successfully
    dfd.resolve("some value");
}, 500);

dfd.promise.then(function (value) {
    assert.equal(value, "some value");
});
Rather than using a deferred as an object interface, Angular also lets you take resolve (and reject) as function arguments by calling $q directly with an executor function.
Rewriting the above:
var promise = $q(function (resolve) {
    $timeout(function () {
        resolve("some value");
    }, 500);
});

promise.then(function (value) {
    assert.equal(value, "some value");
});
A deferred object is just the write side of a promise: it gives you .resolve and .reject and nothing else, so the outside world can't do anything else with it. If you can't return a promise (like when using a library built on callbacks, not promises), make a deferred and manually resolve or reject it.
deferred.reject and deferred.resolve only matter when you return the deferred's promise. Basically:
somePromise().then(function() {
    var deferred = $q.defer();
    someFunctionThatHasCallback(function() {
        deferred.resolve();
    });
    return deferred.promise;
}).then(function() {
    console.log('I wait for deferred.resolve');
});
If your functions return promises, you don't need deferreds, because promises are chainable by returning them inside .then:
somePromise().then(function() {
    return someFunctionThatReturnsPromise();
}).then(function() {
    console.log('I wait for promise to resolve');
});
Related
I am struggling with chaining promises using $timeouts. I would like to have a "$timeout(myFunction,1000).then()" function that fires only when ALL chained timeouts returned by myFunction are resolved.
This code snippet contains different stuff I tried and I would like to achieve:
$timeout(myFunction, 1000).then(function(myPromise) {
    console.log("I would like this message to appear when ALL chained promises are resolved, without knowing in advance how many chained promises there are. In this case this would be after 1500 ms, not 1000 ms");
    myPromise.then(function() {
        console.log("with this code pattern I get a message after 1500 ms, which is what I want, but this does not work anymore if myOtherFunction returned a third chained $timeout");
    });
});
myFunction = function() {
    console.log("hi, i am launching another timeout");
    return $timeout(myOtherFunction, 500);
};

myOtherFunction = function() {
    console.log("1500 ms have passed");
};
How should I fix my code? Thanks!
Return promises to the success handler:
$timeout(angular.noop, 1000).then(function() {
    console.log("It is 1000ms");
    var delay = 500;
    return myPromise(delay);
    // ^^^^^^ return promise for chaining
}).then(function() {
    console.log("this happens after myPromise resolves");
});

function myPromise(delay) {
    var promise = $timeout(angular.noop, delay);
    return promise;
}
Because calling the .then method of a promise returns a new derived promise, it is easily possible to create a chain of promises. It is possible to create chains of any length and since a promise can be resolved with another promise (which will defer its resolution further), it is possible to pause/defer resolution of the promises at any point in the chain. This makes it possible to implement powerful APIs.
-- AngularJS $q Service API Reference, Chaining Promises
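To see the "resolved with another promise" part of that quote in action, here is a minimal sketch (the 500 ms delay is arbitrary): resolving a deferred with a promise defers its resolution until that inner promise settles.
var outer = $q.defer();

// resolve the outer deferred with another promise;
// the outer promise now settles only when the inner one does
outer.resolve($timeout(function () {
    return "inner value";
}, 500));

outer.promise.then(function (value) {
    console.log(value); // "inner value", roughly 500 ms later
});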
Inspired by georgeawg's answer, I created a custom timeout function that chains onto the promise returned by fct instead of the promise returned by $timeout. I did this to keep the $timeout syntax.
vm.customTimeout = function (fct, timeout){
return $timeout(fct, timeout).then(function(myReturnedPromise){
return myReturnedPromise
});
}
This function is sufficient to solve my problem above. I can chain as many customTimeouts as I want.
Example:
vm.customTimeout(myFunction, 1000).then(function() {
    console.log("Without knowing the content of myFunction, I am 100% sure that every single chained promise returned by myFunction is resolved before executing this code, which is quite nice!");
    var activity1 = anyFunctionReturningAPromise(100);
    var activity2 = anyFunctionReturningAPromise(1000);
    return $q.all([activity1, activity2]);
}).then(function() {
    console.log("executes when customTimeout, activity1 & activity2 are all resolved.");
});

anyFunctionReturningAPromise = function(delay) {
    return vm.customTimeout(myFunction, delay);
};
Feel free to comment what you think of it.
I hope this will be useful for someone else :)
I still can't understand the role of the $q service (what exactly does it add) if you want to create a service that needs to call only one API via $http. In this situation, I don't know why I shouldn't just do the following (without using $q):
this.getMovie = function(movie) {
    return $http.get('/api/v1/movies/' + movie)
        .then(
            function(response) {
                return {
                    title: response.data.title,
                    cost: response.data.price
                };
            },
            function(httpError) {
                // translate the error
                throw httpError.status + " : " +
                      httpError.data;
            });
};
Very good question and one very few people appreciate the answer to.
Consider this:
this.getMovie = function(movie) {
return $http.get('/api/v1/movies/' + movie);
};
Great code but these rules apply:
$http will resolve for 2xx responses and will otherwise reject. Sometimes we don't want this; we want to reject on resolution and resolve on rejection. This makes sense when you consider a HEAD request that checks the existence of something.
A HEAD request to /book/fred returning 200 shows that book fred exists. But if my function is testing whether book fred is unique, it is not, and so we want to reject on a 200. This is where $q comes in. Now I can do this:
var defer = $q.defer();

$http.head('/book/fred').then(function(response) {
    // 2xx response so reject because it's not unique
    defer.reject(response);
}).catch(function(failResponse) {
    defer.resolve(failResponse);
});

return defer.promise;
$q gives me total control of when I reject AND when I resolve.
Also, $q allows me to reject or resolve with any value. So if I am only interested in some of the response I can resolve with just the bit I am interested in.
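For example, a minimal sketch reusing the HEAD request above, resolving with just the status code rather than the whole response object:
var defer = $q.defer();
$http.head('/book/fred').then(function (response) {
    // only the status code is interesting here
    defer.resolve(response.status);
}, function (failResponse) {
    defer.reject(failResponse.status);
});
return defer.promise;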
Finally, I can use $q to turn non-promise based code into a promise.
var a = 5;
var b = 10;
var defer = $q.defer();
defer.resolve(a + b);
return defer.promise;
Bosh, if I need a promise as my return value then I've got one.
This is also great when mocking for unit tests.
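For instance, a rough Jasmine-style sketch (the module, controller, and service names here are invented, and it assumes a controller that assigns the resolved movie to $scope.movie): the mock hands back a $q promise, so the controller under test never touches the network.
describe('MovieController', function () {
    beforeEach(module('myApp'));   // hypothetical module that registers MovieController

    it('uses whatever the service resolves with', inject(function ($q, $rootScope, $controller) {
        var mockService = {
            getMovie: function () { return $q.when({ title: 'Up', cost: 10 }); }
        };
        var scope = $rootScope.$new();
        $controller('MovieController', { $scope: scope, movieService: mockService });

        // $q promises only settle during a digest, so trigger one
        $rootScope.$apply();
        expect(scope.movie.title).toBe('Up');
    }));
});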
AngularJS services such as $http, $timeout, $resource, etc. use the $q service internally to generate their promises. With those services there is generally no need to inject the $q service. In fact, if you see $q.defer being used with those services, you should be asking: is this a “Deferred Antipattern”?
There are some methods of the $q service that are useful in certain circumstances.
The $q.all method is used to wait on several promises.
var promise1 = $http.get(url1);
var promise2 = $http.get(url2);
$q.all([promise1, promise2]).then(function(responseArray) {
    $scope.data1 = responseArray[0].data;
    $scope.data2 = responseArray[1].data;
}).catch(function(firstError) {
    console.log(firstError.status);
});
The .catch method can be used to convert a rejected promise to a fulfilled promise. And vice versa with the .then method. No need to use $q.defer for that. For more information, see Angular execution order with $q.
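A minimal sketch of that conversion (the URL is a placeholder): returning a value from a rejection handler fulfills the derived promise, and returning $q.reject from a success handler rejects it.
// rejected -> fulfilled: recover by returning a value from .catch
$http.get('/some/url').catch(function (error) {
    return { data: [] };               // derived promise is fulfilled with this
});

// fulfilled -> rejected: turn a 2xx response into a failure
$http.head('/some/url').then(function (response) {
    return $q.reject('not unique');    // derived promise is rejected with this
});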
The $q.when method is useful for generating a promise from unknown sources.
var promise = $q.when(ambiguousAPI(arg1));
The $q.when method creates a $q promise in all cases whether ambiguousAPI returns a value, a $q promise, or a promise from another library.
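A short usage sketch (ambiguousAPI is the same hypothetical function as above):
$q.when(ambiguousAPI(arg1)).then(function (value) {
    // value arrives the same way whether ambiguousAPI returned a plain
    // value, a $q promise, or a promise from another library
    $scope.data = value;
});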
Because calling the .then method of a promise returns a new derived promise, it is easily possible to create a chain of promises. It is possible to create chains of any length and since a promise can be resolved with another promise (which will defer its resolution further), it is possible to pause/defer resolution of the promises at any point in the chain. This makes it possible to implement powerful APIs. -- AngularJS $q Service API Reference
To summarize: the $q service is used to create a promise, so when using services (like $http, $timeout, $resource, etc.) that already return promises, you generally don't need to use the $q service.
In this case you certainly don't need it, because $http.get itself returns a promise. But it is useful if, for example, you perform the async call only under some condition:
function asyncService() {
    if (dataLoaded) return $q.resolve(data);
    return $http.get('path/to/load/data');
}
In this case, even if you do not perform the async call, you can still use:
asyncService().then(function(data) {
    console.log(data);
});
This is only one of many examples. It is also useful to use $q promises when you make async requests with other libraries, like the AWS SDK, for instance.
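As a sketch of that last point, assuming some callback-style client object with a Node-style getObject(params, callback) method (the names are illustrative, not a specific SDK's API), a deferred bridges the callback into a $q promise:
function fetchObject(params) {
    var deferred = $q.defer();
    client.getObject(params, function (err, data) {
        // if this callback fires outside Angular's digest cycle, you may need
        // to wrap the resolve/reject in $rootScope.$apply(...)
        if (err) {
            deferred.reject(err);
        } else {
            deferred.resolve(data);
        }
    });
    return deferred.promise;
}

fetchObject({ key: 'report.csv' }).then(function (data) {
    $scope.report = data;
});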
$http makes an asynchronous call. So, ideally, if you want to keep the value of the response from the URL, you should use a $scope variable to store it:
$http("your url").then(function(response){
//This is the success callback
$scope.movies = response.data;
});
Watching a lot of Egghead.io videos, I noticed that a common pattern is to return a custom promise and resolve it in the callbacks.
.factory('myFact', function($q, $http) {
return {
getData: function() {
var deferred = $q.defer();
$http.get('/path/to/api')
.success(function(data) {
deferred.resolve(data);
});
return deferred.promise;
}
};
});
I would normally write this as:
.factory('myFact', function($http) {
return {
getData: function() {
return $http.get('/path/to/api')
.then(function(res) {
return res.data;
});
}
};
});
Is there any advantage to returning a $q.defer() promise rather than an $http promise? The approaches look identical to me.
No, no advantages; it's the same. In your first code snippet you created a $q.defer() instance and then invoked its resolve() method to create a resolved promise.
That is the process you need to know and go through in AngularJS when working with asynchronous functions and future objects, that is, objects that will have different values or new data at some future moment; interested parties in your app may need access to the result of the deferred task when it completes.
Now when working with $http you don't have to do any of that, because it already returns a promise whose then() method you can invoke directly, unless you have a different way of doing things and need to implement a different approach.
But not all AngularJS services do that work for you. Take a look at $resource, for example, which wraps $http for use in RESTful web API scenarios. $resource will not return a resolved promise; you do get a promise, but you'll need to do the last step of resolving it yourself (check this stack question or this one, and maybe this article about Amber Kaplan's own experience working with REST).
So the way you are doing it is good; that is how I do it too when working with $http. But the first code snippet is the one we will all be searching for when we need to do things differently with $http, or when forcing other services to 'work with' or 'work like' AJAX.
I'm writing a service that will retrieve data asynchronously ($http or $resource). I can hide the fact that it is asynchronous by returning an array that will initially be empty, but that will eventually get populated:
.factory('NewsfeedService1', ['$http', function($http) {
var posts = [];
var server_queried = false;
return {
posts: function() {
if(!server_queried) {
$http.get('json1.txt').success(
function(data) {
server_queried = true;
angular.copy(data, posts);
});
}
return posts;
}
};
}])
.controller('Ctrl1', ['$scope','NewsfeedService1',
function($scope, NewsfeedService1) {
$scope.posts = NewsfeedService1.posts();
}])
Or I can expose the asynchronicity by returning a promise:
.factory('NewsfeedService2', ['$http', function($http) {
var posts = [];
var server_queried = false;
var promise;
return {
posts_async: function() {
if(!promise || !server_queried) {
promise = $http.get('json2.txt').then(
function(response) {
server_queried = true;
posts = response.data;
return posts;
});
}
return promise;
}
};
}])
.controller('Ctrl2', ['$scope','NewsfeedService2',
function($scope, NewsfeedService2) {
NewsfeedService2.posts_async().then(
function(posts) {
$scope.posts = posts;
});
// or take advantage of the fact that $q promises are
// recognized by Angular's templating engine:
// (note that Peter and Pawel's AngularJS book recommends against this, p. 100)
$scope.posts2 = NewsfeedService2.posts_async();
}]);
(Plunker - if someone wants to play around with the above two implementations.)
One potential advantage of exposing the asynchronicity would be that I can deal with errors in the controller by adding an error handler to the then() method. However, I'll likely be catching and dealing with $http errors in an application-wide interceptor.
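For reference, a minimal sketch of such an application-wide interceptor, assuming app is the module (the logging is just a placeholder):
app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.interceptors.push(['$q', function ($q) {
        return {
            responseError: function (rejection) {
                console.error('HTTP error', rejection.status, rejection.config.url);
                return $q.reject(rejection); // keep the rejection flowing to callers
            }
        };
    }]);
}]);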
So, when should a service's asynchronicity be exposed?
My guess is that you'll find people on both sides of this fence. Personally, I feel that you should always expose the asynchronicity of a library or function (or more correctly: I feel that you should never hide the asynchronicity of a library or function). The main reason is transparency; for example, will this work?
app.controller('MyController', function($scope, NewsfeedService) {
    $scope.posts = NewsfeedService.posts();
    doSomethingWithPosts($scope.posts); // <-- will this work?
});
If you're using the first method (e.g. $resource), it won't, even though $scope.posts is technically an array. If doSomethingWithPosts has its own asynchronous operations, you could end up with a race condition. Instead, you have to use asynchronous code anyway:
app.controller('MyController', function($scope, NewsfeedService) {
    $scope.posts = NewsfeedService.posts(function() {
        doSomethingWithPosts($scope.posts);
    });
});
(Of course, you can make the callback accept the posts as an argument, but I still think it's confusing and non-standard.)
Luckily, we have promises, and the very purpose of a promise is to represent the future value of an operation. Furthermore, since promises created with Angular's $q library can be bound to views, there's nothing wrong with this:
app.controller('MyController', function($scope, NewsfeedService) {
    $scope.posts = NewsfeedService.posts();
    // $scope.posts is a promise, but when it resolves
    // the AngularJS view will work as intended.
});
[Update: you can no longer bind promises directly to the view; you must wait for the promise to be resolved and assign a scope property manually.]
As an aside, Restangular, a popular alternative to $resource, uses promises, and AngularJS' own $resource will be supporting them in 1.2 (they may already support them in the latest 1.1.x's).
I would always go with the async option, since I don't like hiding the async nature of the underlying framework.
The sync version may look cleaner to consume, but it inadvertently leads to bugs where the developer does not realize that the call is async in nature and tries to access the data right after making the call.
SO is filled with questions where people make this mistake with $resource, considering it sync in nature and expecting a response. $resource also takes a similar approach to option 1, where results are filled in after the call completes, but $resource still exposes success and failure callbacks.
AngularJS tries to hide the complexities of async calls if promises are returned, so binding directly to a promise feels like one is doing a sync call.
I say no, because it makes it harder to work with multiple services built this way. With promises, you can use $q.all() to make multiple request and respond when all of them complete, or you can chain operations together by passing the promise around.
There would be no intuitive way to do this for the synchronous style service.
I saw some examples of Facebook Login services that were using promises to access FB Graph API.
Example #1:
this.api = function(item) {
var deferred = $q.defer();
if (item) {
facebook.FB.api('/' + item, function (result) {
$rootScope.$apply(function () {
if (angular.isUndefined(result.error)) {
deferred.resolve(result);
} else {
deferred.reject(result.error);
}
});
});
}
return deferred.promise;
}
And services that used "$scope.$digest() // Manual scope evaluation" when they got the response:
Example #2:
angular.module('HomePageModule', []).factory('facebookConnect', function() {
return new function() {
this.askFacebookForAuthentication = function(fail, success) {
FB.login(function(response) {
if (response.authResponse) {
FB.api('/me', success);
} else {
fail('User cancelled login or did not fully authorize.');
}
});
}
}
});
function ConnectCtrl(facebookConnect, $scope, $resource) {
$scope.user = {}
$scope.error = null;
$scope.registerWithFacebook = function() {
facebookConnect.askFacebookForAuthentication(
function(reason) { // fail
$scope.error = reason;
}, function(user) { // success
$scope.user = user
$scope.$digest() // Manual scope evaluation
});
}
}
JSFiddle
The questions are:
What is the difference in the examples above?
What are the reasons and cases to use $q service?
And how does it work?
This is not going to be a complete answer to your question, but hopefully this will help you and others when you try to read the documentation on the $q service. It took me a while to understand it.
Let's set aside AngularJS for a moment and just consider the Facebook API calls. Both the API calls use a callback mechanism to notify the caller when the response from Facebook is available:
facebook.FB.api('/' + item, function (result) {
if (result.error) {
// handle error
} else {
// handle success
}
});
// program continues while request is pending
...
This is a standard pattern for handling asynchronous operations in JavaScript and other languages.
One big problem with this pattern arises when you need to perform a sequence of asynchronous operations, where each successive operation depends on the result of the previous operation. That's what this code is doing:
FB.login(function(response) {
if (response.authResponse) {
FB.api('/me', success);
} else {
fail('User cancelled login or did not fully authorize.');
}
});
First it tries to log in, and then only after verifying that the login was successful does it make the request to the Graph API.
Even in this case, which is only chaining together two operations, things start to get messy. The method askFacebookForAuthentication accepts a callback for failure and success, but what happens when FB.login succeeds but FB.api fails? This method always invokes the success callback regardless of the result of the FB.api method.
Now imagine that you're trying to code a robust sequence of three or more asynchronous operations, in a way that properly handles errors at each step and will be legible to anyone else or even to you after a few weeks. Possible, but it's very easy to just keep nesting those callbacks and lose track of errors along the way.
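To make that concrete, a contrived sketch of three nested callback-based steps (all the function names here are invented):
login(function (err, session) {
    if (err) { return handleError(err); }
    loadProfile(session, function (err, profile) {
        if (err) { return handleError(err); }
        loadFriends(profile, function (err, friends) {
            if (err) { return handleError(err); }
            // three levels deep, and every step repeats the same error plumbing
            render(profile, friends);
        });
    });
});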
Now, let's set aside the Facebook API for a moment and just consider the Angular Promises API, as implemented by the $q service. The pattern implemented by this service is an attempt to turn asynchronous programming back into something resembling a linear series of simple statements, with the ability to 'throw' an error at any step of the way and handle it at the end, semantically similar to the familiar try/catch block.
Consider this contrived example. Say we have two functions, where the second function consumes the result of the first one:
var firstFn = function(param) {
// do something with param
return 'firstResult';
};
var secondFn = function(param) {
// do something with param
return 'secondResult';
};
secondFn(firstFn());
Now imagine that firstFn and secondFn both take a long time to complete, so we want to process this sequence asynchronously. First we create a new deferred object, which represents a chain of operations:
var deferred = $q.defer();
var promise = deferred.promise;
The promise property represents the eventual result of the chain. If you log a promise immediately after creating it, you'll see that it is just an empty object ({}). Nothing to see yet, move right along.
So far our promise only represents the starting point in the chain. Now let's add our two operations:
promise = promise.then(firstFn).then(secondFn);
The then method adds a step to the chain and then returns a new promise representing the eventual result of the extended chain. You can add as many steps as you like.
So far, we have set up our chain of functions, but nothing has actually happened. You get things started by calling deferred.resolve, specifying the initial value you want to pass to the first actual step in the chain:
deferred.resolve('initial value');
And then...still nothing happens. To ensure that model changes are properly observed, Angular doesn't actually call the first step in the chain until the next time $apply is called:
deferred.resolve('initial value');
$rootScope.$apply();
// or
$rootScope.$apply(function() {
deferred.resolve('initial value');
});
So what about error handling? So far we have only specified a success handler at each step in the chain. then also accepts an error handler as an optional second argument. Here's another, longer example of a promise chain, this time with error handling:
var firstFn = function(param) {
// do something with param
if (param == 'bad value') {
return $q.reject('invalid value');
} else {
return 'firstResult';
}
};
var secondFn = function(param) {
// do something with param
if (param == 'bad value') {
return $q.reject('invalid value');
} else {
return 'secondResult';
}
};
var thirdFn = function(param) {
// do something with param
return 'thirdResult';
};
var errorFn = function(message) {
// handle error
};
var deferred = $q.defer();
var promise = deferred.promise.then(firstFn).then(secondFn).then(thirdFn, errorFn);
As you can see in this example, each handler in the chain has the opportunity to divert traffic to the next error handler instead of the next success handler. In most cases you can have a single error handler at the end of the chain, but you can also have intermediate error handlers that attempt recovery.
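As a rough sketch of such an intermediate recovery step, reusing the functions above: the error handler returns a fallback value, so the chain switches back to the success path and thirdFn still runs.
var promise = deferred.promise
    .then(firstFn)
    .then(secondFn, function (message) {
        // recover from a failure in firstFn by substituting a fallback,
        // so the chain continues down the success path
        return 'fallback value';
    })
    .then(thirdFn, errorFn);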
To quickly return to your examples (and your questions), I'll just say that they represent two different ways to adapt Facebook's callback-oriented API to Angular's way of observing model changes. The first example wraps the API call in a promise, which can be added to a scope and is understood by Angular's templating system. The second takes the more brute-force approach of setting the callback result directly on the scope, and then calling $scope.$digest() to make Angular aware of the change from an external source.
The two examples are not directly comparable, because the first is missing the login step. However, it's generally desirable to encapsulate interactions with external APIs like this in separate services, and deliver the results to controllers as promises. That way you can keep your controllers separate from external concerns, and test them more easily with mock services.
I expected a complex answer that will cover both: why they are used in general and how to use it in Angular
This is the plunk for angular promises MVP (minimum viable promise): http://plnkr.co/edit/QBAB0usWXc96TnxqKhuA?p=preview
Source:
(for those too lazy to click on the links)
index.html
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.1.5/angular.js"></script>
<script src="app.js"></script>
</head>
<body ng-app="myModule" ng-controller="HelloCtrl">
<h1>Messages</h1>
<ul>
<li ng-repeat="message in messages">{{ message }}</li>
</ul>
</body>
</html>
app.js
angular.module('myModule', [])
.factory('HelloWorld', function($q, $timeout) {
var getMessages = function() {
var deferred = $q.defer();
$timeout(function() {
deferred.resolve(['Hello', 'world']);
}, 2000);
return deferred.promise;
};
return {
getMessages: getMessages
};
})
.controller('HelloCtrl', function($scope, HelloWorld) {
$scope.messages = HelloWorld.getMessages();
});
(I know it doesn't solve your specific Facebook example but I find following snippets useful)
Via: http://markdalgleish.com/2013/06/using-promises-in-angularjs-views/
Update 28th Feb 2014: As of 1.2.0, promises are no longer resolved by templates.
http://www.benlesh.com/2013/02/angularjs-creating-service-with-http.html
(plunker example uses 1.1.5.)
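So on 1.2+ the controller in the example above would need to unwrap the promise itself instead of assigning it straight to the scope; a minimal sketch using the same HelloWorld factory:
.controller('HelloCtrl', function($scope, HelloWorld) {
    HelloWorld.getMessages().then(function (messages) {
        $scope.messages = messages;
    });
});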
A deferred represents the result of an asynchronous operation. It exposes an interface that can be used for signaling the state and the result of the operation it represents. It also provides a way to get the associated promise instance.
A promise provides an interface for interacting with its related deferred, and so allows interested parties to get access to the state and the result of the deferred operation.
When creating a deferred, its state is pending and it doesn't have any result. When we resolve() or reject() the deferred, it changes its state to resolved or rejected. Still, we can get the associated promise immediately after creating a deferred and even assign interactions with its future result. Those interactions will occur only after the deferred is rejected or resolved.
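A tiny sketch of that last point: an interaction registered while the deferred is still pending fires once the deferred is later resolved.
var deferred = $q.defer();

// register the interaction first, while the state is still pending
deferred.promise.then(function (result) {
    console.log('resolved with', result);
});

// resolve later; the handler above fires at that point (on the next digest)
deferred.resolve(42);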
Use a promise within a controller to check whether the data is available or not:
var app = angular.module("app",[]);
app.controller("test",function($scope,$q){
var deferred = $q.defer();
deferred.resolve("Hi");
deferred.promise.then(function(data){
console.log(data);
})
});
angular.bootstrap(document,["app"]);
<!DOCTYPE html>
<html>
<head>
<script data-require="angular.js#*" data-semver="1.3.0-beta.5" src="https://code.angularjs.org/1.3.0-beta.5/angular.js"></script>
</head>
<body>
<h1>Hello Angular</h1>
<div ng-controller="test">
</div>
</body>
</html>