Can promises timeout in AngularJS?

When using Angular's $q service, do promises eventually timeout on their own?
Some background: I have an Angular service making POSTs to a remote server, which in turn queries a MySQL database and sends back the result. Some queries return in under a second, while others are expected to take up to 20 minutes. The issue is that, exactly 4 minutes after sending the request, Chrome's dev tools report a net::ERR_EMPTY_RESPONSE.
We're using $q and $http in a service to facilitate the POSTs.
queryDB: function(query, page) {
    var deferred = $q.defer();
    $http.post(BASEPATH + "/filter", {data: query, page: page, token: localStorage.auth}, {timeout: 1200000})
        .success(function(response) {
            deferred.resolve(response);
        })
        .error(function(error) {
            deferred.reject(error);
        });
    return deferred.promise;
}
Here's the controller to which the promise returns:
$scope.queryDB = function(page) {
    var query = formatQuery($scope.query);
    FilterService.queryDB(query, page).then(function(res) {
        ...
    });
};
I know the call is arriving at the server thanks to some simple test logging. But what's odd to me is that the dev console indicates the connection was always "stalled" and never reached the server in the first place.
Again, this only happens with queries taking more than 4 minutes; otherwise everything behaves normally. So my best guess is that Angular is not waiting long enough to resolve/reject the promise. Is there any way to change this behavior?
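For reference: a $q promise never times out on its own; it stays pending until something resolves or rejects it, so a hard 4-minute cutoff almost certainly comes from a proxy or server timeout rather than from Angular. If a client-side deadline is wanted anyway, it has to be imposed externally. A minimal, framework-agnostic sketch with native promises (`withTimeout` is an illustrative name; with $q the same idea works via `$q.race` in newer AngularJS versions):

```javascript
// Reject if `promise` does not settle within `ms` milliseconds.
// Framework-agnostic sketch using native promises.
function withTimeout(promise, ms) {
  var timer;
  var timeout = new Promise(function (resolve, reject) {
    timer = setTimeout(function () {
      reject(new Error('Request timed out after ' + ms + ' ms'));
    }, ms);
  });
  return Promise.race([promise, timeout]).finally(function () {
    clearTimeout(timer); // don't keep the timer (or event loop) alive
  });
}
```

Note this only abandons the result on the client; the underlying HTTP request keeps running unless it is cancelled separately (with $http, via the `timeout` config).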

Related

How to stop double request to same url

AngularJS app here.
There are 2 controllers that do similar things.
In particular, each has an interval: every 10 seconds it calls its own service.
These 2 services also do similar things. Most importantly, they hit a URL that looks like this:
example.com/fromTimestamp=2019-11-21T15:13:51.618Z
As the two controllers start more or less at the same time, in the example above they could generate something like:
controller/service 1: example.com/fromTimestamp=2019-11-21T15:13:51.618Z
controller/service 2: example.com/fromTimestamp=2019-11-21T15:13:52.898Z
This is because the parameter is created in the service with this line:
var timestamp = fromTimestamp ? '&fromTimestamp=' +fromTimestamp.toISOString() : '';
So maybe there will be a difference of some seconds. Or even only a difference of milliseconds.
I would like to make only one request, and share the data fetched from http between the two services.
The most natural approach would seem to be using cache.
From what I could understand, this call should do the trick:
return $http.get('example.com/fromTimestamp=2019-11-21T15:13:51.618Z', {cache: true});
But looking in the dev tools, it is still making 2 requests to the server. I guess this is because they have 2 different URLs?
If that is the problem, what could be another approach to this problem?
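For `{cache: true}` to help, both controllers must generate the identical URL. One way (an assumption about the requirements, not from the original post) is to round the timestamp down to the shared 10-second polling interval, so near-simultaneous callers agree on the cache key:

```javascript
// Round a timestamp down to the nearest `bucketMs` so near-simultaneous
// callers build identical URLs (and can therefore share one cached GET).
function bucketTimestamp(date, bucketMs) {
  var rounded = Math.floor(date.getTime() / bucketMs) * bucketMs;
  return new Date(rounded).toISOString();
}
```

Both example timestamps above then map to the same URL parameter, so the second GET can be served from the $http cache.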
In my apps, when faced with this problem, I use the $q provider and a promise object to suspend all calls to the same endpoint while a singleton promise is unresolved.
So, if the app makes two calls very close together, the second call will not be attempted until the promise created by the first call is resolved. Further, I check the parameters of the calls, and if they are the same, the original promise object is returned for both requests. In your case, the parameters are always different because of the timestamp. In that case, you could compare the difference in time between the two calls, and if it is under a certain threshold in milliseconds, just return the first promise object. Something like this:
var promiseKeeper = {}; // singleton variable in the service

function callEndpoint(endpointName, apiUrl, payload, dataAsJson) {
  var currentRequestTime = Date.now();
  var kept = promiseKeeper[endpointName];
  if (
    kept &&
    angular.isDefined(kept.promise) &&
    /* kept.dataAsJson === dataAsJson && */
    currentRequestTime - kept.requestTime < 500
  ) {
    return kept.promise;
  } else {
    var deferred = $q.defer();
    var postRequest = $http.post(apiUrl, payload);
    postRequest
      .then(function(response) {
        promiseKeeper[endpointName] = {};
        if (payload.special) {
          deferred.resolve(response);
        } else {
          deferred.resolve(response.data.result);
        }
      })
      .catch(function(errorResponse) {
        promiseKeeper[endpointName] = {};
        console.error("Error making API request");
        deferred.reject(extractError(errorResponse));
      });
    promiseKeeper[endpointName] = {
      promise: deferred.promise,
      dataAsJson: dataAsJson,
      requestTime: currentRequestTime
    };
    return deferred.promise;
  }
}
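The same pattern can be sketched without Angular: cache the in-flight promise per key, and hand it back to callers arriving within the threshold (`makeDeduper` and the window value are illustrative, not from the original answer):

```javascript
// In-flight request de-duplication: callers that arrive within
// `windowMs` of an identical pending call share the first promise.
function makeDeduper(requestFn, windowMs) {
  var inFlight = {}; // key -> { promise, startedAt }

  return function (key) {
    var now = Date.now();
    var kept = inFlight[key];
    if (kept && now - kept.startedAt < windowMs) {
      return kept.promise; // reuse the pending request
    }
    var entry;
    var promise = requestFn(key).finally(function () {
      // allow fresh requests once settled, without clobbering
      // a newer entry that may have replaced this one
      if (inFlight[key] === entry) delete inFlight[key];
    });
    entry = { promise: promise, startedAt: now };
    inFlight[key] = entry;
    return promise;
  };
}
```

Because both callers receive the exact same promise object, there is no risk of the second caller observing a different result than the first.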

Hold requests using injector Angularjs

I would like to know if there is a way to hold requests in order to execute them one by one instead of all together?
Request 1
Request 2
Request 3
If I receive them almost together, the idea is to execute Request 1, and only after that request completes, execute the others.
Don't forget, I would like to do it using injectors.
Thanks in advance!
You can hold requests using AngularJS interceptors, by returning a promise from the request method:
authInterceptor.$inject = ['$q', '$rootScope'];
function authInterceptor($q, $rootScope) {
    return {
        request: function(config) {
            var deferred = $q.defer();
            if (condition) {
                deferred.resolve(config);
            }
            return deferred.promise;
        }
    };
}
In this example, the requests will be held until some condition is met.
What you can do is keep a handle to every request (the deferred variable in the example above), and send them by resolving each one in your desired order.
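The queueing itself can be sketched with native promises (`makeSerialQueue` is an illustrative name): each task starts only after every earlier task has settled, which yields the "Request 1 first, then the others" behavior.

```javascript
// Run async tasks strictly one at a time, in arrival order.
function makeSerialQueue() {
  var tail = Promise.resolve();
  return function enqueue(task) {
    var result = tail.then(function () {
      return task(); // start only after all earlier tasks settled
    });
    // keep the chain alive even if a task rejects
    tail = result.catch(function () {});
    return result;
  };
}
```

Each caller still gets its own task's promise back; inside an interceptor, resolving each held config through such a queue would release the requests one at a time.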

$q.all slower than sequential .then()?

I have some angular code that calls two separate backend services via $http.get. The backend is ASP.NET MVC 5.
I call the services via $http.get, and since I need a response from both services before continuing, I wrap the returned promises in $q.all. However, there seems to be a massive overhead when resolving the promises via $q.all compared to resolving them sequentially (i.e. calling the second service in the .then callback of the first promise).
The overhead appears in the TTFB (Time to first byte).
I can't figure out why $q.all would be slower than sequentially waiting for one promise to resolve before starting the next. In fact, I thought $q.all would be faster since it would allow me to start the second service call before the first has resolved.
Read on for implementation details.
These backend services are fairly lightweight:
ProductsController:
[HttpGet]
public Dictionary<string, PriceListTypeDto> GetPriceListTypesForProducts([FromUri] List<string> productErpIds)
{
// Work to get PriceListTypes. Work takes 40 ms on avg.
}
UserController:
[HttpGet]
public int? GetUserOrganizationId()
{
// work to get orgId. 1-10 ms runtime on avg.
}
Javascript functions that call these services:
var addPriceListTypes = function (replacementInfoObjects, productErpIds) {
return productService.getPriceListTypesForProducts(productErpIds) // Returns promise from $http.get
.then(function (response) {
// Simple work, takes 1 ms.
})
.catch(function () {
});
}
var addOrganizationSpecificDetails = function (replacementInfoObjects) {
return userContextService.getUserOrganizationId() // Returns promise from $http.get
.then(function (response) {
// Simple work, takes 1 ms.
})
.catch(function () {
});
};
Handling the promises:
Option 1: Takes ~600 ms before $q.all.then is called.
mapping-service.js:
var deferredResult = $q.defer();
var orgDetailsPromise = addOrganizationSpecificDetails(productInfoObjects);
var priceListPromise = addPriceListTypes(products, productErpIds);
$q.all([orgDetailsPromise, priceListPromise])
.then(function () {
deferredResult.resolve(productInfoObjects);
}).catch(function () {
deferredResult.reject();
});
return deferredResult.promise;
Performance via Chrome devtools:
Option 2: Takes ~250 ms before both promises are resolved:
mapping-service.js:
var deferredResult = $q.defer();
addOrganizationSpecificDetails(productInfoObjects)
.then(function () {
addPriceListTypes(productInfoObjects, productErpIds)
.then(function () {
deferredResult.resolve(productInfoObjects);
})
.catch(function () {
deferredResult.reject();
});
})
.catch(function () {
deferredResult.reject();
});
return deferredResult.promise;
Performance via Chrome devtools:
Where does the overhead in option 1 come from? What have I missed? I'm completely stumped here. Please let me know if you need more information.
I had a very similar problem when building a custom screen for Microsoft CRM a while back. I was using $q.all() and realized that by hitting the server with multiple requests at the same time, some of them failed or took a really long time to resolve. Eventually we did the same thing you did: chained the requests rather than firing them all at once.
I believe this might be a similar issue to the one we had. What I am trying to say is that I am not very surprised this is the case. I am not sure what exactly was causing our problem, but it was there and out of our hands (it was the online CRM, not self-hosted).
I know my answer does not really offer a solution, but I thought it might be an insight that gives you some peace of mind.
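For what it's worth, $q.all itself adds no such overhead on the client: with stubbed async work, waiting in parallel takes roughly as long as the slowest task, not the sum. A native-promise sketch (timings illustrative):

```javascript
// Compare parallel vs sequential waiting with stubbed async "services".
function fakeService(ms, value) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(value); }, ms);
  });
}

async function compare() {
  var t0 = Date.now();
  // parallel: both "requests" run concurrently
  await Promise.all([fakeService(80, 'prices'), fakeService(30, 'org')]);
  var parallel = Date.now() - t0;

  t0 = Date.now();
  // sequential: second "request" starts only after the first resolves
  await fakeService(80, 'prices');
  await fakeService(30, 'org');
  var sequential = Date.now() - t0;

  return { parallel: parallel, sequential: sequential };
}
```

If $q.all is measurably slower in a real app, the serialization is usually happening elsewhere, e.g. server-side (ASP.NET serializes concurrent requests from the same session unless session state is read-only or disabled) or in the browser's connection limit; chaining the calls simply hides that queueing.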

Don't execute a $resource request in angular when there is already a request running

I use an interval of 10 seconds for sending a request to get the most recent data:
var pollInterval = 10000;
var poll;
poll= $interval(function()
{
getNewestData();//$resource factory to get server data
}, pollInterval );
This works fine for 99% of the time, but if the internet connection is really slow (I have actually experienced this), it will send the next request before the current one has finished. Is there a way to just skip the current interval request if the previous one is still busy? Obviously I could use a boolean to track the state of the request, but I wonder if there is a better (native to Angular) way of doing this?
Use the $resolved property of the Resource object to check if the previous operation is done.
From the Docs:
The Resource instances and collections have these additional properties:
$promise: the promise of the original server interaction that created this instance or collection.
$resolved: true after first server interaction is completed (either with success or rejection), false before that. Knowing if the Resource has been resolved is useful in data-binding.
$cancelRequest: If there is a cancellable, pending request related to the instance or collection, calling this method will abort the request.
-- AngularJS ngResource $resource API Reference.
How about making the request, waiting for it to complete, and then waiting 10 seconds before making the same request again? Something along these lines:
var pollInterval = 10000;
var getNewestData = function () {
// returns data in promise using $http, $resource or some other way
};
var getNewestDataContinuously = function () {
getNewestData().then(function (data) {
// do something with the data
$timeout(function () {
getNewestDataContinuously();
}, pollInterval);
});
};
getNewestData is the function that actually makes the request and returns the data in a promise.
And once data is fetched, a $timeout is started with timer as 10 seconds which then repeats the process.
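Alternatively, the "skip the tick if the previous request is still running" idea from the question can be sketched with a plain guard flag (framework-agnostic; in Angular the same flag works inside the $interval callback):

```javascript
// Poll on a fixed interval, but skip a tick while the previous
// request is still pending instead of piling up requests.
function startPolling(getData, intervalMs) {
  var busy = false;
  var timer = setInterval(function () {
    if (busy) return; // previous request still running: skip this tick
    busy = true;
    getData()
      .then(function (data) { /* use the data */ })
      .catch(function () { /* log or ignore the error */ })
      .finally(function () { busy = false; });
  }, intervalMs);
  return function stop() { clearInterval(timer); };
}
```

Unlike the chained-$timeout approach above, this keeps a fixed tick rate when the network is fast and degrades to back-to-back requests when it is slow.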

How to use multiple $timeout.flush() and $httpBackend.flush() calls in a Jasmine spec for AngularJS (with PhantomJS) without getting $digest error

I am trying to write a unit test for an AngularJS service with Karma and Jasmine, using the PhantomJS browser. The service I want to test uses nested promises (for reading an IndexedDB database, getting responses from the server, and writing the response back to the database).
The same has to be done when writing to the database (write local db, post/put to server, update local db with server id). The tests I am writing all work except for the tests for handling a loss of connection to the server.
When that happens the sync() method sets the online status to false and creates a $timeout that repeatedly checks for a connection by doing a recursive call. If the connection is up again, all unsynced data will be synced (and the promises of the corresponding sync() calls will be resolved).
describe('Harmonized', function() {
    var ... // setting all 'global' variables
    [...]
    beforeEach(function() {
        // Address that the service sends a ping to, to check if online again
        pingGet = backend.when('GET', 'http://localhost:2403');
        pingGet.respond(0);
        testTablePut121 = backend.when('PUT', 'http://localhost:2403/test_table/121');
        testTablePut121.respond(function(method, url, data, headers) { ... }); // returns 200
        [...]
    });
[...]
it('should try to sync a SAVED element but should fail because of missing server connectivity', function() {
    var list;
    var timeoutCounter = 0;
    _checkOnline = harmonized._checkOnline; // harmonized is the service that has to be tested
    harmonized._checkOnline = function() {
        // only call the original function two times
        if (timeoutCounter++ < 2) {
            _checkOnline();
            // flush the $timeout and the $httpBackend
            timeout.flush(2000);
            backend.flush(1);
            // this also gives me an error because of the $digest already in progress
            console.log('after tick');
        }
    };
    all.getList().then(function(entries) {
        list = entries;
        testTablePut121.respond(0);
        pingGet = backend.expect('GET', 'http://localhost:2403');
        pingGet.respond(0);
        spyOn(list[0], 'sync').and.callThrough();
        list[0].save().then(
            // resolve is called when the server call was successful
            function() {
                expect(list[0]._synchronized).toBeTruthy();
                expect(list[0].sync.calls.count()).toBe(2);
                console.log('done');
            },
            // reject is called when the server responds with an error other than 0 (which is not the case in this test spec)
            null,
            // notify is called when the local db save is made
            function() {
                expect(list[0].sync.calls.count()).toBe(1);
                expect(list[0]._synchronized).toBeFalsy();
            });
        backend.flush(1);
    });
    backend.flush(1);
});
When the backend.flush() function is called a second time (to flush the HTTP request made in the save() function), I get an error telling me that a $digest is already in progress. Omitting that line doesn't flush the request (and I also get an error because of that).
Is there a way to flush everything at the right time without getting the $digest error?
