In an AngularJS app I'm working on, I am using a service to periodically (in a $timeout) make a GET request to a URL in my API (both the Angular app and the API are being served from port 5000 on localhost).
For some reason, it appears that $http is not actually sending the GET. For each $http.get(), the .error() is called with empty data and a status of 0. When I check in my server log (I'm running a Ruby on Rails backend with the Unicorn gem for my server), it appears that the server never receives the request from Angular.
Here's the function in my service:
updateUserStatus = () ->
  $http.get('/api/v1/status')
    .success (statusData) ->
      # update the variable and notify the observers
      this.userStatus = statusData
      notifyObservers()
      startStatusTimeout()
    .error (error, status) ->
      # if there's an error, log it
      console.log 'error:'
      console.log error
      console.log status
      startStatusTimeout()
What's really odd is that it only happens sometimes. When it stops working, I can change the URL in the $http.get() to '/api/v1/status.json', and it works. For a while. Then I switch it back and it works again, for a while... obviously there is some greater issue at play.
I've been racking my brain for a few days now, and I've seen a bunch of similar issues on SO, but they all seem to be solved by implementing CORS in Angular, which I don't think applies to my situation because everything is coming from localhost:5000. Am I wrong? What's going on?
For reference, I'm using Angular version 1.0.7.
I had the same problem.
Check your code to see whether this happens after events that are fired directly from the DOM and are therefore unknown to Angular.
If so, you need to call $scope.$apply(); after the GET request so that Angular runs a digest cycle and actually processes it.
I'm fairly new to Angular so I'm not sure this is the best practice for using Angular, but it did work in my case.
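For example, a minimal sketch of what that looks like, assuming the request is triggered from a raw DOM event handler; someElement, $http, and $scope stand in for whatever is available in your code:

someElement.addEventListener('click', function () {
  $http.get('/api/v1/status').success(function (statusData) {
    $scope.userStatus = statusData;
  });
  $scope.$apply(); // tell Angular a digest is needed so the request gets processed
});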
See this similar question for a better explanation.
I am developing an AngularJS application using Ionic. For Android, I am using Crosswalk for better performance.
I've noticed that when running on Android, HTTP requests get stuck when trying to load large images: if any request gets "stuck" -- i.e. no error, but in the Chrome developer inspector I see the HTTP request as "pending" -- then all subsequent requests go into the "pending" state too. This problem does not exist on iOS.
The code is pretty simple:
<span ng-repeat="monitor in monitors">
<img ng-src="http://server.com/?monitorId={{monitor}}&view=jpg" />
</span>
This results in around 6 GETs for images of size 600x400, and the images keep changing (the server keeps replacing the image).
What I've observed specifically on Android is that after a few successful iterations, the HTTP GET behind this img ng-src gets stuck in "pending" as I said above, and then all subsequent HTTP requests also become "pending" and none of them ever get out of that state.
I am guessing there is some sort of limit for network queue that is getting filled up.
So how do I solve this issue?
a) One way I can think of is to put a timeout on the request -- but ng-src does not seem to have a timeout option. My thought is that on timeout the HTTP request would be cancelled, like the timeout option of normal $http.get calls, and this should help.
b) Maybe there is a way to flush all pending HTTP requests. I saw on SO that someone created a directive for this: AngularJS abort all pending $http requests on route change. But that requires me to replace $http with the new directive, while I am using img ng-src.
c) Neither (a) nor (b) is ideal. I'd like to know what is really going on: why does Android balk at this while iOS does not (comparing a Galaxy S3 with an iPhone 5s)? So if you have any other solutions, I'd love to hear them.
Thanks.
Wow, this was quite a learning experience. I managed to implement a workaround.
Edited: For those who think this is due to the 6-connection limit, please go through https://code.google.com/p/chromium/issues/detail?id=234779
The problem, specifically, is that Chrome (at least with Crosswalk, and maybe Chrome in general) has trouble if you open multiple HTTP connections that don't close for a long time. In my case the img ng-src was pointing to an image URL that the server was changing 3 times a second. Each image takes a second or two to download, so data keeps streaming in.
There is something about this that puts Chrome in a tizzy, and every HTTP request after the first pending one goes into an eternal "pending" state, even unrelated HTTP requests.
Fortunately, I managed to implement a workaround: the server had an option to serve just one static image (not dynamic). I used that URL and then implemented a $interval timer in that controller to refresh that URL every second, effectively retrieving images every second (or any other interval I want).
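Roughly, the controller ended up looking like the sketch below (not my exact code: the controller name, URL, one-second interval, and the cache-busting query parameter are placeholders/assumptions):

app.controller('MonitorCtrl', ['$scope', '$interval', function ($scope, $interval) {
  var refresh = $interval(function () {
    // Changing the bound URL makes ng-src fire a new, short-lived GET each tick.
    $scope.imageUrl = 'http://server.com/monitor.jpg?t=' + Date.now();
  }, 1000);

  $scope.$on('$destroy', function () {
    $interval.cancel(refresh); // stop polling when the view is destroyed
  });
}]);

The template then binds to that single, predictable URL: <img ng-src="{{imageUrl}}" />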
Chrome has NO problem dealing with the HTTP requests in this way because they are getting closed predictably, even if it means more HTTP requests.
Phew. Not the solution I'd want, but it works very well.
And the gallant iOS handles this well too (it handled the original scenario perfectly too)
I'm trying to use Angular's $http with both a cache and an interceptor.
The quick question:
Currently, when Angular gets the response from the server, it first caches it and then passes it through the interceptor.
Is it possible to force Angular to first pass it through the interceptor and only then cache it?
The long issue:
The server responds to every call with a format similar to:
{permission: BOOL, data:[...]}
In the response interceptor I check the permission; if it is OK I throw the wrapper away and pass only the data field to the application level. If it fails I reject and pop up an error, etc. (Maybe I should do it in a transformResponse function, but I'd end up facing the same issue.)
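For reference, the response interceptor is along these lines (a minimal sketch, assuming the {permission, data} envelope above; the module and interceptor names are made up):

angular.module('app').factory('envelopeInterceptor', ['$q', function ($q) {
  return {
    response: function (response) {
      var body = response.data;
      if (body && body.permission) {
        response.data = body.data; // hand only the payload to application code
        return response;
      }
      return $q.reject(response); // otherwise reject, show the error popup, etc.
    }
  };
}]);

It is registered with $httpProvider.interceptors.push('envelopeInterceptor');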
Then for some API calls I'm requesting a bunch of resources like that:
/resource/ALL
And it obviously caches this request and its response, but what I want to do next is fake-cache every individual resource that I received.
So forthcoming calls to /resource/{some resource id} are already cached, because I've already received them in the call where I requested ALL.
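Mechanically, the priming itself looks roughly like this (a sketch, assuming the default $http cache, an id field on each resource, and the [status, data, headers, statusText] entry shape that recent 1.x versions store; older versions differ):

angular.module('app').run(['$http', '$cacheFactory', function ($http, $cacheFactory) {
  var httpCache = $cacheFactory.get('$http');

  $http.get('/resource/ALL', { cache: true }).then(function (response) {
    angular.forEach(response.data, function (resource) {
      // Prime the cache so a later GET /resource/{id} never hits the server.
      httpCache.put('/resource/' + resource.id, [200, resource, '', 'OK']);
    });
  });
}]);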
The problem I'm facing is that when I want to fake-cache at the application level, I've lost the "{permission: BOOL}" part, because I threw it away in the interceptor.
Some notes:
1- Of course I could also fake the permission part and just hardcode it, but I feel it's not an option, since if I later add / modify / remove metadata it's another place I have to look at.
2- Another way would be to not throw away the metadata in the interceptor / transformResponse, but again this is not an option since I don't want to select the 'data' field every time I call $http at the application level.
So I think the best option for me would be to cache after the interceptor, and not before.
Hope I made the issue clear, and any answer is welcome!
I am using Angular 1.2.
I have configured:
$locationProvider.hashPrefix('!');
$locationProvider.html5Mode(false);
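For context, those two lines live in a config block, roughly like this (the module name 'app' is a placeholder):

angular.module('app').config(['$locationProvider', function ($locationProvider) {
  $locationProvider.hashPrefix('!');
  $locationProvider.html5Mode(false);
}]);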
When using IE8 and IE9 I got two requests:
POST /base/action
GET /base/#!/action
So the parameters from the first POST are tragically lost...
The simple solution would be to just send POST /base/#!/action on IE8 and 9 (so there is no redirection), but this is partially out of our control.
I was wondering if there is a way to preserve the POST data for request 2, even if some server-side processing is needed (for example, to expose the POST data to JS).
EDIT:
Some more details on the scenario: I am talking about the first request to our site. The request is made by a third party, and it must be made using POST. As a result, the user should see the first page of our Angular application. I am not talking about a request made with $http, and this is not a call to an API; the expected result is an HTML page. The POST request is coming from another page, using the "classical" form submit method.
I finally decided to use the #! fallback mode in all browsers, so I do not have this issue and I keep things simple.
I'm running some integration tests with Jasmine and a custom angular-mocks module (to allow real HTTP calls).
When doing a $http.delete (HTTP DELETE) on a URL, the call is successful (the backend receives it), but on PhantomJS's side I don't get any callback (the promise isn't resolved).
I'm not having this problem with Chrome or Firefox, so I suspect PhantomJS is somehow buggy.
Any idea if there is something I could do?
PS: I already filed an issue on PhantomJS's GitHub.
Well, actually it wasn't that the callback never fired at all. After increasing Jasmine's timeout and Karma's captureTimeout to 60s and waiting for about 20 seconds, I finally got my callback.
But it's kind of weird that it takes so much time with PhantomJS.
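For reference, the two timeouts I mean are set roughly like this (a sketch, not my exact config; 60000 ms is just the value mentioned above, and the Jasmine constant may differ depending on your Jasmine version):

// karma.conf.js
module.exports = function (config) {
  config.set({
    captureTimeout: 60000 // give PhantomJS up to 60s before Karma gives up on it
  });
};

// at the top of the spec file
jasmine.DEFAULT_TIMEOUT_INTERVAL = 60000; // allow slow async callbacks to resolve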
I have built a simple SPA with AngularJS/SignalR that sends a notification to my hub Hello when the app starts.
On the client side I have a list of notifications managed by a controller. This list gets updated only once, whereas I can see that my callback is called every time thanks to the console.log (and the debugger shows that this.messages grows with every notification received).
I don't get why the UI only updates on the first call (which is the one the current client emitted).
Here is the code that works only once:
NotificationCtrl.prototype.hello = function() {
console.log("hello");
this.messages.push(new Notification("Tom", "Now", "Is now connected"));
}
You need to call $scope.$apply() because the callback is not scope-aware, since it's not called from Angular.
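Something along these lines should work (a sketch, assuming $scope was injected and kept on the controller instance as this.$scope; adapt it to however your controller holds its scope):

NotificationCtrl.prototype.hello = function () {
  var self = this;
  self.$scope.$apply(function () {
    // Mutate the model inside $apply so Angular runs a digest and updates the UI.
    self.messages.push(new Notification("Tom", "Now", "Is now connected"));
  });
};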
I think you would benefit a lot from my library, SignalR.EventAggregatorProxy.
It's designed around the event aggregation pattern and is perfect for MV*-enabled sites with Knockout or Angular.
Have a look at the wiki for setting it up:
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/wiki
Demo project
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/tree/master/SignalR.EventAggregatorProxy.Demo.MVC4
Recent blog post I did about it
http://andersmalmgren.com/2014/05/27/client-server-event-aggregation-with-signalr/
Install using NuGet:
Install-Package SignalR.EventAggregatorProxy