I'm having an issue where a long synchronous request will freeze Internet Explorer.
Let me explain the context: this is a web application which only supports IE8 and which can only use synchronous requests*.
We have a Silverlight component with a save button. When the user presses the button, some information is sent to the server using a synchronous XMLHttpRequest, then some other actions are done client-side in the Silverlight component.
The server-side part includes calculations that will sometimes take a while (several minutes).
In short, here is the code (C# Silverlight part):
ScriptObject _XMLHttpRequest; // browser XMLHttpRequest obtained through the Silverlight HTML bridge
_XMLHttpRequest.Invoke("open", "POST", url, false); // false = synchronous
_XMLHttpRequest.Invoke("send", data);
checkResponse(_XMLHttpRequest);
doOtherThings();
I know that the server does its work properly because I can see, in the verbose logs, the end of the page rendering for the "url" called from Silverlight.
However, in debug mode I can see that I never reach the "checkResponse" line. After calling the "send" line, IE freezes forever, and does not unfreeze even once the server log shows that "url" has been processed.
Also, I tried to add "_XMLHttpRequest.SetParameter("timeout", 5000)" between the "open" and the "send" lines. IE freezes for 5 seconds, then "checkResponse" and "doOtherThings" are executed. Then IE freezes again while the server-side calculations are processed, and doesn't unfreeze once the server is done with its work.
The IE timeout is supposed to be 3 hours (registry key ReceiveTimeout set to 10800000), and I also got rid of IE's 2-connection limit (MaxConnectionsPer1_0Server and MaxConnectionsPerServer set to 20).
Last important piece of information: there is no issue when the server-side part only takes a few seconds instead of several minutes.
Do you know where the freeze could come from (an IE bug, an XMLHttpRequest bug, something I have done wrong) and how I can avoid it?
Thank you !
Kévin B
*(While trying to solve my issue with the help of Google I found an incredible amount of "use async" and "sync is bad" posts, but I can't make this change in my app. Switching the application, the Ajax loads, and all the server-side calculations to asynchronous is a huge amount of work which has been quoted for our client and is a long-term objective. I need a short-term fix for now.)
Silverlight virtually requires that everything be done asynchronously. Any long-running synchronous process will hang the browser if run on the UI thread. If you never reach the 'checkResponse' line of code, it is possible that an unhandled exception was thrown on the previous line and is being swallowed. You can check in your browser's dev tools to see if there are any JavaScript errors. I am surprised that calling XMLHttpRequest synchronously works at all, since I would expect it to lock up the UI thread. But the solution depends on your definition of async.
You can try:
calling the sync XHR request on a background thread and then marshalling to the UI thread (e.g. with Dispatcher.BeginInvoke) when you are ready
setting up an XMLHttpRequest wrapper that makes the call in async mode and raises a callback in Silverlight on completion
using HttpClient or WebClient
While these options are async, they don't require your server code to be written any differently (it can stay synchronous). I have seen a web server process a call for over an hour before finally returning a response, at which time the Silverlight app raised a callback. You could even use tools like TPL async/await, or the co-routines available in many MVVM frameworks, to make the code appear very procedural/synchronous while still performing its actions asynchronously.
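For the second option, a minimal JavaScript sketch of such a wrapper might look like this (the function name and the way the result gets back into Silverlight are assumptions, e.g. a method exposed with HtmlPage.RegisterScriptableObject, not code from your app):

// Hypothetical async wrapper the Silverlight component could Invoke instead of the raw XHR.
function postAsync(url, data, onDone) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", url, true);          // true = async, so the UI thread stays responsive
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            // Hand the result back to Silverlight, e.g. through a method exposed
            // with [ScriptableMember] and HtmlPage.RegisterScriptableObject.
            onDone(xhr.status, xhr.responseText);
        }
    };
    xhr.send(data);
}

Silverlight would then call checkResponse and doOtherThings from that callback rather than on the line after "send".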
I hope you already solved your issue, but maybe this will be helpful to whoever may come across this post.
So I'm not sure if there is truly a way to make this happen.
Basically I have an Angular app with a Node.js backend that shows long-running queries in our production environment. This is set on an interval to refresh the info every 20 seconds. The issue is this:
Buttons on the page, if clicked while a refresh is occurring, will not fire. They have to wait until the whole stream of calls to refresh the data has finished, and even then they sometimes do not fire.
If I happen to catch it while no refresh is occurring, the button works as expected.
Is there a way to force this action to take precedence over the refresh so it occurs immediately even if there are other processes occurring?
Thanks!
Based on your clarification in your comments, it sounds like you're hitting your browser's limit on how many connections can be made to any given hostname. Look at the Connections per Hostname column here.
To fix this, make fewer calls at once. For example, you could implement your own queuing system for your refresh-related calls instead of firing them all off at once, leaving plenty of connections open for other requests (like your button).
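As a rough sketch (the endpoint list and the $http/$q wiring are illustrative assumptions, not your code), the refresh calls could be chained so that only one of them is in flight at a time:

// Run the refresh calls one at a time instead of all in parallel,
// so the remaining connections stay free for user-initiated requests.
var refreshEndpoints = ['/api/queries/long-running', '/api/queries/blocking', '/api/queries/stats'];

function refreshSequentially($http, $q) {
    return refreshEndpoints.reduce(function (chain, url) {
        return chain.then(function () {
            return $http.get(url);   // at most one refresh connection open at any moment
        });
    }, $q.when());
}

Calling something like this from your 20-second interval keeps the button's own requests from being starved of connections.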
I am developing an AngularJS application using Ionic. For Android, I am using Crosswalk for better performance.
I've noticed that when running on Android, I am facing problems with HTTP requests getting stuck when trying to load large images. If any request gets "stuck" (i.e. no error, but in my Chrome developer inspector I see the HTTP request as "pending"), then all subsequent requests go into the "pending" state too. This problem does not exist on iOS.
The code is pretty simple:
<span ng-repeat="monitor in monitors">
<img ng-src="http://server.com/monitorId={{monitor}}?view=jpg" />
</span>
This results in around 6 GETs of images of size 600x400 and the images keep changing (the server keeps changing the image)
What I've observed specifically on Android is that after a few successful iterations, the HTTP GET behind this img ng-src gets stuck in "pending" as I said above, and then all subsequent HTTP requests also go into "pending" and none of them ever get out of that state.
I am guessing there is some sort of limit for network queue that is getting filled up.
So how do I solve this issue?
a) One way I could think of is to put a timeout, but ng-src does not seem to have a timeout function. My thought is that on timeout the HTTP request would be cancelled, like with normal $http.get calls, and this should help.
b) Maybe there is a way to flush all HTTP requests. I saw on SO that someone created a new directive for this: AngularJS abort all pending $http requests on route change. But that requires me to replace $http with the new directive, while I am using img ng-src.
c) Neither a nor b is ideal. I'd like to know what is really going on: why does Android balk at this while iOS does not (comparing a Galaxy S3 with an iPhone 5s)? So if you have any other solutions, I'd love to hear them.
thanks
Wow, this was quite a learning experience. I managed to implement a workaround.
Edited: For those who think this is due to the 6-connection limit, please go through https://code.google.com/p/chromium/issues/detail?id=234779
The problem, specifically, is that Chrome (at least with Crosswalk, and maybe Chrome in general) has trouble if you open multiple streams of HTTP connections that don't close for a long time. In my case the "img-src" was pointing to an image URL that the server was changing 3 times a second. Each image takes a second or two to download, so data keeps streaming in.
Something about this puts Chrome in a tizzy, and it gets into an eternal pending loop for any HTTP request after the first pending one, even unrelated HTTP requests.
Fortunately, I managed to implement a workaround: the server had an option to serve just one image (not dynamic). I used that URL and then implemented a $interval timer in that controller that refreshes that URL every second, effectively retrieving images every second (or any other timer value I want).
Chrome has NO problem dealing with the HTTP requests in this way because they are getting closed predictably, even if it means more HTTP requests.
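A rough sketch of what that controller ends up looking like (the module, controller, and URL names here are placeholders, not my exact code):

// Poll a single static image URL on a timer instead of holding long-lived image streams open.
angular.module('app').controller('MonitorCtrl', function ($scope, $interval) {
    var baseUrl = 'http://server.com/monitor/single.jpg';   // placeholder single-image URL

    function refresh() {
        // A timestamp query parameter makes each request a fresh, short-lived GET
        // that the browser can open and close predictably.
        $scope.imgUrl = baseUrl + '?t=' + Date.now();
    }

    refresh();
    var timer = $interval(refresh, 1000);                   // one image per second

    $scope.$on('$destroy', function () {
        $interval.cancel(timer);                            // stop polling when the view is destroyed
    });
});

The template then binds to it with <img ng-src="{{imgUrl}}" />.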
Phew. Not the solution I'd want, but it works very well.
And the gallant iOS handles this well too (it handled the original scenario perfectly too)
When making a call with the $http service, if the user navigates away from the page while the call is still in progress, it seems the HTTP request is aborted and the promise calls its error() handler. However, I can't seem to figure out how to detect when this error is triggered by an actual problem getting a response from the remote server (server down, network down, timeout, etc.), versus when the request is simply cancelled because the user navigated. Both cases appear to pass the same virtually empty response state arguments to the error handler (data is empty, status is 0, etc.).
I considered using a beforeUnload event handler to detect navigation and assuming that any error encountered after that event should be ignored; however, this event is not supported in mobile Safari and perhaps other browsers, so it's not a reliable solution.
This seems like it would be a common issue - how are folks generally dealing with it?
HTML5 provides a convenient History API to deal with this kind of situation. However, it varies from browser to browser. In order to ensure (up to a point) cross-browser compatibility, I'd recommend using the History.js library.
The library provides a useful event you can handle: statechange
For your use case you could warn the user that a request has been initiated and respond to that action accordingly.
https://github.com/browserstate/History.js/
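As a rough sketch of that idea (the flag and handler names are illustrative, not part of History.js itself), you could record that a navigation happened and use that to decide whether an $http error is worth reporting:

// Remember that the user navigated, so a later "status 0" error can be ignored.
var navigatedAway = false;

History.Adapter.bind(window, 'statechange', function () {
    // History.getState() is available here if you need the new URL/state.
    navigatedAway = true;
});

$http.get('/api/data').then(function (response) {
    // normal success handling
}, function (response) {
    if (navigatedAway || response.status === 0) {
        return;                        // most likely aborted by navigation, not a server failure
    }
    // handle a genuine error (server down, network down, timeout, ...)
});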
I implemented the observer design pattern in my application, but my app sends requests over HTTP to a remote server, and they take some time to resolve.
So, naturally, I did the sending and receiving part in a separate thread.
Can you please tell me how to make a window that observes the RequestObject modify its state based on the state of the request?
In the debugger's step-by-step mode the window runs the code that I want it to, but the window never refreshes itself.
Since I don't have a sample of your code, I don't know the specifics of how you are updating your UI. If you are attempting to update the UI from the separate thread, that could be your issue. This may be of some help: http://msdn.microsoft.com/en-us/magazine/cc188732.aspx
You may also consider using the Task Parallel Library to perform your async operations.
http://msdn.microsoft.com/en-us/library/dd997423.aspx
Simply calling FB.init (right before </body>) and then FB.getLoginStatus(callback) doesn't fire the callback function.
After some debugging, I think the SDK is stuck in the "loading" (i.e. FB.Auth._loadState == 'loading') phase and never gets to "loaded", so all callbacks are queued until the SDK has loaded.
If I force-fire the "loaded" event during debugging (with FB.Event.fire('FB.loginStatus', 'loaded'), in case you're interested), then the callbacks are invoked correctly.
Extra details that might be relevant:
My app is a facebook iframe app (loaded via apps.facebook.com/myapp)
I'm using IE9. The same behavior happens in Chrome
The app is hosted in http://localhost
What's going on? Why does the SDK never get to "loaded"?
Thanks
UPDATE: Just tried it on Chrome and it worked (not sure why it didn't work before). Still doesn't work in IE
I had this same problem in Firefox 3.5 on Windows, but only on the very first login to the page (probably because it was a slower machine and there were some weird timing issues going on).
I fixed it by forcing FB to refresh the login status cookie every time it checks:
FB.getLoginStatus(callback, true); //second argument forces a refresh from Facebook's server.
Without "force=true", sometimes it wouldn't fire the callback.
I had the exact same problem, and I solved it by disabling "Secure Browsing" in the Facebook security settings. Keeping Secure Browsing on forces the pages to "https", but I had no "Secure Canvas URL" set up, and this gave me a lot of errors in the console as well.
Hope this may help someone :)
In my experience, getLoginStatus() never calls the callback in Firefox when third-party cookies are disabled.
The original poster mentioned his application is hosted on http://localhost. I've never had luck with that, and believe it will cause problems.
Just today, I've had problems where getLoginStatus is not calling the callback on any browser, unless the user is actually connected to the app! I'm hoping this is a bug on facebook's end that they will solve.
Yet another possibility for FB.getLoginStatus not firing its callback is when using a "test" user account that has not been authorized to view that application. It's pretty bad that Facebook doesn't give you any error messages.
I have also seen failed callbacks on bad appIds and redirectUrls.
I also ran into this issue specifically in Chrome. I tried calling it on page load and after a user-initiated action with no success.
It turned out that it was not a cross-domain issue. The getLoginStatus() call was being blocked by the Un-Passwordise extension in Chrome. As soon as I disabled the extension, it worked perfectly, even on page load.
More info about this issue here: Chrome-only cross-domain scripting errs in Facebook iFrame App upon FB.Login(..)
I understand that this question is a little old now, but I ran across it searching for solutions.
Double-check what you have set in your Facebook app configuration under the section "Website with Facebook Login". The Site URL domain must match the domain your page with FB.getLoginStatus (and other related auth JavaScript) is served from.
After hours of struggling, I realized that I could not reuse an existing app configuration I had on a new server and had to create a new app to handle the website login for this new server.
The other answers are probably equally valid in your specific case, but since there may be others like me who have struggled for a while on this, hopefully this gives you one other place to check. Making a new app with the correct Site URL was the answer in my particular case.