Channel API - sometimes I don't get a message - google-app-engine

I noticed that every call to create_channel() replaces the iframe URL.
Is it possible that, because of a repeated call to create_channel(), my iframe is replaced by a new iframe but the binding between the client ID and the iframe URL is not updated?
For example:
I called "channel.create_channel(unique_id)" - and I got back the JS with 123.talkgadget.google....as an iFrame.
Then,
I called again with the same client id "channel.create_channel(unique_id)" - and I got back the JS with 456.talkgadget.google....as an iFrame.
Is there any chance that if I call now "channel.send_message(unique_id,msg)"
the message will be sent to 123.talkgadget.google instead of 456.talkgadget.google resulting that I didn't get the message?
Thanks!

I'm not 100% certain about this answer. I haven't tested thoroughly; it's a little hard to test since the dev_appserver behavior is quite different from the real server behavior.
I believe I have seen this behavior before (missing messages).
If you close the old channel from the client side it seems to make everything work properly.
I have not tried handling the case where you lose your internet connection and you can't close from the client side though.
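To illustrate the suggestion above, here is a minimal client-side sketch of closing the old channel before opening a new one. It uses the standard goog.appengine.Channel JavaScript API; the idea that your page receives a fresh token from your own handler (which calls channel.create_channel(unique_id) server-side) is an assumption about your setup, not part of the original question.
var socket = null;

function openChannel(token) {
  // Close the previous socket first, so the old talkgadget iframe/connection
  // is torn down before the new token takes over.
  if (socket) {
    socket.close();
  }
  var channel = new goog.appengine.Channel(token);
  socket = channel.open();
  socket.onmessage = function (message) {
    console.log('got message', message.data);
  };
  socket.onerror = function (error) {
    console.log('channel error', error.code, error.description);
  };
  socket.onclose = function () {
    console.log('channel closed');
  };
}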

Related

Angular request issues

I have an old project based on Angular v1.7.8 and a simple task: add one more input, similar to an existing input on a form. Sounds easy, right?
I did it; the new input was almost the same, so I added it the same way the old one was added, and it mostly works except for one weird thing.
The current state displays correctly, but when you try to save, only the unchanged value is sent. Say the value was 1 and you change it to 2: you can see 2 in the request payload, and it is still 2 right before Angular's internal functions send the AJAX request, but somehow, inside the xhr.onload function, it is already 1 again. It is also 1 in Chrome's request preview. (The old input works as intended.)
So in Chrome, Network -> Headers -> Request Payload is correct,
while Network -> Preview is incorrect.
I am really confused and don't know how to fix it using Angular. I can send the AJAX request without Angular, but I need to understand why it works this way.

Weird data binding behavior with socket.io

I am creating a socket.io app using Angular 2 on the frontend, and I am getting very weird behavior that I have never seen before when working with socket.io. I have no idea whether my code is causing the issue or whether it is something in the interaction between Angular 2 and socket.io, but if it is my code, I can't say which code I would need to post.
The mysterious behavior: my first instinct for testing whether my socket connections are working properly is to open an incognito tab, go to my project site, log in as a different user, and check whether API requests are emitted properly across the users. However, right now EVERY action made by either user also happens to the other user. For example, if I type into a form on one client, the other client's form gets updated with the same information. If I click the form's submit button to post the data, the other client's submit button is clicked as well. Occasionally it happens when navigating between states, where the other client also navigates to that state. The behavior also occurs when logging in from a completely different computer, so I imagine it is an issue with how socket.io emits data.
All the clients are connecting and disconnecting appropriately and are getting assigned unique socket IDs.
It turns out the solution was a bit simpler than I was expecting it to be. The strange behavior was caused by a conflict between npm live-server and my socket.io connection running at the same time. I still cannot explain why the conflict manifested as this sort of strange behavior, but at least I got it to stop by running the app as an Express app serving up the index.html.
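For anyone wanting a concrete starting point, here is a minimal sketch of what serving the app from Express (instead of live-server) could look like. The directory name, port, and the way socket.io is attached are assumptions about this particular project, not a description of the original code.
// server.js - minimal Express setup (paths and port are assumptions)
const express = require('express');
const http = require('http');
const path = require('path');

const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server);                 // attach socket.io to the same HTTP server

app.use(express.static(path.join(__dirname, 'dist')));   // serve the built frontend

// Fall back to index.html so client-side routing keeps working.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});

io.on('connection', (socket) => {
  console.log('client connected:', socket.id);
});

server.listen(3000, () => console.log('listening on http://localhost:3000'));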
If anyone could explain why this might have been happening, I would love to hear it.

Using img ng-src in Android for large dynamic images causes future HTTP requests to get queued in pending state

I am developing an AngularJS application using Ionic. For Android, I am using Crosswalk for better performance.
I've noticed that when running on Android I run into problems with HTTP requests getting stuck while loading large images: if any request gets "stuck" -- i.e. no error, but in the Chrome developer inspector I see the HTTP request as "pending" -- then all subsequent requests go into the "pending" state too. This problem does not exist on iOS.
The code is pretty simple:
<span ng-repeat="monitor in monitors">
  <img ng-src="http://server.com/monitorId={{monitor}}?view=jpg" />
</span>
This results in around 6 GETs of 600x400 images, and the images keep changing (the server keeps changing the image).
What I've observed, specifically on Android, is that after a few successful iterations the HTTP GET behind this img ng-src gets stuck in pending as described above, and then all subsequent HTTP requests also go into pending and none of them ever get out of that state.
I am guessing there is some sort of limit for network queue that is getting filled up.
So how do I solve this issue?
a) One way I can think of is to put a timeout on the request -- but ng-src does not seem to have a timeout option. My thought is that on timeout the HTTP request would be cancelled, as with normal $http.get calls, and this should help.
b) Maybe there is a way to flush all HTTP requests. I saw on SO that someone created a new directive for this: AngularJS abort all pending $http requests on route change --> but this requires me to replace $http with the new directive, while I am using img ng-src.
c) Neither a nor b is ideal. I'd like to know what is really going on: why does Android balk at this while iOS does not (comparing a Galaxy S3 with an iPhone 5s)? So if you have any other solutions, I'd love to hear them.
thanks
Wow, this was quite a learning experience. I managed to implement a workaround.
Edited: For those who think this is due to the 6-connection limit, please go through https://code.google.com/p/chromium/issues/detail?id=234779
The problem, specifically, is that Chrome (at least with Crosswalk, and maybe Chrome in general) has trouble when you open multiple streams of HTTP connections that don't close for a long time. In my case the img ng-src was pointing to an image URL that the server was changing 3 times a second. Each image takes a second or two to download, so data keeps streaming in.
There is something about this that puts Chrome in a tizzy, and it gets into an eternal pending loop for any HTTP request after the first pending one - even unrelated HTTP requests.
Fortunately, I managed to implement a workaround: the server had an option to serve just one static image (not dynamic). I used that URL and implemented an $interval timer in the controller that refreshes that URL every second - effectively retrieving a new image every second (or at whatever other interval I want).
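A rough sketch of that $interval refresh, for anyone who wants to try the same workaround. The module and controller names, the monitorId property, and the exact URL shape (including the cache-busting timestamp parameter) are assumptions rather than the original code; the template would bind the image with <img ng-src="{{imageUrl}}" />.
angular.module('app').controller('MonitorCtrl', function ($scope, $interval) {
  var refresh;

  // Hypothetical static single-image URL; the timestamp defeats caching so each
  // GET completes and closes instead of streaming indefinitely.
  function buildUrl() {
    return 'http://server.com/image?monitorId=' + $scope.monitorId + '&view=jpg&t=' + Date.now();
  }

  $scope.imageUrl = buildUrl();

  refresh = $interval(function () {
    $scope.imageUrl = buildUrl();   // ng-src picks up the new URL and issues a fresh GET
  }, 1000);

  $scope.$on('$destroy', function () {
    $interval.cancel(refresh);      // stop polling when the controller is destroyed
  });
});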
Chrome has NO problem dealing with the HTTP requests in this way because they are getting closed predictably, even if it means more HTTP requests.
Phew. Not the solution I'd want, but it works very well.
And the gallant iOS handles this well too (it handled the original scenario perfectly too)

IE8 freeze caused by long synchronous xmlhttprequest from silverlight

I'm having an issue where a long synchronous request will freeze Internet Explorer.
Let me explain the context: this is a web application that only supports IE8 and can only use synchronous requests*.
We have a Silverlight component with a save button. When the user presses the button, some information is sent to the server using a synchronous XMLHttpRequest, then some other actions are done client-side in the Silverlight component.
The server-side part includes calculations that will sometimes take a while (several minutes).
In short, here is the code (C#, Silverlight part):
ScriptObject _XMLHttpRequest;   // wraps the browser's XMLHttpRequest object
_XMLHttpRequest.Invoke("open", "POST", url, false);   // false = synchronous
_XMLHttpRequest.Invoke("send", data);
checkResponse(_XMLHttpRequest);   // never reached, see below
doOtherThings();
I know that the server does its work properly because I can see in the verbose logs the end of the page rendering for the "url" called from Silverlight.
However, in debug mode I can see that I never reach the "checkResponse" line. After the "send" line, IE freezes forever and does not unfreeze even once the server log shows that "url" has been processed.
Also, I tried adding _XMLHttpRequest.SetParameter("timeout", 5000) between the "open" and "send" lines. IE freezes for 5 seconds, then "checkResponse" and "doOtherThings" are executed. Then IE freezes again while the server-side calculations are processed, and it does not unfreeze once the server is done with its work.
The IE timeout is supposed to be 3 hours (registry key ReceiveTimeout set to 10800000), and I also got rid of IE's 2-connection limit (MaxConnectionsPer1_0Server and MaxConnectionsPerServer set to 20).
One last important piece of information: there is no issue when the server-side part only takes a few seconds instead of several minutes.
Do you know where the freeze could come from (an IE bug, an XMLHttpRequest bug, something I have done wrong) and how I can avoid it?
Thank you !
Kévin B
*(While trying to solve my issue with the help of Google I found an incredible number of "use async" and "sync is bad" posts, but I can't make that change in my app. Switching the application, the AJAX loads, and all the server-side calculations to asynchronous is a huge amount of work that has been quoted for our client and is a long-term objective. I need a short-term fix for now.)
Silverlight virtually requires that everything be done asynchronously. Any long running synchronous process will hang the browser if run on the UI thread. If you never reach the 'checkResponse' line of code it is possible that an unhandled exception was thrown on the previous line, and it is being swallowed. You can check in the dev tools of your browser to see if there are any javascript errors. I am surprised that calling XMLHttpRequest synchronously works at all since I would expect it to lock up the UI thread. But, the solution depends on your definition of async.
You can try:
calling the sync XHR request on a background thread and then marshalling to the UI thread (eg with Dispatcher.BeginInvoke) when you are ready
setting up an XMLHttpRequest wrapper that makes the call in async mode and raises a callback in Silverlight on completion (sketched below)
using HttpClient or WebClient
While these options are async, they don't require your server code to be written any differently (it can stay synchronous). I have seen a web server process a call for over an hour before finally returning a response, at which point the Silverlight app raised a callback. You could even use tools like TPL async/await, or the co-routines available in many MVVM frameworks, to make the code appear very procedural/synchronous while still performing its actions asynchronously.
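Here is a rough JavaScript sketch of the second option above (an async XMLHttpRequest wrapper that raises a callback in Silverlight). The plugin element id, the 'bridge' object name, and its OnSaveCompleted method are placeholders, not parts of your app; on the Silverlight side they would correspond to an object exposed through HtmlPage.RegisterScriptableObject with a [ScriptableMember] method.
function postAsync(url, data) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', url, true);          // true = asynchronous, so IE8 stays responsive
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      // Raise a callback inside the Silverlight app with the result.
      var plugin = document.getElementById('silverlightControl');   // placeholder element id
      plugin.Content.bridge.OnSaveCompleted(xhr.status, xhr.responseText);
    }
  };
  xhr.send(data);
}
The Silverlight code would then do its checkResponse/doOtherThings work inside that callback instead of on the line after "send".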
I hope you have already solved your issue, but maybe this will be helpful to whoever comes across this post.

FB.getLoginStatus never fires the callback function in Facebook's JavaScript SDK

The simple combination of calling FB.init (right before </body>) and then FB.getLoginStatus(callback) never fires the callback function.
After some debugging, I think the SDK is stuck in the "loading" phase (i.e. FB.Auth._loadState == 'loading') and never gets to "loaded", so all callbacks are queued until the SDK has loaded.
If I force-fire the "loaded" event during debugging - with FB.Event.fire('FB.loginStatus', 'loaded'), in case you're interested - then the callbacks are invoked correctly.
Extra details that might be relevant:
My app is a facebook iframe app (loaded via apps.facebook.com/myapp)
I'm using IE9. The same behavior happens in Chrome
The app is hosted in http://localhost
What's going on? Why does the SDK never get to loaded?
Thanks
UPDATE: Just tried it in Chrome and it worked (not sure why it didn't work before). It still doesn't work in IE.
I had this same problem in Firefox 3.5 on Windows, but only on the very first login to the page (probably because it was a slower machine and there were some weird timing issues going on).
I fixed it by forcing FB to refresh the login status cookie every time it checks:
FB.getLoginStatus(callback, true); //second argument forces a refresh from Facebook's server.
Without "force=true", sometimes it wouldn't fire the callback.
I had the exact same problem, and I solved it by disabling "Secure Browsing" in the Facebook security settings. Keeping Secure Browsing on forces the pages to "https", but I had no "Secure Canvas URL" set up, and this gave me a lot of errors in the console as well.
Hope this may help someone :)
In my experience, getLoginStatus() never calls the callback in Firefox when third-party cookies are disabled.
The original poster mentioned his application is hosted on http://localhost. I've never had luck with that, and believe it will cause problems.
Just today, I've had problems where getLoginStatus does not call the callback in any browser unless the user is actually connected to the app! I'm hoping this is a bug on Facebook's end that they will solve.
Yet another possibility for FB.getLoginStatus not firing its callback is when you are using a "test" user account that has not been authorized to view the application. It's pretty bad that Facebook doesn't give you any error messages.
I have also seen failed callbacks on bad appIds and redirectUrls.
I also ran into this issue specifically in Chrome. I tried calling it on page load and after a user-initiated action with no success.
It turned out that it was not a cross-domain issue. The getLoginStatus() call was being blocked by the Un-Passwordise extension in Chrome. As soon as I disabled the extension, it worked perfectly, even on page load.
More info about this issue here: Chrome-only cross-domain scripting errs in Facebook iFrame App upon FB.Login(..)
I understand that this question is a little old now, but I ran across it searching for solutions.
Double-check what you have set in your Facebook app configuration under the section "Website with Facebook Login". The Site URL domain must match the domain that serves the page containing FB.getLoginStatus (and the other related auth JavaScript).
After hours of struggling, I realized that I could not reuse an existing app configuration I had on a new server and had to create a new app to handle the website login for this new server.
The other answers are probably equally valid in your specific case, but since there may be others like me who have struggled for a while on this, hopefully this gives you one other place to check. Making a new app with the correct Site URL was the answer in my particular case.
