Unknown websocket slowing site drastically - reactjs

In a large React project using Webpack, I observe a WebSocket connection that is causing the site to become exceptionally and annoyingly slow. I have tried searching but cannot find the source. I optimised API calls and cut them down, but this specific WebSocket call persists and does not get any smaller. Page load is 1.2 s, but the finish, when the actual loading happens, is over 20 s later.
I wanted to know:
1) Does anyone recognize what is happening, or is this a standard method of downloading content?
2) How can I find what is making this call?
Many thanks

Is this a screenshot from production?
Because the initiator is browser-sync-client.js.
BrowserSync uses a WebSocket connection for hot reloading during development.
The 1.63 s figure is the initial page load time, recorded when the page first loads.
If you are still in development, then there is nothing to worry about.

Related

Vercel/Next.js (seemingly) randomly returns 404s with {"notFound": true}

Intro
Apologies for not being able to provide a reproducible example; our team cannot reproduce the bug reliably. We have been investigating it for almost a week now, but can't seem to make any headway. We just rolled out our Next.js-based headless Shopify store (i.e. we use Next.js for the frontend and Shopify for everything from the checkout onward).
This bug is the strangest thing I have seen with next.js so far and any pointers towards solving the problem are more than appreciated.
Note:
You can navigate to www.everdrop.ch/it and open the console to see some broken links. However, since this is production, we obviously try to fix them as soon as possible.
Problem:
Almost every time we deploy a new version, we see some seemingly random 404s in the console when Next.js tries to prefetch links.
The 404s are always of the form https://domain/_next/data/<DEPLOYMENT>/<PATH>/slug.json, where PATH is sometimes e.g. category-pages and sometimes empty.
Observation 1
When clicking one of the broken links in the console (the .json URL), I get a 404.
Navigating to the broken pages on the client side also gives a 404.
However, requesting the same URL with curl -I -L returns a 200.
Observation 2
When checking the Output data in Vercel
everything works like a charm
Note that the URL is different though. It is the same deployment but at a different URL.
Observation 3
The affected Links are seemingly random. However, some seem to be more likely to be affected than others.
Observation 4
Navigating to the page and then refreshing, or directly accessing the page, does produce the properly rendered page. Surprisingly enough, this also results (for most pages, that is) in the disappearance of the initial error.
Observation 5
Rerunning the deployment on vercel oftentimes fixes the problem and many of the broken links will then randomly work. Sometimes this leads to other broken links though.
Background & Stack
We use Storyblok and Shopify as data providers to query during build time. Shopify for product data and Storyblok for page and content data. All affected pages so far have been pages where we pull data from Storyblok during build time (which are all pages other than search and product pages).
We use next-i18next for multi-language localisation. We use ENV variables to control where data comes from to build our different stores.
Many people have reached out to me on LinkedIn asking how we ultimately solved the problem at hand.
I think, generally speaking, the problem occurs when a page build fails at build time (e.g. when you run into API rate limits). This is especially problematic in combination with
fallback: true (https://nextjs.org/docs/api-reference/data-fetching/get-static-paths#fallback-true)
because pages that failed during the build will not get updated later on.
Our Solution
For us, we were able to solve it with:
preventing errors at build-time (we implemented a cache, but your errors might be different)
setting the revalidate param, so that even if pages fail, they will get rebuilt
fallback: blocking (https://nextjs.org/docs/api-reference/data-fetching/get-static-paths#fallback-blocking)
notFound: true (https://nextjs.org/docs/api-reference/data-fetching/get-static-props#notfound)
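Putting those pieces together, an affected page might look roughly like this. This is a sketch, not our exact code: fetchPageFromStoryblok stands in for the real Storyblok query used at build time, and the 60-second revalidate window is illustrative. In the real page file, both functions are exported.

```javascript
// Sketch of a pages/[slug].js that self-heals after a failed build.
// fetchPageFromStoryblok is a placeholder for the real Storyblok API call.
async function fetchPageFromStoryblok(slug) {
  // placeholder: the real implementation queries the Storyblok API
  return slug === 'missing' ? null : { slug, title: 'Example page' };
}

async function getStaticPaths() {
  return {
    paths: [],            // build nothing up front; render pages on demand
    fallback: 'blocking', // serve a fully rendered page for unknown paths
  };
}

async function getStaticProps({ params }) {
  const page = await fetchPageFromStoryblok(params.slug);
  if (!page) {
    // a failed fetch becomes a 404 instead of a permanently broken page,
    // and revalidate means it will be retried later
    return { notFound: true, revalidate: 60 };
  }
  // revalidate: even pages that built fine get rebuilt periodically, so a
  // bad build doesn't serve stale JSON forever
  return { props: { page }, revalidate: 60 };
}
```

The key design point is that no code path "locks in" a broken page: every outcome, success or failure, carries a revalidate window.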

How does WhatsApp Web deliver updates?

I'm wondering how WhatsApp Web delivers updates.
Do you ever notice a green card appearing on the left sometimes, asking you to click a link to refresh the page and run the freshly updated WhatsApp Web code?
I'm almost sure they use webpack, service workers, etc.
Chances are you have already had cache problems with webpack, where the page stays cached even after refreshing.
So how did WhatsApp Web solve this issue with a single refresh link?
They use a service worker; when the service worker gets updated, they trigger something in the React app. It's easy to do:
serviceWorker.register({ onUpdate: () => { console.log('new service worker'); } });
Just dispatch something instead of the console.log.
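A slightly fuller sketch of that flow, assuming the CRA-style serviceWorker.register API and a worker that handles a SKIP_WAITING message; the banner and reload hooks are placeholders for your own UI:

```javascript
// Sketch: surface a "refresh" banner when a new service worker is waiting,
// then activate it and reload when the user clicks. showBanner is whatever
// UI you use (the "green card"); it receives the click handler to run.
function makeUpdateHandler(showBanner, reload) {
  return (registration) => {
    showBanner(() => {
      // ask the waiting worker to take over immediately (the worker must
      // listen for this message and call self.skipWaiting())
      if (registration.waiting) {
        registration.waiting.postMessage({ type: 'SKIP_WAITING' });
      }
      // then reload so the page is served by the new worker
      reload();
    });
  };
}

// usage in the app entry point (names are illustrative):
// serviceWorker.register({
//   onUpdate: makeUpdateHandler(renderGreenCard, () => window.location.reload()),
// });
```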
Webpack is a build tool and isn't involved anywhere on a live site. While it offers Hot Module Replacement for the development server, you will not get it in the production build.
Unlike traditional desktop applications, delivering updates for websites is as straightforward as updating the files on your server (and invalidating any browser caches). You don't need to notify the user to download something, a simple refresh will get the new pages.
If you really want instantaneous updates (without waiting for the user to refresh the page) you can create some sort of WebSocket communication which when a message is received triggers a browser refresh. Nothing special and no deployment mechanisms involved.
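A minimal sketch of that WebSocket idea; the endpoint name and message format are made up for illustration, and the socket is injected only so the reload hook stays testable:

```javascript
// Reload the page when the server announces a new deployment over a
// WebSocket; any other message is ignored.
function watchForUpdates(socket, reload) {
  socket.onmessage = (event) => {
    if (event.data === 'new-version') reload();
  };
}

// usage in the browser (endpoint is hypothetical):
// watchForUpdates(new WebSocket('wss://example.com/updates'),
//                 () => window.location.reload());
```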

Is there a way to control the Service Worker in create-react-app in terms of fetching - it is caching 2000 HTTP requests after initial load

My app: I am using Create React App; it works 100% and passes all PWA Lighthouse tests.
My app has 1000 audio clips and 1000 SVGs, each small in size, but together they amount to about 10 MB.
I enable the service worker because I want a PWA, i.e. something installable. I have tried lazy/Suspense, but that only helped with loading a particular component.
The app showed "still loading" while the fetches were still happening in the background. For somebody on a 3G or slower connection, or on an older phone (as I have tested), it just crashes.
Is there a way I can still have a PWA and keep some control over how quickly it fetches the rest of the assets?
I also had react-snap installed to try and help, but as I realized, that is not related.
The fetching needs to be more "gentle", or something along those lines. How should I go about handling this situation?

using img-ngsrc in Android for large dynamic images is causing future HTTP requests to get queued in pending state

I am developing an AngularJS application using Ionic. For Android, I am using Crosswalk for better performance.
I've noticed that when running on Android, HTTP requests get stuck when loading large images: a request gets "stuck" (no error, but the Chrome developer inspector shows it as "pending"), and then all subsequent requests go into the "pending" state too. This problem does not exist on iOS.
The code is pretty simple:
<span ng-repeat="monitor in monitors">
<img ng-src="http://server.com/monitorId={{monitor}}?view=jpg" />
</span>
This results in around 6 GETs of images of size 600x400 and the images keep changing (the server keeps changing the image)
What I've observed specifically with Android is after a few successful iterations, the network HTTP GET behind this img ng-src gets stuck in pending like I said above and then all subsequent HTTP requests also get into pending and none of them ever get out of that state.
I am guessing there is some sort of limit for network queue that is getting filled up.
So how do I solve this issue?
a) One way I can think of is a timeout, but ng-src does not seem to have a timeout option. My thought is that on timeout the HTTP request would be cancelled, as with normal $http.get calls, and this should help.
b) Maybe there is a way to flush all HTTP requests. I saw on SO that someone created a directive for this: AngularJS abort all pending $http requests on route change. But that requires replacing $http with the new directive, while I am using img ng-src.
c) Neither a nor b is ideal. I'd like to know what is really going on: why does Android balk at this while iOS does not (comparing a Galaxy S3 with an iPhone 5s)? If you have any other solutions, I'd love to hear them.
Thanks
Wow, this was quite a learning experience. I managed to implement a workaround.
Edited: For those who think this is due to the limitation of 6 connections- please go through https://code.google.com/p/chromium/issues/detail?id=234779
The problem, specifically, is that Chrome (at least with Crosswalk, and maybe Chrome in general) struggles when you open multiple streaming HTTP connections that don't close for a long time. In my case the img ng-src was pointing to an image URL that the server was changing 3 times a second. Each image takes a second or two to download, so data keeps streaming in.
Something about this puts Chrome in a tizzy, and it gets into an eternal pending loop for every HTTP request after the first pending one, even unrelated HTTP requests.
Fortunately, I managed to implement a workaround: the server had an option to serve just one static image (not the dynamic stream). I used that URL and implemented a $interval timer in the controller that refreshes that URL every second, effectively retrieving a new image every second (or at any other interval I want).
Chrome has NO problem dealing with the HTTP requests in this way because they are getting closed predictably, even if it means more HTTP requests.
Phew. Not the solution I'd want, but it works very well.
And the gallant iOS handles this well too (it handled the original scenario perfectly too)
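The workaround can be sketched framework-free like this. The snapshot URL and one-second interval are illustrative (in the actual app the timer was an AngularJS $interval in the controller), and the timer function is injectable only so the sketch is testable:

```javascript
// Poll a static snapshot URL on a timer instead of holding one long-lived
// streaming request open, so every request opens and closes predictably.
function startPolling(baseUrl, setImgSrc, intervalMs, timer = setInterval) {
  return timer(() => {
    // cache-bust with a timestamp so the <img> element actually refetches
    setImgSrc(baseUrl + '?t=' + Date.now());
  }, intervalMs);
}

// usage in the browser (URL is hypothetical):
// startPolling('http://server.com/snapshot', src => { img.src = src; }, 1000);
```

This trades a few extra requests per second for predictable connection lifetimes, which is exactly what kept Chrome out of the pending loop.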

IE8 freeze caused by long synchronous xmlhttprequest from silverlight

I'm having an issue where a long synchronous request will freeze Internet Explorer.
Let me explain the context: this is a web application which only supports IE8 and which can only use synchronous requests*.
We have a Silverlight component with a save button. When the user presses the button, some information is sent to the server using a synchronous XMLHttpRequest, then some other actions are done client-side in the Silverlight component.
The server-side part includes calculations that will sometime take a while (several minutes).
In short, here is the code (the C# Silverlight part):
ScriptObject _XMLHttpRequest;
_XMLHttpRequest.Invoke("open", "POST", url, false);
_XMLHttpRequest.Invoke("send", data);
checkResponse(_XMLHttpRequest);
doOtherThings();
I know that the server does its work properly because I can see in the verbose logs, the end of the page rendering for the "url" called from Silverlight.
However, in debug mode I can see that I never reach the checkResponse line. After the send call, IE freezes forever and does not unfreeze even once the server log shows that the url has been processed.
I also tried adding _XMLHttpRequest.SetParameter("timeout", 5000) between the open and send calls. IE then freezes for 5 seconds, after which checkResponse and doOtherThings are executed. Then IE freezes again while the server-side calculations run and does not unfreeze once the server is done with its work.
IE's timeout is supposed to be 3 hours (registry key ReceiveTimeout set to 10800000), and I also removed IE's two-connection limit (MaxConnectionsPer1_0Server and MaxConnectionsPerServer set to 20).
Last important information : there is no issue when the server-side part only takes a few seconds instead of several minutes.
Do you know where the freeze could come from (IE bug, XMLHttpRequest bug, something I have done wrong) and how I can avoid this ?
Thank you !
Kévin B
*(While trying to solve my issue with the help of Google, I found an incredible number of "use async" and "sync is bad" posts, but I can't make this change in my app. Switching the application, its AJAX loads, and all server-side calculations to asynchronous is a huge piece of work which has been quoted for our client and is a long-term objective. I need a short-term fix for now.)
Silverlight virtually requires that everything be done asynchronously; any long-running synchronous process will hang the browser if run on the UI thread.
If you never reach the checkResponse line of code, it is possible that an unhandled exception was thrown on the previous line and is being swallowed. You can check the dev tools of your browser to see if there are any JavaScript errors.
I am surprised that calling XMLHttpRequest synchronously works at all, since I would expect it to lock up the UI thread. But the solution depends on your definition of async.
You can try:
calling the sync XHR request on a background thread and then marshalling to the UI thread (eg with Dispatcher.BeginInvoke) when you are ready
setting up an XMLHttpRequest wrapper that makes the call in async mode and raises a callback in Silverlight on completion
using HttpClient or WebClient
While these options are async, they don't require your server code to change (it can stay synchronous). I have seen a web server process a call for over an hour before finally returning a response, at which time the Silverlight app raised a callback. You could even use tools like TPL async/await, or the co-routines available in many MVVM frameworks, to make the code appear very procedural/synchronous while still performing its actions asynchronously.
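The second option, an async XMLHttpRequest wrapper on the JavaScript side that the Silverlight HTML bridge invokes, could look roughly like this. The callback names are made up, and the XHR factory parameter exists only so the sketch is testable outside a browser; in the page you would omit it:

```javascript
// Async XHR wrapper: the browser UI thread stays responsive while the
// server works; onComplete/onError fire when the response finally arrives
// (Silverlight can expose scriptable members for these callbacks).
function postAsync(url, data, onComplete, onError, createXhr) {
  const xhr = createXhr ? createXhr() : new XMLHttpRequest();
  xhr.open('POST', url, true); // true = asynchronous, no UI freeze
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return; // wait for the request to finish
    if (xhr.status >= 200 && xhr.status < 300) onComplete(xhr.responseText);
    else onError(xhr.status);
  };
  xhr.send(data);
}
```

The server-side calculation can still take minutes; only the browser-side call changes from blocking to callback-driven.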
I hope you already solved your issue, but maybe this will be helpful to whomever may come across this post.
