Why does Script Evaluation take so long?

I'm using Gatsby and have tried to optimize my webpage (https://www.rün.run) as much as possible. Running PageSpeed gives me some decent results. What I'm wondering is: why does Script Evaluation take so long? My JS bundle is only 257 kB (gzipped).
It looks like the React hydration is taking the time. So is it because of React? Or does my DOM tree have too many elements?
Direct link to PageSpeed: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.xn--rn-xka.run%2FKalista%2Fadc&tab=mobile
My goal is to reach a score of 100 on mobile as well. How can I improve any further?

Yes, it seems that rehydration takes this time. I have tried it with a simple component which contains just some text, and still there is some (quite long for that case) main-thread work and JavaScript execution time.
The same thing happens if you test https://store.gatsbyjs.org/
But anyway, even with this issue, the results with Gatsby are really great.
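For context, hydration is the step where React attaches to HTML that was already rendered at build time, and it has to walk the entire DOM tree on the main thread. A minimal sketch of what that entry point looks like (file and component names here are illustrative, not Gatsby's actual internals):

    // entry.js (hypothetical entry point, not Gatsby's real internals)
    import React from "react";
    import ReactDOM from "react-dom";
    import App from "./App";

    // The HTML inside #root was already rendered at build time.
    // hydrate() does not recreate that markup; it walks the existing DOM
    // and attaches React's event listeners to it. This traversal runs on
    // the main thread and scales with the number of DOM nodes, which is
    // why a large page can show a long "Script Evaluation" even with a
    // small bundle.
    ReactDOM.hydrate(<App />, document.getElementById("root"));

This is why a bigger DOM tree, not just a bigger bundle, can drive the Script Evaluation number up.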

Related

How to stop Next duplicating HTTP requests for my project? Huge payloads affecting performance

Been really loving Next.js so far, but I'm running into this issue where my static requests are being duplicated (as per the first screenshot below from PageSpeed Insights; this is also just a subset of what's happening, as it's also duplicating CSS files).
I couldn't figure out why this is happening, so I created a fresh new Next.js project, and the same issue is happening (screenshot 2 is directly from inspecting the browser and looking at the individual network requests).
The new project is extremely minimal, with barely any code. I made sure that I'm not importing anything twice or in different places. What could be causing this to happen? (For reference, I copied most of the code to Gatsby/Firebase, and when I deployed it, this issue doesn't happen even though the code is almost exactly identical.)
Any help appreciated. Thanks.
If anyone else runs into this issue: as of this moment, this is a bug to do with the automatic preloading (prefetch) prop of the <Link> component from Next.js.
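If you just need to suppress the duplicates until the bug is fixed, next/link's documented prefetch prop can turn the automatic preloading off per link. A sketch:

    // Disabling automatic prefetching on a next/link, as a workaround.
    import Link from "next/link";

    export default function Nav() {
      return (
        // prefetch={false} stops Next.js from preloading this route's
        // assets in the background, which avoids the duplicated requests
        // while the underlying bug is open. (Prefetching only happens in
        // production builds anyway.)
        <Link href="/about" prefetch={false}>
          <a>About</a>
        </Link>
      );
    }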

Why is React.lazy adding time to first load

I have a small but fun-to-develop app. It was a quick experiment to learn a bit more about Redux and React, and I got to the point where I consider the app mature enough to start learning about optimization.
I did some pure-component optimization attempts, but they didn't improve the time to first load, so I moved on.
The next optimization I tried was using React.lazy to lazy-load some components that I don't need at first load. For example, I have an error component that I only need if I have to show an unlikely error, so that is what I did, and surprisingly (according to Lighthouse) all the first-load metrics (Time to Interactive, First Meaningful Paint, etc.) got way worse.
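For reference, the split described above looks roughly like this (component and file names are mine, not taken from the app):

    import React, { lazy, Suspense } from "react";

    // Only the import() below becomes a separate chunk; it is fetched the
    // first time <ErrorBanner> actually renders, not on initial page load.
    // ErrorBanner.js is assumed to have a default export.
    const ErrorBanner = lazy(() => import("./ErrorBanner"));

    export default function App({ error }) {
      return (
        <Suspense fallback={null}>
          {error && <ErrorBanner message={error.message} />}
        </Suspense>
      );
    }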
Here is a screenshot of the report before trying React.lazy:
As you can see, from the performance point of view there was not much to improve, but as I was trying to learn modern React, I tried anyway. Here is the best I have been able to get using React.lazy to split off one component:
As you can see, way worse. The problems it detected were not all related to caching policies; here they are:
It seems that the main thread is getting busier parsing all the JavaScript. It makes no sense to me, because going to Chrome DevTools and inspecting the network requests in detail (on the Performance tab), the resulting bundles are downloaded in parallel. However, the bundles in both versions are almost the same size, except that the split version of the application has 5 chunks instead of 2:
First bundle without code split:
URL: bundle.js, Duration: 45.62 ms, Method: GET, Priority: High, MIME type: application/javascript, Encoded data: 6.5 KB, Decoded body: 31.0 KB

First bundle with React.lazy split:
URL: bundle.js, Duration: 28.63 ms, Method: GET, Priority: High, MIME type: application/javascript, Encoded data: 7.1 KB, Decoded body: 33.7 KB

First downloaded chunk (without code split):
URL: 0.chunk.js, Duration: 255.83 ms, Method: GET, Priority: High, MIME type: application/javascript, Encoded data: 579 KB, Decoded body: 2.7 MB

First chunk with React.lazy split (labeled 5, but it is actually the first):
URL: 5.chunk.js, Duration: 276.40 ms, Method: GET, Priority: High, MIME type: application/javascript, Encoded data: 559 KB, Decoded body: 2.6 MB
My conclusion is that React.lazy introduces a significant overhead that only pays off if the size of the loaded components is big enough.
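If the split itself is worth keeping, one way to hide the extra round trip (assuming webpack is the bundler, as it is under CRA) is webpack's prefetch hint, which fetches the chunk at idle priority after the initial load instead of on demand:

    // The magic comment makes webpack emit <link rel="prefetch"> for the
    // chunk, so the browser downloads it during idle time rather than at
    // the moment the component first renders.
    const ErrorBanner = React.lazy(() =>
      import(/* webpackPrefetch: true */ "./ErrorBanner")
    );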
However, does that mean that big applications can never score high on first paint?
I made some bigger apps with Vue that scored almost 90 on performance, so I'm pretty sure I'm doing something wrong here.
Something to mention is that the first screenshot is being served from GitHub Pages while the second is being served locally, but that should not influence the problem at hand, should it?
The code for the non split version of the app is publicly available here: https://github.com/danielo515/itunes
The biggest time consumption is in “Script Evaluation”: 1,672 ms. So, try to reduce this time.
1. Analyze the size of your JavaScript and see which libraries you can replace with smaller versions or with pure JavaScript. If you use CRA, try Analyzing the Bundle Size or webpack-bundle-analyzer (a minimal setup is sketched after this list). For example, instead of lodash maybe you can use the smaller lodash-es;
2. Use server-side rendering. Consider using loadable-components (advice from the React docs). But if you use a slow server (or have a low level of caching), this may increase "Time to First Byte";
3. Use pre-rendering into static HTML files.
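A minimal webpack-bundle-analyzer setup, as mentioned in point 1 (a sketch; with un-ejected CRA you would need to wire it in via something like react-app-rewired):

    // webpack.config.js
    const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

    module.exports = {
      // ...your existing entry/output/loader config...
      plugins: [
        // Opens an interactive treemap of what is actually inside each
        // bundle, which makes oversized dependencies easy to spot.
        new BundleAnalyzerPlugin(),
      ],
    };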
Also, a very useful tool for web-page speed analysis is webpagetest.org.

AngularJS app takes 3 minutes to load in Internet Explorer

Background:
I have built a tool using AngularJS. The user is able to view a dynamic page based on the data. The page can be really long, with lots of various types of fields within it. There are many and various AngularJS components used within the app.
Issue:
If a user has got lots of data (which is shown within various input fields/date fields etc.; around 500 fields), then the page takes around 3 minutes to load in IE (IE11 is my browser). The same page loads within 20 seconds in Chrome and Firefox. I have been trying for almost a month now to find the issue, but still no luck.
I am very desperate for a solution. I couldn't find any tool that would show me what is causing my page to take so long to load.
Well, first things first, you'll need to profile what is actually taking so long. I suggest you check out
https://github.com/angular/batarang
to do this. A good article that goes over its use is available at http://daginge.com/technology/2013/08/11/debugging-performance-problems-in-angularjs-with-batarang/
It's too long to include in this answer, but the general flow of resolving this is probably going to involve streamlining the watchers involved in each of those component fields. Rendering 500 fields at once seems somewhat unnecessary; pagination would probably be an easy fix (a sketch follows below). Limit the data to maybe 50 fields per page? You'll need to track whether it's the actual queries you're running to get the data that are taking so long, although based on the difference between IE and Chrome, I would guess it's going to be something in the browser. It could also be that IE is being forced to use some polyfills for functionality that Chrome and FF supply natively. Maybe link to your repo for us to have a look?
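To make the pagination idea concrete, here is a sketch using stock AngularJS features; all names are illustrative, and the third argument of limitTo needs AngularJS 1.4+:

    // Paginate with the built-in limitTo filter, and use one-time ("::")
    // bindings so AngularJS drops the watcher for values that never
    // change after the first digest. With 500 fields, cutting watcher
    // count is usually what makes IE usable again.
    angular.module("app").directive("fieldList", function () {
      return {
        restrict: "E",
        scope: { fields: "=" },
        template:
          '<div ng-repeat="field in fields | limitTo : 50 : offset">' +
          "  <label>{{::field.label}}</label>" +
          '  <input ng-model="field.value">' +
          "</div>" +
          '<button ng-click="offset = offset + 50">Next 50</button>',
        link: function (scope) {
          scope.offset = 0; // start at the first page of 50 fields
        },
      };
    });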

Angular directives causing long load times

We currently have a large, complex application built using Angular (1.3).
I have a page that is using maybe 20 custom directives (nested).
I'm finding the load time (Angular bootstrap) is very slow, especially on Android.
Using Chrome timeline profiling, I can see the Angular bootstrap is taking about 800 ms on desktop, but about 8 seconds on Android (using remote debugging).
This is on a fairly new Android phone (Samsung S5). However, on an iPhone 5 it takes no more than 4 seconds.
My question is: does directive compilation have to take that long? I don't think my directives' link functions are actually taking long to execute. Will replacing directives with a combination of ng-include/ng-controller make it better? Will replacing 20 directives with one big directive make a difference?
Why would the mobile Chrome browser be so much less performant than the iOS Safari browser on very similar hardware?
Thanks,
It is likely due to the high number of requests a browser makes to render your website. Each directive has a template, so that's 20 requests on top of all the others (JS files, images, CSS, etc.). You also have to consider that each directive/template is loaded when its parent directive/template is loaded, since they are nested, and this can make a huge impact on performance.
It may also load faster on desktop and vary on mobile based on a browser's simultaneous connection limit. I believe for iOS Safari it is 6, and for mobile Chrome it is 4.
You need to reduce the amount of template loading and ensure you are making use of the browser's cache. In your case it may be better to combine some directives to improve performance.
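One concrete way to cut those template requests is to pre-populate AngularJS's $templateCache, so nested directives resolve their templateUrl from memory instead of the network. A sketch (the template name and markup are placeholders):

    // Any directive with templateUrl: "fieldRow.html" now resolves from
    // the cache instead of issuing an HTTP request per template. Build
    // tools such as ng-html2js or gulp-angular-templatecache can generate
    // this module from your .html files automatically.
    angular.module("app").run(function ($templateCache) {
      $templateCache.put(
        "fieldRow.html",
        '<div class="row"><label>{{field.label}}</label></div>'
      );
    });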
Your first question:
My question is: does directive compilation have to take that long?
No. AngularJS can handle much more than 20 custom directives quickly, in my experience.
I don't think my directives' link functions are actually taking long to execute.
I think you may have run into issues and probably need to spend time on optimization. Maybe too much DOM manipulation or HTML reflow.
Your second question
Will replacing directives with a combination of ng-include/ng-controller make it better?
I don't think so. ng-include/ng-controller are also directives. There is no essential difference.
Your third question
Will replacing 20 directives with one big directive make a difference?
No. The number of directives doesn't matter. What matters is what the directives do.
And your last question: it's a fact that sucks and has tormented me for a long time. I also don't know why.

Why is React's filesize so big given its small API?

Here are the numbers:
min+gzip: 26k
gzip: 90k
original: 450+k
And React doesn't have many features in its documentation. Why is it so big?
I have a feeling that it's the implementation of the lightweight DOM. But I want to be sure.
React does quite a bit! The biggest nonobvious part of React is probably the events system -- not only does React implement its own event dispatching and bubbling, it normalizes common events across browsers so that you don't need to worry as much about it. For example, SelectEventPlugin is a built-in event "plugin" which provides an onSelect event which behaves the same way in all browsers.
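For example, a handler like this behaves the same in every supported browser, with no feature detection on your side (a trivial sketch):

    import React from "react";

    // React's synthetic onSelect fires consistently across browsers,
    // thanks to the SelectEventPlugin mentioned above.
    function Quote() {
      return (
        <textarea
          defaultValue="Select some of this text"
          onSelect={(e) => console.log("selection in", e.target.tagName)}
        />
      );
    }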
The virtual DOM implementation does take a decent amount of code as well; a lot of effort is spent on performance optimization, which is why the unminified version includes ReactPerf, a tool for measuring rendering performance. In updating the DOM, React also does some convenient things for you, like maintaining any input selection and keeping the current scroll position the same.
React also has a few other tricks up its sleeve. One of the coolest is that it fully supports rendering a component to an HTML string even if you don't have a browser environment, so you can send down a page that works even before JS is loaded.
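That server-rendering path is public API via the react-dom/server entry point; a minimal sketch:

    // Runs in Node, no browser required: renderToString produces the
    // complete HTML for <App />, which client-side React can later take
    // over without re-rendering from scratch.
    import React from "react";
    import ReactDOMServer from "react-dom/server";
    import App from "./App";

    const html = ReactDOMServer.renderToString(<App />);
    // Embed `html` in your page shell; the JS attaches to it on load.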
What are you comparing React against? react-15.0.2.min.js is 43k (gzipped), but jQuery is 33k while ember-2.6.0.prod.js is 363k (also gzipped). Obviously these frameworks don't do exactly the same things but they do have a lot of overlap, so I think the comparison is reasonable.
If you're worried about download size, I wouldn't worry too much about JS code contributing to that. Here's an ad which I see on the right side of my Stack Overflow page right now:
Its download size is 95k -- I wouldn't think twice about including an image like that in a page because (even if I was worried about performance) there are usually so many other performance-related things to fix that are more lucrative.
In short, I don't think React is that big, and what size it does have comes from the many small things it does to help you out. You cite React's small API as a reason that React's code should be small, but a better question might be, "How is React's API able to be so simple given all the things it does for you?"
…but that's a totally separate question. :) Hope I answered your question -- happy to expand if I didn't.
A couple of thoughts: I had some of the same concerns about its size, but after using it, no joke, I would use it even if it were 5 MB in size. It's just that good. That said, I did decide to reduce as many dependencies on other libraries as possible. I was using jQuery for two things: Ajax, and the automatic Ajax response and request handling (beforeSend, etc.) that would detect when a token was in a response after login, and then make sure every Ajax request added it to the Authorization header before sending. I replaced this with a super small, simple bit of native JavaScript (sketched below). Works great. Also, I was trying to use Underscore. I've replaced it with Lodash, which is smaller and faster, although presently I am not using it, so I may remove it altogether.
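The jQuery replacement described above can be as small as a fetch wrapper. A sketch of the idea (the header name and token handling are placeholders, not the poster's actual code):

    // Minimal stand-in for jQuery's ajax + beforeSend token handling:
    // remember the token from the login response, then attach it as an
    // Authorization header on every subsequent request.
    let token = null;

    async function api(url, options = {}) {
      const headers = { ...options.headers };
      if (token) headers.Authorization = "Bearer " + token;

      const res = await fetch(url, { ...options, headers });

      // If the server returns a (rotated) token, remember it.
      const fresh = res.headers.get("X-Auth-Token"); // placeholder name
      if (fresh) token = fresh;
      return res;
    }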
Here is one article, of many on Google, that I found with some alternatives to jQuery using native JS:
http://www.sitepoint.com/jquery-vs-raw-javascript-1-dom-forms/
