I have a small but fun-to-develop app. It started as a quick experiment to learn a bit more about Redux and React, and it has reached the point where I consider it mature enough to start learning about optimization.
I made some pure-component optimization attempts, but they didn't improve the time to first load, so I moved on.
The next optimization I tried was using React.lazy to lazy load some components that I don't need on the first render. For example, I have an error component that I only need if I have to show an unlikely error, so that is what I split out. Surprisingly (according to Lighthouse), all the first-load metrics (Time to Interactive, First Meaningful Paint, etc.) got way worse.
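Concretely, the split looked roughly like this (a sketch; the ErrorBanner name and file path are just stand-ins for my actual error component):

import React, { Suspense, lazy } from "react";

// The error component lives in its own chunk and is only fetched
// if an error actually has to be rendered.
const ErrorBanner = lazy(() => import("./ErrorBanner"));

function App({ error }) {
  return (
    <div>
      {/* ...the normal UI, shipped in the main bundle... */}
      {error && (
        <Suspense fallback={null}>
          <ErrorBanner message={error.message} />
        </Suspense>
      )}
    </div>
  );
}

export default App;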
Here is a screenshot of the report before trying to use React.lazy:
As you can see, from the performance point of view there was not much to improve, but as I was trying to learn modern React, I tried anyway. Here is the best I have been able to get using React.lazy to split one component:
As you can see, it is way worse. Not all of the problems it detected were related to caching policies; here they are:
It seems that the main thread gets busier parsing all the JavaScript. That makes no sense to me, because when I go to Chrome DevTools and inspect the network requests in detail (on the Performance tab), the resulting bundles are downloaded in parallel. The bundles in both versions are almost the same size, except that the split version of the application has 5 chunks instead of 2:
First bundle without code split:
URL: bundle.js
Duration: 45.62 ms
Request Method: GET
Priority: High
Mime Type: application/javascript
Encoded Data: 6.5 KB
Decoded Body: 31.0 KB
First bundle with React.lazy split:
URL: bundle.js
Duration: 28.63 ms
Request Method: GET
Priority: High
Mime Type: application/javascript
Encoded Data: 7.1 KB
Decoded Body: 33.7 KB
First downloaded chunk:
URL: 0.chunk.js
Duration: 255.83 ms
Request Method: GET
Priority: High
Mime Type: application/javascript
Encoded Data: 579 KB
Decoded Body: 2.7 MB
First chunk with React.lazy split (it is labeled 5, but it is actually the first):
URL: 5.chunk.js
Duration: 276.40 ms
Request Method: GET
Priority: High
Mime Type: application/javascript
Encoded Data: 559 KB
Decoded Body: 2.6 MB
My conclusion is that React.lazy introduces a significant overhead that only pays off if the size of the lazily loaded components is big enough.
However, does that mean that big applications can never score high on first paint?
I made some bigger apps with Vue that got almost 90 on performance, so I'm pretty sure I'm doing something wrong here.
Something to mention is that the first screenshot is being served from GitHub Pages while the second is being served locally, but that should not influence the problem at hand, should it?
The code for the non-split version of the app is publicly available here: https://github.com/danielo515/itunes
The biggest time consumption is “Script Evaluation” at 1.672 ms, so try to reduce that time.
Analyze the size of your JavaScript and see which libraries you can replace with a smaller version or with plain JavaScript. If you use CRA, try Analyzing the Bundle Size or webpack-bundle-analyzer. For example, instead of lodash maybe you can use the smaller lodash-es (see the sketch after this list);
Use server-side rendering. Consider using loadable-components (advice from the React docs). But if you use a slow server (or a low level of caching), it may increase your "Time to First Byte";
Use pre-rendering into static HTML files.
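For example, the lodash swap from the first point is usually just an import change; a sketch (debounce and the handleResize handler are only illustrative):

// Before: pulls in much more of lodash than is used
// import _ from "lodash";
// const onResize = _.debounce(handleResize, 200);

// After: import only what you need from the ES-module build so the rest can be tree-shaken
import debounce from "lodash-es/debounce";

const handleResize = () => console.log("resized"); // placeholder handler
const onResize = debounce(handleResize, 200);
window.addEventListener("resize", onResize);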
Also, a very useful tool for web-page speed analysis is webpagetest.org.
I'm using DatoCMS and NextJS to build a website. DatoCMS uses Mux behind the scenes to process the video.
The video that comes through is fairly well optimised for whatever browser is being used, and potentially for ABR with HLS; however, it still can take a fair bit of time on the initial load.
The JSON from Dato includes some potentially useful other things:
"video": {
"mp4Url": "https://stream.mux.com/6V48g3boltSf5uQRB8HnelvtPglzZzYu/medium.mp4",
"streamingUrl": "https://stream.mux.com/6V48g3boltSf5uQRB8HnelvtPglzZzYu.m3u8",
"thumbnailUrl": "https://image.mux.com/6V48g3boltSf5uQRB8HnelvtPglzZzYu/thumbnail.jpg"
},
"id": "44785585",
"blurUpThumb": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAoHBwgHBhUICAgLCgoXDhgVDhkNDh0VGhUeFxUZHSIfGxUmKzcjHh0oHRwWJDUlKDkvMjIyGSI4PTcwPCsxMi8BCgsLDg0OHBAQHDsoIig7NTs7Ozs7Ozs7Ozs7Ly8vOzs1Ozs7Ozs7Ozs1NTU7Ozs1Ozs7OzUvLzsvLy8vLy8vL//AABEIAA8AGAMBIgACEQEDEQH/xAAYAAACAwAAAAAAAAAAAAAAAAAEBQACBv/EAB4QAAICAgIDAAAAAAAAAAAAAAECABEEBQMGEiIy/8QAFwEAAwEAAAAAAAAAAAAAAAAAAgMFAP/EABsRAAIDAQEBAAAAAAAAAAAAAAECAAMRIUEE/9oADAMBAAIRAxEAPwDIYui48Y+saphApUQL2ZHNeJELTfALdGE943pl2m+gDFPJfP0qc/1JAMntKA0FYyTC9vIt2+JzrZP/2Q=="
}
With either next/image, or the more proprietary react-datocms/image, that blurUpThumb could be used as a placeholder while the full image is being loaded in the background, to improve UX, and (I believe) page-load speed / time to interactive.
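For the image case, that wiring looks roughly like this (a sketch with next/image; the width/height values are placeholders for the real asset's dimensions):

import Image from "next/image";

// `video` is the object from the JSON above; blurUpThumb is shown instantly,
// then swapped for the real thumbnail once it has loaded.
export function VideoPoster({ video }) {
  return (
    <Image
      src={video.thumbnailUrl}
      placeholder="blur"
      blurDataURL={video.blurUpThumb}
      width={1280}
      height={720}
      alt=""
    />
  );
}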
Is there a way to achieve that same effect with the video instead of an image file?
The usual way an ABR video (HLS, DASH, etc.) can start faster is by starting with one of the lower resolutions and stepping up to a higher resolution after the first couple of segments, once the video is playing and there is more time to buffer.
However, in your case the example video is very short (13 seconds), so the effect is pretty minimal. Playing it in Safari on a Mac, I saw the step happen at 4 seconds, which is already almost a third of the way through in this case.
Short of re-encoding with a lower resolution or some special codecs, I think you may find this hard to beat; Mux is a pretty mature video streaming service.
The direct links to the videos above loaded and played quite quickly for me, even over a relatively slow internet connection. It might be worth looking at what else your page is loading at the same time, as this may be competing for bandwidth and slowing things down.
Recently I have been trying to optimize the performance of a web app (React). It is somewhat heavy, as it consists of code editors, Firebase, SQL, the AWS SDK, etc. So I integrated react-loadable, which lazy loads the components, and after that I got this JavaScript heap out of memory issue:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory in React
After some research (and a tip from a friend), I learned that if we have too many lazy-loaded routes, webpack will try to bundle them in parallel, which might be the cause of the JavaScript heap memory issue. To confirm that, I removed all lazy-loaded routes from my app and built it, and the build succeeded. Later, as suggested by the community, I increased the Node heap space size and got the insights below.
First I increased it to 8 GB (8192); the build succeeded with a build time of around 72 minutes, and from then on I got around 20 minutes. Then I decreased the heap size to 4 GB (4096); the build still succeeds and takes around 15-20 minutes. The system configuration is 2 vCPUs and 16 GB RAM (AWS EC2 r5a.large instance).
Next, I ran the build on another system (MacBook Pro, i5, 8 GB RAM, 4 cores). The first time it took 30 minutes, the second time 20 minutes.
So from these data points, I have a couple of questions:
Do we need to keep increasing the heap space whenever we add more code? If yes, what heap size do people in the community typically use?
What is the usual build-machine configuration for these kinds of heavy apps? Right now I am not sure whether to increase the number of cores, the RAM, the heap space, or whether it is really something to fix in our app code.
Does webpack provide any kind of solution to avoid the heap memory issue, such as limiting parallel processes, or any plugins?
If it is related to our app code, is there a standard process to debug where the memory is going and to optimize based on that?
PS: Some people suggested setting GENERATE_SOURCEMAP=false; that worked, but we need source maps as they are helpful for debugging production issues.
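For reference, I was raising the heap limit through the build script, roughly like this (a sketch of the package.json scripts; the script name and the 4096 value are just from my tests above):

{
  "scripts": {
    "build": "react-scripts build",
    "build:big-heap": "NODE_OPTIONS=--max-old-space-size=4096 react-scripts build"
  }
}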
Finally, I was able to resolve the heap out of memory issue without increasing the heap memory space.
As mentioned in the question, if I removed all lazy routes the build succeeded; otherwise I had to keep a 4 GB heap to get it to succeed, with a lot of build time. When the build succeeded with the 4 GB heap, I observed that roughly 8-10 chunk files were nearly 1 MB each. So I analyzed all those chunks using source-map-explorer. Almost the same library code was included in every chunk (in my case Firebase, a video player, etc., which are heavy).
So my assumption is that when webpack tries to bundle all these chunks, it has to build the dependency graph of those libraries for every chunk, which in turn causes the heap memory issue. So I used loadable-components to lazy load those libraries themselves.
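What that looked like, roughly (a sketch with loadable-components; the VideoPlayer module name is just an example of one of the heavy pieces):

import React from "react";
import loadable from "@loadable/component";

// The player (and the heavy video library it pulls in) gets its own chunk,
// fetched only when this component is first rendered.
const VideoPlayer = loadable(() => import("./VideoPlayer"), {
  fallback: <p>Loading player…</p>,
});

export function Lesson({ src }) {
  return <VideoPlayer src={src} />;
}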
After lazy loading all those libraries, the chunk file sizes were reduced almost by half, the build succeeds without increasing any heap space, and the build time also went down.
After the optimization, if I run the build on a 6 vCPU, i7 system it takes around 3-4 minutes, and I observed that the build time goes down as the number of cores available in the system goes up. If I run the build on a 2 vCPU system, it sometimes takes around 20-25 minutes.
Vanilla webpack has been developed for monolithic builds. Its main purpose is to take many modules and bundle them into ONE (not many). If you want to keep things modular, you want to use webpack Module Federation (WMF):
WMF allows you to develop independent packages that can easily depend on (and lazy load) each other.
These packages will automatically share dependencies between each other.
Without WMF, webpack allows none of the above.
Short Example
A library package app2 provides a component Button
An application package app1 consumes it.
When the time comes, app1 requests a component using dynamic import.
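Under the hood, that relationship is declared with ModuleFederationPlugin in each package's webpack config. A minimal sketch (the port, paths, and shared settings here are assumptions, not a copy of any real config):

const { ModuleFederationPlugin } = require("webpack").container;

// In app2's webpack.config.js: expose Button so other builds can load it.
const app2Federation = new ModuleFederationPlugin({
  name: "app2",
  filename: "remoteEntry.js",
  exposes: { "./Button": "./src/Button" },
  shared: { react: { singleton: true }, "react-dom": { singleton: true } },
});

// In app1's webpack.config.js: map the "app2" prefix to app2's remote entry,
// so that import("app2/Button") can be resolved at runtime.
const app1Federation = new ModuleFederationPlugin({
  name: "app1",
  remotes: { app2: "app2@http://localhost:3002/remoteEntry.js" },
  shared: { react: { singleton: true }, "react-dom": { singleton: true } },
});

// Each plugin instance then goes into the `plugins` array of its own config.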
You can wrap the load using React.lazy, like so:
const RemoteButton = React.lazy(() => import("app2/Button"));
E.g., you can do this in a useEffect, or a Route.render callback etc.
app1 can use that component, once it's loaded. While loading, you might want to show a loading screen (e.g. using Suspense):
<React.Suspense fallback={<LoadingScreen />}>
<RemoteButton />
</React.Suspense>
Alternatively, instead of using lazy and Suspense, just take the promise returned from the import(...) statement and handle the asynchronous loading any way you prefer. Of course, WMF is not at all restricted to react and can load any module dynamically.
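A bare-bones version of that, without lazy and Suspense (a sketch):

// Load the remote module yourself and decide how to render while it's pending.
import("app2/Button").then(({ default: Button }) => {
  // e.g. put Button into component state and render it on the next pass
});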
On the flip side, WMF dynamic loading must use dynamic import (i.e. import(...)), because:
non-dynamic imports will always resolve at load time (thus making it a non-dynamic dependency), and
"dynamic require" cannot be bundled by webpack since browsers have no concept of commonjs (unless you use some hacks, in which case, you will lose the relevant "loading promise").
Documentation
Even though, in my experience, WMF is mature, easy to use, and probably production-ready, its official documentation is currently only a not-too-polished collection of conceptual notes. That is why I would recommend this as a "Getting Started" guide.
I'm using Gatsby and have tried to optimize my webpage (https://www.rün.run) as much as possible. Running PageSpeed gives me some decent results. What I'm wondering is: why is Script Evaluation taking so long? My JS bundle is only 257 kB (gzipped).
It looks like the React hydration is taking the time. So is it because of React? Or does my DOM tree have too many elements?
Direct link to PageSpeed: https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.xn--rn-xka.run%2FKalista%2Fadc&tab=mobile
My goal is also to get a score of 100 on mobile. How can I improve any further?
Yes, it seems that rehydration takes this time. I have tried it with a simple component that contains just text, and still there is some (quite long for that case) main-thread work and JavaScript execution time.
And you see the same thing if you test https://store.gatsbyjs.org/
But anyway, even with this issue, the results with Gatsby are really great.
I am building an educational app that has around 1100 SVGs. They are very small, about 800 bytes per SVG. I am quite new to React.
For each SVG I have an audio clip.
Therefore:
1100 SVGs
1100 audio clips
I am using create-react-app.
I intend to use howler.js to make sure the audio files are cached.
I also want to lazy load the SVGs.
Does create-react-app cache the SVGs?
Should I change my approach? Maybe merge the SVGs into a sprite?
Should I merge the MP3s and play parts of the audio as needed?
My main requirement is that the audio and SVGs get installed when the user installs the PWA.
Any feedback will be appreciated
There is no magic bullet to your problem, but here are some key points to consider:
It won't take long to make a request and get a response for each SVG/audio pair the user selects (on demand, not all 1100 at once).
Not knowing your app's logic and UX, I can extrapolate that loading 3 MB of SVGs and 10 MB of audio up front will be more noticeable than waiting for an on-demand request/response of 11 KB.
As a background job, try to preload as many files as you can while there are no pending requests from the user.
Cache everything that has already been loaded to avoid repeated requests (see the sketch after this list).
Try to optimize your sources in the first place, since minimal changes to an individual file will lead to a major impact on scale of x1000. A good place to start is checking your floating point in SVG: try to avoid something like <path d="M33.6286316,13.9605932... and keep it as simple as your precision allows you to. In most cases <path d="M33.62,13.96... will do the job.
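A minimal sketch of the on-demand loading and caching points above (the URL patterns and the howler usage are assumptions about how the assets might be organized):

import { Howl } from "howler";

const svgCache = new Map();
const soundCache = new Map();

// Fetch an SVG only when the user first needs it, then serve it from memory.
export async function loadSvg(id) {
  if (!svgCache.has(id)) {
    const res = await fetch(`/assets/svg/${id}.svg`);
    svgCache.set(id, await res.text());
  }
  return svgCache.get(id);
}

// howler keeps the decoded audio around, so repeated plays don't re-download.
export function loadSound(id) {
  if (!soundCache.has(id)) {
    soundCache.set(id, new Howl({ src: [`/assets/audio/${id}.mp3`] }));
  }
  return soundCache.get(id);
}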
I'm working on an AngularJS app that uses webpack for bundling the resources. Currently we create a single app.js file that contains the CSS as well, and its size is around 6 MB. If we break app.js into multiple chunks, does that improve page performance? My colleagues are convinced that if we break the single JS file into two or three, the page will load two or three times faster. Is that really true? I remember reading somewhere that having a single file is better than multiple ones, although I don't remember the reasons now. Do I really need to break up the app.js file for page performance, or what other options can I apply here?
A single file is better because it requires fewer connections (less overhead), but that is really negligible when talking about fewer than 5 files. When you split parts of your bundle out, you gain the ability to cache the files separately, which is often a great win. Therefore I'd recommend splitting the files into logically cacheable sections (like vendor code and custom code).
Also note that if the client and server support HTTP/2, the fewer-connections argument goes away, since HTTP/2 supports connection re-use.
Note that there is no real difference for the initial load time, since in that case all files will need to be downloaded anyway.
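In webpack terms, that vendor/custom split is typically a splitChunks cache group; a sketch (the chunk name and test pattern are the common conventions, adjust for your setup):

// webpack.config.js — put everything from node_modules into its own long-cacheable chunk
module.exports = {
  // ...entry, output, loaders...
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: "vendors",
          chunks: "all",
        },
      },
    },
  },
};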
A single file will usually mean better performance. You should also ensure that this file is properly cached (on the browser side) and gzipped when served by your webserver.
I did a practical test in Chrome (Mac, 54.0.2840.98, 64-bit) to check whether there is really a performance gain in breaking a huge JS file into several. I created a 10 MB JS file and made three copies of it, then concatenated the three copies into a 30 MB file. I measured the load time for the single file, referenced with a normal script tag at the bottom of the page, and it was around 1 minute. Then I referenced the three 10 MB script files one after another and it took nearly 20 seconds to load everything. So there really is a performance gain in breaking a huge JS file into several. But there is a limit to the number of files the browser can download in parallel.