How to reduce React app build time and understand webpack's behaviour when bundling - reactjs

Recently I have been trying to optimize the performance of a React web app. The app is somewhat heavy, as it consists of code editors, Firebase, SQL, the AWS SDK, etc. So I integrated react-loadable to lazy load components, and after that I got this JavaScript heap out of memory error:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory in React
After some research (and a tip from a friend), I learned that if we have too many lazy-loaded routes, webpack will try to bundle them in parallel, which might be the cause of the JavaScript heap memory issue. To confirm that, I removed all lazy-loaded routes from my app and built it again. The build then succeeded. Later, as suggested by the community, I increased the Node heap space size and got the insights below.
First I increased it to 8 GB (8192 MB) and the build succeeded, with a build time of around 72 minutes; from then on it took around 20 minutes. Then I decreased the heap size to 4 GB (4096 MB) and the build still succeeded, taking around 15-20 minutes. The system configuration is 2 vCPUs, 16 GB RAM (an AWS EC2 r5a.large instance).
Next, I ran the build on another system (MacBook Pro, i5, 8 GB RAM, 4 cores). The first time it took 30 minutes, the second time 20 minutes.
So from these data points, I have a couple of questions:
Do we need to keep increasing the heap space whenever we add more code? If so, what would be an average heap size in the community?
What would be the usual configuration of build systems for heavy apps like this? Right now I am not sure whether to increase the number of cores, the RAM, the heap space, or whether it is altogether something in our app code.
Does webpack provide any solutions to avoid heap memory issues, such as limiting parallel processes, or any plugins?
If it is related to our app code, is there a standard process to debug where the memory is being used and optimize based on that?
PS: Some people suggested setting GENERATE_SOURCEMAP=false; it worked, but we need source maps, as they are helpful for debugging production issues.
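For reference, the heap increase described above is usually applied through Node's --max-old-space-size flag via NODE_OPTIONS; a sketch, assuming a CRA-style npm build script (the 4096 MB value is the one from the question):

```shell
# Give Node a 4 GB old-space heap for builds run from this shell
# (the value is in megabytes)
export NODE_OPTIONS="--max-old-space-size=4096"
# Then run the usual build; source maps can be toggled with GENERATE_SOURCEMAP
# npm run build
```

Setting NODE_OPTIONS rather than editing the build script keeps the change local to one machine or CI job.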

Finally, I was able to resolve the heap out of memory issue without increasing the heap memory space.
As mentioned in the question, if I removed all lazy routes the build succeeded; otherwise I had to give it a 4 GB heap to get it to succeed, with a very long build time. When the build succeeded with the 4 GB heap, I observed that roughly 8-10 chunk files were each close to 1 MB in size. So I analyzed those chunks using source-map-explorer. Almost the same library code was included in every chunk (in my case Firebase, a video player, etc., which are heavy).
So my assumption is that when webpack bundles all these chunks, it has to build the dependency graph of those libraries for every chunk, which in turn causes the heap memory issue. So I used Loadable Components to lazy load those libraries.
After lazy loading all those libraries, every chunk file shrank by almost half, the build succeeds without increasing the heap space at all, and the build time also went down.
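The library-level lazy loading described above boils down to memoizing a dynamic import() so the heavy code lands in its own chunk and is fetched only once, on first use. A minimal sketch ("firebase/app" as the entry point is an assumption, not taken from the question):

```javascript
// Memoize a dynamic import so the module is requested only once.
// webpack turns each import() target into its own chunk, so the heavy
// library stays out of the main bundle and out of every page chunk
// that never uses it.
function lazyModule(load) {
  let promise;
  return () => promise || (promise = load());
}

// Nothing is downloaded until getFirebase() is first invoked;
// subsequent calls reuse the same promise.
const getFirebase = lazyModule(() => import("firebase/app"));
```

Each call site then does `getFirebase().then(firebase => ...)` instead of importing the library at the top of the file.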
After this optimization, if I run the build on a 6 vCPU i7 system it takes around 3-4 minutes, and I observed that build time drops as the number of cores available in the system increases. If I run the build on a 2 vCPU system it sometimes still takes around 20-25 minutes.

Vanilla webpack was developed for monolithic builds. Its main purpose is to take many modules and bundle them into ONE (not many). If you want to keep things modular, you want to use Webpack Module Federation (WMF):
WMF allows you to develop independent packages that can easily depend on (and lazy load) each other.
These packages will automatically share dependencies between each other.
Without WMF, webpack allows none of the above.
Short Example
A library package app2 provides a component Button
An application package app1 consumes it.
When the time comes, app1 requests a component using dynamic import.
You can wrap the load using React.lazy, like so:
const RemoteButton = React.lazy(() => import("app2/Button"));
E.g., you can do this in a useEffect, or a Route.render callback etc.
app1 can use that component, once it's loaded. While loading, you might want to show a loading screen (e.g. using Suspense):
<React.Suspense fallback={<LoadingScreen />}>
  <RemoteButton />
</React.Suspense>
Alternatively, instead of using lazy and Suspense, just take the promise returned by the import(...) call and handle the asynchronous loading any way you prefer. Of course, WMF is not at all restricted to React and can load any module dynamically.
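For context, the app2/Button remote in the example is declared in webpack configuration. A minimal sketch (the port, filename, and file paths are assumptions):

```javascript
// webpack.config.js of app2 (the remote) -- a sketch, not a complete config
const { ModuleFederationPlugin } = require("webpack").container;

module.exports = {
  // ...usual entry/output/loader settings...
  plugins: [
    new ModuleFederationPlugin({
      name: "app2",
      filename: "remoteEntry.js",              // entry file app1 will load
      exposes: { "./Button": "./src/Button" }, // enables import("app2/Button")
      shared: { react: { singleton: true }, "react-dom": { singleton: true } },
    }),
  ],
};

// app1 (the consumer) declares the remote on its side, e.g.:
// new ModuleFederationPlugin({
//   name: "app1",
//   remotes: { app2: "app2@http://localhost:3002/remoteEntry.js" },
//   shared: { react: { singleton: true }, "react-dom": { singleton: true } },
// })
```

The shared section is what makes the two packages automatically share a single copy of their common dependencies.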
On the flip side, WMF dynamic loading must use dynamic import (i.e. import(...)), because:
non-dynamic imports will always resolve at load time (thus making it a non-dynamic dependency), and
a "dynamic require" cannot be bundled by webpack, since browsers have no concept of CommonJS (unless you use some hacks, in which case you will lose the relevant "loading promise").
Documentation
Even though, in my experience, WMF is mature, easy to use, and probably production-ready, its official documentation is currently only a not-too-polished collection of conceptual notes. That is why I would recommend this as a "Getting Started" guide.

Related

NextJS reduce First Load of shared JS

In a bigger project, I use a fairly large number of third-party libraries, including firebase, redux, etc., and some more specific ones (only used on a few pages) like konvaJS, jimp, ....
I recently migrated to Next.js to speed up the website and maybe allow SSR. However, after migrating, the Lighthouse score dropped compared to the pure React version. The main problem seems to be the first-load shared JS bundles.
After some optimizing, including lazy loading bigger components with dynamic() and modules with await import(), I managed to cut the shared first-load JS bundles in half, but they are still around 400 KB, which is way too heavy. I guess heavy modules like firebase are included there as well, because they are needed basically everywhere in the app.
I also tried to analyze the dependencies with @next/bundle-analyzer, but the visualization is not easy to interpret. Is it true that it also lists modules that are lazy loaded? In addition, some dependencies are packed multiple times in different bundles. Last but not least, the bundles visualized by the analyzer do not match the names in the build output.
Any help to reduce the size or understand the process better is well appreciated. I am using current React and Next.js versions.
Edit: This is the composition of the shared JS bundles after build:
Edit 2: I am still confused about the output of bundle-analyzer. The module jspdf, for instance, is only used in one page/component and is lazy loaded. Is it correct that it is still visible in the analyzer? It does not have any impact on the shared JS bundle size.
Edit 3: I managed to lazy load my firebase modules (which are crucial for my app), saving over 200 KB of shared JS. Still hoping to cut down the rest. By the way, the bundle analyzer's output did not really change.
Are you using FontAwesome?
We were able to reduce our "First Load JS shared by all" from 504 kB down to 276 kB by removing the FontAwesome dependency and downloading the individual .svg icons directly.

Why is lazy loading not the default for React?

I am just working through a React course, and the current topic is lazy loading.
I was wondering why lazy loading is not the default, taken care of by React itself, without forcing the developer to write repetitive code?
Example:
In the course, we want to load the Posts component lazily, because we only render it on a certain route. Therefore the instructor replaces
import Posts from './containers/posts'
with
const Posts = React.lazy(() => import('./containers/posts'))
and where it is used he replaces
<Route path='/posts' component={Posts} />
with
<Route
  path='/posts'
  render={() => (
    <Suspense fallback={<div>Loading...</div>}>
      <Posts />
    </Suspense>
  )}
/>
so basically it is just wrapping the component we want to lazy load in a certain React component.
React does not handle the lazy loading by itself but relies on the functionality of the underlying bundler (webpack). In particular, the bundler converts the import() calls (which come from the dynamic import proposal) into something that can be processed by the majority of browsers. Thus, to make the underlying build process load a specific module lazily, you also have to use import().
In general, splitting into multiple chunks (which is what happens at build time when lazy loading is used) can be good (e.g. for mobile users, as mentioned by @Prashant Adhikari) but also leads to additional delays while using the page, because the files have to be transferred over the network one by one first. Thus, it is also not an option to lazy load everything. This issue might disappear in the future (esp. with some "intelligent" preload mechanism in HTTP/2), but for the majority of applications the best practice over the last years seems to be generating one fat JS file for application-related scripts plus a vendor.js for the dependencies.
However, introducing lazy loading might be reasonable in order to minimize page load time. Especially for bigger applications (like Stack Overflow) it makes sense to preload the modules required to render the primary content (e.g. questions) and lazy load the less frequently used pages (e.g. user settings).
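The import() contract this relies on can be seen without any bundler at all: the call returns a promise for the module's exports, and nothing is loaded until the call actually runs. A sketch (the data: URL stands in for a real module path such as "./containers/posts"):

```javascript
// import() is lazy: this function loads nothing until it is invoked,
// which is exactly what lets webpack emit the target as a separate chunk.
async function loadPosts() {
  const mod = await import("data:text/javascript,export%20default%20'posts'");
  return mod.default; // in a real app, this would be the Posts component
}
```

React.lazy is essentially a thin wrapper that calls such a loader and suspends rendering until the promise resolves.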
React.lazy: it is now fully integrated into the core React library itself.
Firstly, bundling means arranging our code components in sequence and putting them into one JavaScript chunk that is passed to the browser; but as our application grows, we notice the bundle gets very cumbersome in size. This can quickly make using your application very hard and especially slow. With code splitting, the bundle can be split into smaller chunks, where the most important chunk can be loaded first and every other secondary one lazily loaded.
Also, while building applications, as a best practice we should consider users on mobile data and others with really slow internet connections. We developers should always be able to control the user experience, even during a suspense period when resources are being loaded into the DOM.
With lazy loading you can customise this and lazily load the components that are not needed for the current screen.
The question about having lazy loading natively might be answered by someone more experienced, or you can raise an issue on GitHub.
It is a relatively new feature, released last year with React 16.6.
If there was a way to enable lazy loading for all existing projects without rewriting code, they would have included it. The fact they did not means it isn't compatible with all existing patterns.

Why is react lazy adding time to first load

I have a small but fun-to-develop app. It was a quick experiment to learn a bit more about Redux and React, and I got to the point where I consider the app mature enough to start learning about optimization.
I made some pure-component optimization attempts, but they didn't improve the time to first load, so I moved on.
The next optimization I tried was React.lazy, to lazy load some components that I don't need at first load. For example, I have an error component that I only need if I have to show an unlikely error, so that is what I split out. Surprisingly (according to Lighthouse), all the first-load metrics (time to interactive, first meaningful paint, etc.) got way worse.
Here is a screenshot of the report before trying React.lazy:
As you can see, from the performance point of view there was not much to improve, but as I was trying to learn modern React, I tried anyway. Here is the best I have been able to get using React.lazy to split one component:
As you can see, way worse. The problems it detected were not all related to caching policies; here they are:
It seems the main thread is getting busier parsing all the JavaScript. That makes no sense to me, because in Chrome DevTools, inspecting the network requests in detail (on the Performance tab), the resulting bundles are downloaded in parallel. However, the bundles in both versions are almost the same size, except that the split version of the application has 5 chunks instead of 2:
First bundle without code split:
URL: bundle.js, Duration: 45.62 ms, Request Method: GET, Priority: High, MIME Type: application/javascript, Encoded Data: 6.5 KB, Decoded Body: 31.0 KB
First bundle with React.lazy split:
URL: bundle.js, Duration: 28.63 ms, Request Method: GET, Priority: High, MIME Type: application/javascript, Encoded Data: 7.1 KB, Decoded Body: 33.7 KB
First downloaded chunk:
URL: 0.chunk.js, Duration: 255.83 ms, Request Method: GET, Priority: High, MIME Type: application/javascript, Encoded Data: 579 KB, Decoded Body: 2.7 MB
First chunk with React.lazy split (labeled 5 but actually the first):
URL: 5.chunk.js, Duration: 276.40 ms, Request Method: GET, Priority: High, MIME Type: application/javascript, Encoded Data: 559 KB, Decoded Body: 2.6 MB
My conclusion is that React.lazy introduces a significant overhead that only pays off if the size of the lazily loaded components is big enough.
However, does that mean that big applications can never score high on first paint?
I made some bigger apps with Vue that scored almost 90 on performance, so I'm pretty sure I'm doing something wrong here.
Something to mention is that the first screenshot is served from GitHub Pages while the second is served locally, but that should not influence the problem at hand, should it?
The code for the non split version of the app is publicly available here: https://github.com/danielo515/itunes
The biggest time consumption is in "Script Evaluation": 1,672 ms. So, try reducing this time.
Analyze the size of your JavaScript and see which libraries you can replace with a smaller version or with pure JavaScript. If you use CRA, try Analyzing the Bundle Size or webpack-bundle-analyzer. For example, instead of lodash you may be able to use the smaller lodash-es;
Use server-side rendering. Consider using loadable-components (the advice from the React docs). But if you use a slow server (or a low level of caching), the "Time to First Byte" may increase;
Use pre-rendering into static HTML files.
Also, a very useful tool for analyzing web-page speed is webpagetest.org.
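The webpack-bundle-analyzer suggestion above is wired in as a plugin; a minimal sketch (the static report mode shown is one of the plugin's documented options):

```javascript
// webpack.config.js sketch: emit a report.html visualizing what each chunk
// contains, so oversized dependencies are easy to spot
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

module.exports = {
  // ...rest of the config...
  plugins: [new BundleAnalyzerPlugin({ analyzerMode: "static" })],
};
```

With CRA (no ejecting), the same data can be inspected by running source-map-explorer on the build output instead.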

Initial page load performance for an angularjs app

I'm working on an AngularJS app that uses webpack for bundling the resources. Currently we create a single app.js file that contains the CSS as well. The size of app.js is around 6 MB. If we break app.js into multiple chunks, does that improve page performance? My colleagues are convinced that if we break the single JS file into 2 or 3, the page load time will double or triple. Is that really true? I remember reading somewhere that having a single file is better than multiple, but I don't really remember the reasons now. Do I really need to break up the app.js file for page performance? Or what other options can I apply here?
A single file is better because it requires fewer connections (meaning less overhead), but this is really negligible when talking about fewer than 5 files. When splitting parts of your bundle you gain the ability to cache the files separately, which is often a great win. Therefore I'd recommend splitting the files into logically cachable sections (like vendor code and custom code).
Also note that if the client and server support HTTP/2, the fewer-connections argument goes away as well, since HTTP/2 supports connection re-use.
Note that there is no real difference for the initial load time, since in that case all files need to be downloaded anyway.
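The vendor/custom split recommended above is what webpack's splitChunks option automates; a sketch (the cache-group name is an assumption):

```javascript
// webpack.config.js sketch: put everything from node_modules into a separate
// "vendors" chunk so it can be cached independently of the app code,
// which changes far more often
module.exports = {
  // ...rest of the config...
  optimization: {
    splitChunks: {
      chunks: "all",
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: "vendors",
        },
      },
    },
  },
};
```

After a deploy, returning visitors then re-download only the app chunk while the vendors chunk is served from the browser cache.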
A single file will usually mean better performance. You should also ensure that this file is properly cached (on the browser side) and gzipped when served by your webserver.
I did a practical test in Chrome (Mac, 54.0.2840.98, 64-bit) to see whether there is really a performance gain in breaking a huge JS file into many. I created a 10 MB JS file and made three copies of it, then concatenated the three copies into a 30 MB file. I measured the load time of the single file, referenced with a normal script tag at the bottom of the page, and it was around 1 minute. Then I referenced the three 10 MB script files one after the other, and it took nearly 20 seconds to load everything. So there really is a performance gain in breaking a huge JS file into many. But there is a limit to the number of files the browser can download in parallel.

Guide to acceptable "Live Bytes" of iOS6 app using MKMapView

I am looking to ensure my app does not consume too much memory on what are still fairly resource-constrained devices. Several days ago I was using Instruments to determine how much memory my app was using, and Live Bytes was around 4-8 megabytes. Today I ran Instruments again and I am up around 30-35 megabytes of Live Bytes. I don't believe I have made any significant changes to my code between these two runs.
My app uses an MKMapView with a custom tile overlay. I put off updating Xcode for a fairly long time, so I suspect the difference may be that my iOS simulator was still using Google rather than Apple maps a few days ago, until I upgraded Xcode.
As a small test, I created a new test app that has just an MKMapView and nothing else, and ran Instruments on it. The Live Bytes of this app are commonly on the order of 50-90 megabytes, even though it has no custom code whatsoever; I just dragged and dropped the MKMapView in.
Whether it is intentional on Apple's behalf for the new maps to use this much memory, I do not know. Perhaps the map tiles are shared across apps and this is fine. Either way, it complicates coming up with a reasonable approximation of how many Live Bytes I can safely use, given that most earlier suggestions are on the order of 5-20 MB and Apple's MKMapView consumes 50-90 MB on its own.
Is there another useful metric I can go by, failing Live Bytes being of any use now?
Edit: it looks like for others this is a legitimate memory management problem causing app crashes: iOS6 MKMapView using a ton of memory, to the point of crashing the app, anyone else notice this?
