Performance: Single import vs multiple in background service worker - firefox-addon-webextensions

Considering caching in a browser extension and the stopping/restarting of the background service worker (or event page), which of the following would perform better?
Importing one large-ish module with multiple classes
Importing multiple smaller modules (4-5)
Most of the classes are used in the service worker. Some of the classes are also used elsewhere (i.e. the browser action popup and the options page).
Multiple files provide a cleaner dependency structure; on the other hand, accessing multiple files may use more resources.
Example:
// background.js
import {one} from './one.js';
import {two} from './two.js';
import {three} from './three.js';
import {four} from './four.js';
// popup.js
import {one} from './one.js';
import {two} from './two.js';
// options.js
import {one} from './one.js';
import {four} from './four.js';
// ----- vs -----
// background.js (all classes exported from a single module)
import {one, two, three, four} from './one.js';
// popup.js
import {one, two} from './one.js';
// options.js
import {one, four} from './one.js';

In Chrome you can use the devtools Performance panel (JS profiler) to see the actual impact. There's also chrome://tracing, which shows browser internals. A similar tool exists in Firefox.
In short, the millisecond you might gain by bundling a few scripts may be entirely negligible on a modern computer with an SSD and enough memory for the OS to cache the files in memory. You would probably have to import dozens of scripts to see a pronounced difference.
That said, one bundle would perform better on the much slower HDDs that are still widely used, so you may want to use modules in the source code, but then compile them into bundles, one per entry point, e.g. one for the content script, one for the background script, etc.
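One way to get both - modules in the source, one bundle per entry point - is a build step. A minimal sketch with esbuild (my choice of bundler, not something the question mandates), assuming the entry files from the example:

```shell
# Bundle each extension entry point into its own single output file,
# so each page loads exactly one script at startup.
npx esbuild background.js popup.js options.js \
  --bundle --format=esm --outdir=dist
```

Point the manifest and HTML pages at the files in dist/ instead of the source modules.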
To put things in perspective, the wake-up process itself is much heavier by comparison:
When the background script starts, it takes at least 50ms to create the JS process and set up the environment.
Then your scripts are loaded from disk - fast on an SSD or when cached by the OS, slow otherwise.
The JS interpreter parses and compiles the files (extensions don't use the code cache) - the duration is roughly proportional to the amount of code, perhaps 100ms per 1MB on average.
Note that there's no dedicated caching for extensions in Chrome, neither for reading the scripts from disk nor for parsing and compiling the code.
If your extension restarts often, it will hurt overall browser performance, and that cost may easily exceed any gains from lower memory consumption between runs - something you can also see in a performance profiler, or by measuring the power-consumption delta against a semi-persistent background script.
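To see these costs for your own extension, you can timestamp the wake-up yourself. A minimal sketch - `logPhase` is a hypothetical helper name; `performance.now()` is standard in service workers:

```javascript
// At the very top of background.js, before any other statement.
const wakeStart = performance.now();

// Hypothetical helper: report how long a startup phase took, in ms.
function logPhase(name, sinceMs) {
  const elapsed = performance.now() - sinceMs;
  console.log(`[startup] ${name}: ${elapsed.toFixed(1)} ms`);
  return elapsed;
}

// After the static imports have been evaluated, record the cost so far;
// compare this number between the bundled and multi-file layouts.
logPhase('imports evaluated', wakeStart);
```

Running the same measurement after a cold browser start and after a warm service-worker restart shows how much of the time is disk reads versus parse/compile.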

Related

How to reduce react app build time and understanding behaviour of webpack when bundling

Recently I have been trying to optimize the performance of a web app (React). It is somewhat heavy, as it consists of code editors, Firebase, SQL, the AWS SDK, etc. So I integrated react-loadable, which lazy-loads components; after that, I got this JavaScript heap out of memory issue:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory in React
After some research (from a friend), I came to know that if we keep too many lazy loadings, webpack will try to bundle them in parallel, which might be the cause of the JavaScript heap memory issue. To confirm this, I removed all lazy-loaded routes from my app and built it; the build succeeded. Later, as suggested by the community, I increased the Node heap size and got the insights below.
First I increased it to 8 GB (8192); the build succeeded with a build time of around 72 minutes, and from then on around 20 minutes. Then I decreased the heap size to 4 GB (4096); the build still succeeds in around 15-20 minutes. The system configuration is 2 vCPUs, 16 GB RAM (AWS EC2 r5a.large instance).
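(For reference, heap increases like the 8192 and 4096 above are typically applied via Node's --max-old-space-size flag; the npm script name here is an assumption:)

```shell
# Give the Node process running the webpack build a 4 GB old-space heap.
NODE_OPTIONS=--max-old-space-size=4096 npm run build
```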
Next, I ran the build on another system (MacBook Pro, i5, 8 GB RAM, 4 cores); it took 30 minutes the first time and 20 minutes the second time.
From these data points I have a couple of questions:
Do we need to keep increasing the heap size whenever we add some code? If yes, what would be an average heap size in the community?
What would be the usual build-system configuration for these kinds of heavy apps? Right now I am not sure whether to increase the number of cores, the RAM, the heap size, or whether it is altogether something to do with our app code.
Does webpack provide any solutions to avoid the heap memory issue, such as limiting parallel processes, or any plugins?
If it is related to our app code, is there any standard process to debug where the memory is going and to optimize based on that?
PS: Some people suggested setting GENERATE_SOURCEMAP=false, which worked, but we need source maps, as they are helpful for debugging production issues.
Finally, I was able to resolve the heap out of memory issue without increasing the heap size.
As mentioned in the question, if I remove all lazy routes the build succeeds; otherwise I had to keep a 4 GB heap to get it to succeed, with plenty of build time. When the build succeeded with the 4 GB heap, I observed that nearly 8-10 chunk files were each almost 1 MB in size, so I analyzed those chunks using source-map-explorer. Almost the same library code was included in every chunk (in my case Firebase, a video player, etc., which are heavy).
So my assumption is that when webpack tries to bundle all these chunks, it has to build the dependency graph of those libraries in every chunk, which in turn causes the heap memory issue. I therefore used Loadable Components to lazy-load those libraries.
After lazy-loading all those libraries, every chunk's file size was reduced by almost half, the build succeeds without increasing any heap space, and the build time also went down.
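The idea of deferring a heavy library until first use can be sketched without any framework. `lazyOnce`, `loadPlayer`, and the `'heavy-player'` module name below are all hypothetical; the dynamic import stands in for a heavy dependency like the video player:

```javascript
// Hypothetical helper: run a dynamic import on first use only, caching the
// promise so the module is fetched and evaluated at most once.
function lazyOnce(loader) {
  let cached = null;
  return () => (cached ??= loader());
}

// Usage sketch: replace a static `import player from 'heavy-player'` with a
// dynamic import, so the library lands in its own chunk instead of in every
// chunk that mentions it. Nothing is loaded until loadPlayer() is called.
const loadPlayer = lazyOnce(() => import('heavy-player'));

// Later, only on the screen that actually needs it:
// const player = await loadPlayer();
```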
After this optimization, the build takes around 3-4 minutes on a 6 vCPU, i7 system, and I observed that build time drops as the number of available cores grows; on a 2 vCPU system it sometimes still takes around 20-25 minutes.
Vanilla webpack was developed for monolithic builds. Its main purpose is to take many modules and bundle them into ONE (not many). If you want to keep things modular, you want to use Webpack Module Federation (WMF):
WMF allows you to develop independent packages that can easily depend on (and lazy load) each other.
These packages will automatically share dependencies between each other.
Without WMF, webpack allows none of the above.
Short Example
A library package app2 provides a component Button
An application package app1 consumes it.
When the time comes, app1 requests a component using dynamic import.
You can wrap the load using React.lazy, like so:
const RemoteButton = React.lazy(() => import("app2/Button"));
E.g., you can do this in a useEffect, or a Route.render callback etc.
app1 can use that component, once it's loaded. While loading, you might want to show a loading screen (e.g. using Suspense):
<React.Suspense fallback={<LoadingScreen />}>
  <RemoteButton />
</React.Suspense>
Alternatively, instead of using lazy and Suspense, just take the promise returned by the import(...) expression and handle the asynchronous loading any way you prefer. Of course, WMF is not at all restricted to React and can load any module dynamically.
On the flip side, WMF dynamic loading must use dynamic import (i.e. import(...)), because:
non-dynamic imports always resolve at load time (thus making them non-dynamic dependencies), and
a "dynamic require" cannot be bundled by webpack, since browsers have no concept of CommonJS (unless you use some hacks, in which case you will lose the relevant "loading promise").
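For completeness, the app1/app2 wiring from the example corresponds to a ModuleFederationPlugin config on each side. A minimal sketch - the file paths and the port are assumptions:

```javascript
// webpack.config.js of app2 - exposes the Button component.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'app2',
      filename: 'remoteEntry.js',             // the entry manifest app1 loads
      exposes: { './Button': './src/Button' }, // makes import("app2/Button") work
      shared: ['react', 'react-dom'],          // deps shared between packages
    }),
  ],
};

// webpack.config.js of app1 - consumes it:
// plugins: [new ModuleFederationPlugin({
//   name: 'app1',
//   remotes: { app2: 'app2@http://localhost:3002/remoteEntry.js' },
//   shared: ['react', 'react-dom'],
// })]
```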
Documentation
Even though, in my experience, WMF is mature, easy to use, and probably production-ready, its official documentation is currently only a rather unpolished collection of conceptual notes. That is why I would recommend this as a "Getting Started" guide.

Why is lazy loading not the default for React?

I am just working through a React course and the current topic is lazy loading.
I was wondering why lazy loading is not the default and taken care of by React without forcing the developer to write repetitive code?
Example:
In the course, we want to load the Posts component lazily, because we only render it on a certain route. Therefore he replaces
import Posts from './containers/posts'
with
const Posts = React.lazy(() => import('./containers/posts'))
and where it is used he replaces
<Route path='/posts' component={Posts}>
with
<Route
  path='/posts'
  render={() => (
    <Suspense>
      <Posts />
    </Suspense>
  )}
/>
so basically just wrapping the component we want to lazyload in a certain React component.
React does not handle the lazy loading by itself but relies on the functionality of the underlying bundler (webpack). In particular, the bundler converts import() calls (the proposal for dynamic import) into something that can be processed by the majority of browsers. Thus, to make the underlying build process load a specific module lazily, you also have to use import().
In general, splitting into multiple chunks (which is what happens at build time when lazy loading is used) might be good (e.g. for mobile users, as mentioned by @Prashant Adhikari), but it also adds delays while using the page, because the files have to be transferred over the network one by one first. Thus, it's not an option to have lazy loading everywhere, either. This issue might disappear in the future (esp. with some "intelligent" preload mechanism in HTTP/2), but for the majority of applications the best practice over recent years has been to generate one fat JS file for application code plus a vendor.js for the dependencies.
However, introducing lazy loading might be reasonable to minimize the page load time. Especially for bigger applications (like Stack Overflow), it makes sense to preload the modules required to render the primary content (e.g. Questions) and lazy-load the less frequently visited pages (e.g. User settings).
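A basic form of that "intelligent preload" already exists in webpack as a hint: a lazily loaded chunk can be fetched in the browser's idle time via a magic comment. A sketch, with an illustrative module path:

```javascript
// Still code-split into its own chunk, but webpack emits a prefetch hint so
// the browser fetches the chunk during idle time; navigating to the settings
// page later then feels instant. The loader is not invoked until called.
const loadSettings = () =>
  import(/* webpackPrefetch: true */ './pages/UserSettings');
```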
React.lazy: it is now fully integrated into the core React library itself.
Firstly, bundling involves arranging our code components in sequence and putting them into one JavaScript chunk that is passed to the browser; but as our application grows, the bundle gets very cumbersome in size. This can quickly make using your application very hard and especially slow. With code splitting, the bundle can be split into smaller chunks, where the most important chunk can be loaded first and every other secondary one lazily loaded.
Also, while building applications we know that, as a best practice, consideration should be made for users on mobile internet data and others with really slow internet connections. We developers should always be able to control the user experience even during a suspense period, when resources are being loaded into the DOM.
With lazy loading you can choose to lazily load the components that are not needed for the current screen.
The question about having lazy loading natively might be answered by someone more experienced, or you can raise an issue on GitHub.
It's a relatively new feature, released with React 16.6 last year.
If there was a way to enable lazy loading for all existing projects without rewriting code, they would have included it. The fact they did not means it isn't compatible with all existing patterns.

Reduce Meteor app initial load time

As we've been developing our Meteor (with Angular) application, we've noticed that the initial load times (no cache) are very slow: ~10 seconds. The main culprit seems to be the modules.js file, which holds all our node_modules and is around 2MB now.
We're importing modules only in the files that need them, but they're all still loaded at the start, since we have to import those files in our main.js file so that Angular can see the controllers in them.
I'm following the project structure outlined here: https://guide.meteor.com/structure.html
Is there something obvious we've missed? Any tips on how to reduce that load time?
Publications can also slow down the initial load process. By default, Meteor projects include the autopublish package, which publishes everything—that means it copies everything in the database to the client. If you have accrued a lot of data and autopublish is in effect, then your load time will suffer.
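If autopublish is the culprit, the first step is simply to remove it:

```shell
# autopublish ships with new Meteor projects; removing it stops Meteor from
# copying the entire database to every client on load.
meteor remove autopublish
```

After that, data must be published explicitly with Meteor.publish on the server and requested with Meteor.subscribe on the client, so each page only pulls the documents it actually needs.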

Initial page load performance for an angularjs app

I'm working on an AngularJS app that uses webpack for bundling the resources. Currently we create a single app.js file that contains the CSS as well; its size is around 6MB. If we break app.js into multiple chunks, does that improve page performance? My colleagues try to convince me that if we break the single JS file into 2 or 3, the page load time will double or triple. Is that really true? I remember reading somewhere that having a single file is better than multiple, though I don't remember the reasons now. Do I really need to break up the app.js file for page performance, or what other options can I apply here?
A single file is better because it requires fewer connections (meaning less overhead), but this is really negligible when talking about < 5 files. When you split parts of your bundle, you gain the ability to cache the files separately, which is often a great win. Therefore I'd recommend splitting the files into logically cacheable sections (like vendor code and custom code).
Also note that if the client and server support HTTP/2, the fewer-connections argument also goes away, since HTTP/2 supports connection re-use.
Note that there is no real difference for the initial load time, since in that case all files need to be downloaded anyway.
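The "vendor code vs custom code" split described above maps directly onto webpack's splitChunks option. A minimal sketch:

```javascript
// webpack.config.js (fragment): put everything imported from node_modules into
// a separately cacheable vendors bundle, leaving app code in its own chunk.
// The vendors bundle then only changes when dependencies change, so returning
// visitors can reuse it from the browser cache.
module.exports = {
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};
```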
A single file will usually mean better performance. You should also ensure that this file is properly cached (on the browser side) and gzipped when served by your webserver.
I did a practical test in Chrome (Mac, 54.0.2840.98, 64-bit) to check whether there really is a performance gain in breaking a huge JS file into many. I created a 10MB JS file, made three copies of it, and concatenated the three copies into a 30MB file. The single file, referenced with a normal script tag at the bottom of the page, took around 1 minute to load; referencing the three 10MB files one after another took nearly 20 seconds to load everything. So there really is a performance gain in breaking a huge JS file into many. But there is a limit to the number of files the browser can download in parallel.

Importing a project from clearcase source control to RTC source control

I'm attempting to import a ClearCase project into RTC source control, and I'm following this tutorial:
https://jazz.net/library/article/50
Is this tutorial sufficient? When code is checked into ClearCase, will it also be checked into RTC source control automatically?
Any common pitfalls / online tutorials welcome.
Honestly, for a couple of baselines, I simply create an import RTC stream (i.e., a stream dedicated to imports) and manually copy a few baselines, selected from a dynamic view (which has a config spec you can easily change in order to select the appropriate baseline).
Whatever tool you end up using for this import, the common pitfalls are:
importing directly into an RTC stream used for development: it is best to isolate the import in a dedicated stream, which allows you to start working on one of the imported baselines in an RTC dev stream while completing further rounds of import later in the import stream;
importing all the history (instead of only a few selected baselines);
importing without cleaning first (i.e., you might realize you stored quite a few binaries, libraries, or other generated files in ClearCase that you may want to ignore, through a .jazzignore file, for the import);
importing without refactoring the components: a UCM ClearCase component might have been used over the years as one giant bucket for multiple projects' codebases.
The import into another VCS is a good time to split it into several smaller components;
shutting down ClearCase completely after the import: since you don't import all the history, nor all the owners of each version, you might need to consult the history stored in ClearCase from time to time.
Don't forget to lock the VOBs, though, to ensure read-only access.