What is the real weight of npm packages? - reactjs

I want to add an image carousel to a profile page and let the user view the images in fullscreen mode via a modal. If I'm not mistaken, that means the carousel will be imported twice: once in the profile component, and a second time on top of it when the modal opens.
That is a heavy process, and I'm afraid of performance issues. I thought about writing my own carousel, but there are already many packages that deal perfectly with hand gestures on mobile, etc. However, their weight is sometimes dreadful.
For instance, the library react-awesome-slider - which seems perfect - weighs 666kb! However, on Bundlephobia it is supposed to weigh only 36.2kb, or 8.2kb gzipped. Who is right?
Will react-awesome-slider weigh 2*666kb, 2*36.2kb or 2*8.2kb in my final bundle? And what is the maximum weight recommended to keep a high level of fluidity/performance?

This looks like premature optimization. Don't worry about your bundle size in this manner - whether 3kb gzipped is a lot or not. Simply put: if you need the library, use it. You will realize that pulling in a library just to add two numbers together is unnecessary long before any bundle size issue appears.
The bundle size you should always care about is the gzipped value: that's what the client actually receives and has to download, and that's what takes time. But as you can imagine, downloading 30kb on your computer at home is not an issue. On an old device in the middle of the Blair Witch forest, it might be.
Also, it gets sent to the client once per page/application, so if your modal uses it and your page uses it, it won't be included twice. Imagine a library like Lodash, which is big and used (if I exaggerate) in every function - do you think it would be included 100 times?
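To see why, here is a minimal sketch (the file names, component names and 'some-carousel-library' are made up, and it assumes a standard bundler such as webpack): two components import the same carousel package, and the bundler resolves both imports to one module, so the package's code lands in the output exactly once.
// ProfileGallery.js (hypothetical)
import React from 'react';
import Carousel from 'some-carousel-library';

export function ProfileGallery({ images }) {
  return <Carousel images={images} />;
}

// FullscreenModal.js (hypothetical)
import React from 'react';
import Carousel from 'some-carousel-library';

export function FullscreenModal({ images }) {
  return <Carousel images={images} fullscreen />;
}

// Both imports point at the same module, so the carousel's code is bundled once
// and counted once in the final bundle size, not twice.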
Try to optimize the user experience in terms of UI/UX - that is what users will quit your page over, not the fact that they had to download 30kb of carousel. They won't even notice!

Related

React error in render/flush: RangeError in flush RangeError: Maximum call stack size exceeded

I've been in the process of rewriting an old AngularJS app in React (actually it's using preact, chosen by the developer who started this project initially).
This app handles large, deeply nested objects that get displayed via Material UI accordions and tables. The data is more WIDE than deep, but at any rate, React has trouble rendering it all without this RangeError.
I've been dancing with this issue for a while now and have avoided it by strategically managing accordions and not rendering data for accordions that are not open.
I've commonly seen this reported as a recursion issue, and I've carefully reviewed the code to confirm there is no recursion involved. Plenty of iteration, but no recursion.
Please note the stack trace, it's hitting this in the flush() function, which is not in our application code, but in the Chrome debugger VM. I've set breakpoints and it appears to be something related to DOM operations as the objects being flushed are React elements. Here's a code snippet from the point where this error is hit:
function flush(commit) {
  const {
    rootId,
    unmountIds,
    operations,
    strings,
    stats
  } = commit;
  if (unmountIds.length === 0 && operations.length === 0) return;
  const msg = [rootId, ...flushTable(strings)];
  if (unmountIds.length > 0) {
    msg.push(MsgTypes.REMOVE_VNODE, unmountIds.length, ...unmountIds);
  }
  msg.push(...operations); // <--- error occurs here when operations is too long
And the stack trace logged when error occurs:
VM12639:1240 Uncaught (in promise) RangeError: Maximum call stack size exceeded
at flush (<anonymous>:1240:8)
at Object.onCommit (<anonymous>:3409:19)
at o._commit.o.__c (<anonymous>:3678:15)
at QRet.Y.options.__c (index.js:76:17)
at Y (index.js:265:23)
at component.js:141:3
at Array.some (<anonymous>)
at m (component.js:220:9)
The error is occurring if operations is too large. Normally it will be anywhere from a dozen or so in length up to maybe 3000, depending on what's going on, but when I try to load our page displaying the wide/deep nested object this number is more like 150000, which apparently is choking the spread operator.
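What seems to be happening: msg.push(...operations) passes every element of operations as a separate call argument, and engines limit how many arguments (and how much stack) a single call can use, so a large enough array throws. A rough sketch of the failure and of a chunked workaround (the sizes are only illustrative):
const operations = new Array(150000).fill(0);
const msg = [];

// msg.push(...operations); // RangeError: Maximum call stack size exceeded,
//                          // because every element becomes a call argument.

// Workaround: push in slices small enough for a single call.
const CHUNK = 10000;
for (let i = 0; i < operations.length; i += CHUNK) {
  msg.push(...operations.slice(i, i + CHUNK));
}

// Or sidestep spread entirely:
// const msg2 = msg.concat(operations);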
My sense is that this type of app is a challenge for React. I cannot think of another example of a React app that displays data the way we do with this. If anyone here has experience with this sort of dataset and can offer suggestions as to how to make this work, please share.
My guess is I'm going to need to somehow break this object up into smaller chunks that represent smaller updates, but I'm posting here in case there's something I can learn.
It looks similar to this open issue on the React repo, only it happens in a different place (also in dev tools). Might be worth reporting your issue there too. So probably React is otherwise "fine" rendering this amount of elements, though you'll inevitably get slow performance.
Likely the app is just displaying too much data, or doing it inefficiently.
but when I try to load our page displaying the wide/deep nested object this number is more like 150000, ...
150000 DOM operations is a really high amount. Either your app really does display a whole lot of elements, or the old AngularJS app had too many wrapper elements and these were preserved. Since you mention it concerns data tables, it's probably the first reason. In any case complex applications always need some platform specific optimization.
If you can give an idea about the intended use case, or even better, share (parts of) the code, that would help others to give more targeted advice. Are the 150k operations close to what would happen in real world usage, or is it just a very inflated number for stress testing? Do you see any other performance regressions, compared to the Angular app, with very complex objects? How many tables are on the screen at a time?
A few hundred visible elements on the screen already gets quite cramped. So where would all these extra operations be coming from? Either you're loading a super long page of which a user can only see a few percent at a time, or the HTML structure is unnecessarily deeply nested.
Suggested performance improvements
I wouldn't say React isn't suitable for really large amounts of data, but you do need to watch out for some things yourself. React is only your vehicle to apply changes to the DOM. Putting a large amount of elements in the DOM is always going to lead to decreased performance, and is something you usually want to avoid.
In this case you could consider whether it's necessary to display all the table's data, which is probably the bulk of the operations. Using pagination would resolve the problem, and might even make it more user friendly.
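A rough sketch of what pagination could look like on the React side, assuming the rows sit in a plain array and each row has a stable id (the component and prop names are hypothetical; MyComplexDataRow stands in for whatever renders one row):
import React, { useState } from 'react';

function PaginatedTable({ rows, pageSize = 50 }) {
  const [page, setPage] = useState(0);
  // Only the current page's rows are mounted, so the DOM stays small
  // regardless of how large `rows` is.
  const visible = rows.slice(page * pageSize, (page + 1) * pageSize);
  return (
    <>
      <table>
        <tbody>
          {visible.map(row => <MyComplexDataRow key={row.id} row={row} />)}
        </tbody>
      </table>
      <button onClick={() => setPage(p => Math.max(0, p - 1))}>Previous</button>
      <button
        disabled={(page + 1) * pageSize >= rows.length}
        onClick={() => setPage(p => p + 1)}
      >
        Next
      </button>
    </>
  );
}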
If that's not an option, you could maybe use a library like react-lazyload to show/hide the items as they enter/exit the visible part of the table. To achieve this, use their unmountIfInvisible prop. You can then replace a complex data row with a single element that has the same height. The latter is important to preserve the scroll height.
<LazyLoad
  height={100}
  offset={100}
  unmountIfInvisible
  placeholder={<tr height={100} />}
>
  <MyComplexDataRow />
</LazyLoad>
This way your data table never consists of many more complex elements than can be seen in the viewport. You probably need to tune the offset a bit so that the rows are always ready in time as the table is being scrolled.

Largest Contentful Paint (LCP) on Lighthouse is a p tag (using Gatsby)

I don't know why my LCP would be a p tag, and I have no idea what I would do to reduce the size of it. Sometimes it gets up to 2.6s and gives a yellow rating (instead of green).
This is the p tag. All of those classes are bootstrap classes.
<p className="text-center mb-md-5 mt-0 mb-5">{aboutText}</p>
This is the variable aboutText
const aboutText = `Suddenly Magazine highlights the uniqueness of Saskatchewan, and its sudden rise in popularity and growth mentioned in publications such as USA Today and the New York Times.
Advertorials and Articles focus on its rare & particular tourism, its passionate sports, its character, and the prosperous opportunity for businesses and artists influenced by a Saskatchewan setting.
It is centred in Saskatoon, but contributors range from Lac La Ronge in the North, to provincial boundaries east and west, to the Outlaw Caves near the US border.`
The domain is https://suddenlysask.com
So why is your LCP a p tag?
It's only a p tag on mobile; here, take a look at the mobile size.
It's easy to see that the p tag takes up the most space here.
You could try to make the image bigger on mobile devices, so lighthouse will count the image as the LCP.
Another solution is to split up your p tag into 2 smaller p tags
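For instance (aboutTextPart1 and aboutTextPart2 being hypothetical halves of the current aboutText string; how you distribute the Bootstrap classes between them is up to you):
<p className="text-center mt-0">{aboutTextPart1}</p>
<p className="text-center mb-md-5 mb-5">{aboutTextPart2}</p>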
Another solution could be (which is not recommended) to cut your p tag slightly out of the viewport, because...
The size of the element reported for Largest Contentful Paint is
typically the size that's visible to the user within the viewport. If the
element extends outside of the viewport, or if any of the element is
clipped or has non-visible overflow, those portions do not count
toward the element's size.
I guess your bad result comes from this line here:
<link data-react-helmet="true" rel="preload" href="https://fonts.googleapis.com/css?family=Montserrat|Helvetica+Neue|Helvetica|Arial&display=swap">
Why does it take up to 2.6 sec?
Here is what I guess:
Loading the Google font can take time, and it isn't guaranteed to always load in exactly the same amount of time. When the font is loaded, it swaps your fonts, which means the p tag is repainted, and the p tag with the new font is treated as a new LCP.
For testing purposes you could try to remove the link and see if it affects your speed score and your LCP.
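If removing it confirms the font is the culprit, one option worth trying (a sketch only, assuming the link is injected via react-helmet, which the data-react-helmet attribute suggests) is to request the font with display=optional, so a late-arriving font no longer triggers a swap and a repaint of the paragraph:
import React from "react";
import { Helmet } from "react-helmet";

// Sketch: with display=optional the browser keeps the fallback font if the
// web font arrives late, so the p tag is not repainted with a new font.
const Fonts = () => (
  <Helmet>
    <link
      rel="stylesheet"
      href="https://fonts.googleapis.com/css?family=Montserrat|Helvetica+Neue|Helvetica|Arial&display=optional"
    />
  </Helmet>
);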
In the end, I would split the paragraph up into 2 smaller paragraphs so that the image becomes the LCP. I think it's the easiest solution.
People seem to completely misunderstand the purpose of the Largest Contentful Paint metric. It is designed to show you when the majority of the above the fold content is ready.
What item is the Largest Contentful Paint is not as important as when it occurs. What item is only useful in determining what could be slowing your page down.
It is the main metric in determining when 'above the fold' content is painted sufficiently that an end user would see the page as "complete" (this is perceived completeness, there can still be stuff loading lower down the page / in the background).
The suggestions of splitting the paragraph, wrapping it in a div, making it taller etc. serve no purpose; they just shift the LCP onto something else, (possibly) making your score look better but not actually fixing the problem.
What you want to do is optimise the initial content on the page.
For this you want to serve just the 'above the fold' HTML along with the CSS and JS required for above the fold content.
Then you serve everything else.
This article is a good introduction to critical JS and CSS https://www.smashingmagazine.com/2015/08/understanding-critical-css/
However in a nutshell inlining critical CSS and JS means that the CSS and JS required to render the initial content on the page should be inline within the HTML. Now I am guessing with something like Gatsby you would inline the critical JS that renders the above the fold content, above the fold CSS etc. but the principle is the same.
The key is that the above the fold content should all be served (except for non vector images) within the HTML so that there is no round-trip time waiting for CSS files, JS files etc.
So for clarity, instead of:
HTML requested, (200ms round trip to server)
HTML loaded and parsed, links to CSS and JS found required to render the initial page content
CSS and JS requested. (200ms round trip to server)
CSS and JS loaded
Enough to render the page.
you have:
HTML requested, (200ms round trip to server)
HTML loaded, all required CSS and JS inlined in HTML
Enough to render the page
This may not seem like a lot but that 200ms can make a huge difference to perceived speed.
Also, this is a simplified example; often a page requires 20 requests or more to render the above the fold content. Because browsers only allow a handful of simultaneous requests (typically 6-8 per origin), this means there could be up to 3 round trips of 200ms waiting for server responses.
Looking at your site, you will be getting a false reading for "critical request chains", as there is no HTML served in the initial page; it is all rendered via JS. This could be why you do not think there is a problem.
If you do the above you will get low FCP and LCP times, assuming your images are optimised.
Some Gatsby users have recently been complaining about a big drop in their Lighthouse score, and everyone agrees on the same thing: the Lighthouse score has decreased a lot due to a high LCP (Largest Contentful Paint) response time.
This is the result of the changes in the new Lighthouse version (v6), which introduces LCP as a new concept and metric. As you can see, the changelog was written in May, but depending on the user and on the site the changes arrive on different dates (I guess that depends on Google's servers and the time it takes the change to replicate through them).
According to the documentation:
Largest Contentful Paint (LCP) is a measurement of perceived loading
experience. It marks the point during page load when the primary–or
"largest"–content has loaded and is visible to the user. LCP is an
important complement to First Contentful Paint (FCP) which only
captures the very beginning of the loading experience. LCP provides a
signal to developers about how quickly a user is actually able to see
the content of a page. An LCP score below 2.5 seconds is considered
'Good.'
As you said, this metric is closely related to FCP and it's a complement of it: improving FCP will definitely improve the LCP score. According to the changelog:
FCP's weight has been reduced from 23% to 15%. Measuring only when the
first pixel is painted (FCP) didn't give us a complete picture.
Combining it with measuring when users are able to see what they most
likely care about (LCP) better reflects the loading experience.
You can follow this Gatsby GitHub thread to check how the users bypass this issue in other cases.
In your case, I would suggest:
Delete your <p> and check the score again to see the changes (just to be sure).
Wrapping your <p> inside a <div>.
Splitting your <p> in 2 or 3 small pieces to make them available for the LCP as well as FCP.
If none of the above work, I would try playing with the <p>'s height to see if it improves the score.
I guess that Gatsby (and also Google) are working on adjusting this new feature and fixing these bad score issues.

React Native Requiring the same image multiple times

I've got this sort of general question when it comes to requiring images in React Native. I've got an application that uses the same little red x and green check mark for form validation 6-8 times in a single form component. As it stands right now, I have a require() in every source prop where the image is used.
Is it best practice to require the image once at the top of the component as a variable and just use the variable 6-8 times instead of the calling require for each one of them?
Yes, requiring it once at the top is far superior.
But hey, you can go even further. If this is an icon you need to use across the app, it might be worth making a very, very simplistic component that renders this image. It's easier to reference <GreenCheck/> than to require an image and stick it into an Image tag repeatedly.
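A minimal sketch of both suggestions (the asset paths and component names are made up):
import React from 'react';
import { Image } from 'react-native';

// Resolved once when the module loads, then reused everywhere.
const greenCheck = require('./assets/green-check.png');
const redX = require('./assets/red-x.png');

export const GreenCheck = (props) => <Image source={greenCheck} {...props} />;
export const RedX = (props) => <Image source={redX} {...props} />;

// In the form component:
// {isValid ? <GreenCheck style={styles.icon} /> : <RedX style={styles.icon} />}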

How to make JSON loads faster with large data (on HTTP or WebPage)

When requesting the page (on HTTP or WebPage), it is very slow or even crashes unless I load my JSON with less data. I really need to solve this since sooner or later I will be using large amounts of data frequently. Here is my JSON data. --->>>
Notes:
1. The JSON loads only String and Integer.
2. I view my JSON in JSONView, more like a treeview, using a plugin for Google Chrome.
I am using Angular and Node.js. Thanks.
A quick summary of all the things that come to my mind:
I had a similar issue once. My solutions may require the UI to change.
Pagination
I doubt you can display that much data at one time, so the strategy should be dividing your data into small amounts and then only loading more when the client asks for it.
This way, the whole data set is no longer stored in RAM as it is currently. This is how forums work (only 20 topics at a time).
Just imagine if Stack Overflow made you load the whole history of questions on the main page: how many GB would your browser need just for that?
You can use pagination in a classic way (buttons with page numbers, like Google), or in an infinite scroll way, as you prefer.
For that you need to adapt your API and keep track, at every moment in your front end, of the index of the pages you have already loaded. There are plenty of examples in AngularJS.
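A minimal sketch of the server side, assuming an Express app on the Node side (the route, page size and the allItems data source are placeholders for whatever you actually use):
const express = require('express');
const app = express();

const PAGE_SIZE = 20;

app.get('/api/items', (req, res) => {
  const page = parseInt(req.query.page, 10) || 0;
  const start = page * PAGE_SIZE;
  // allItems stands in for your real data source (database query, file, ...).
  res.json({
    page,
    total: allItems.length,
    items: allItems.slice(start, start + PAGE_SIZE),
  });
});
The front end then requests /api/items?page=0, /api/items?page=1, and so on as the user pages or scrolls.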
Only show the beginning of the data
When you look at Facebook comments, you may have a "show more" button. In their case, maybe it's to not break the UI, but it can also be used to not overload data.
You can display only the main lines of your data (titles or similar) and add a button so the user can load more details if they want.
In your data model, the cost seems to be on the second level of "C". Just load data until this second level, and download the remaining part (for this object) only if the user asks for it.
Once again, no need to overload: your client's RAM will be thankful, and their mobile 3G connection too.
Optimize your data structure
If this is still not enough:
As StefanArya said in a comment, remove the "I" attribute; it is redundant with the JSON key, since you can use Object.keys() to get the key names.
You also may not need that much precision on your floats.
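A rough sketch of both ideas on a made-up object shape (the real JSON isn't shown here, so treat the field names as placeholders):
// Hypothetical input: { "A1": { "I": "A1", "value": 3.14159265358979 }, ... }
function slim(data) {
  const out = {};
  for (const key of Object.keys(data)) {
    out[key] = {
      // "I" is dropped: the key itself already carries that information.
      value: Math.round(data[key].value * 100) / 100, // keep 2 decimals
    };
  }
  return out;
}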
If I see any other ideas, I'll edit this post later.

how can I exclude an element from an Angular scope?

my premise was wrong. while AngularJS was certainly slowing things down, it was not due to the problem I describe below. however, it was flim's answer to my question - how to exclude an element from an Angular scope - that was able to prove this.
I'm building a site that generates graphs using d3+Raphael from AJAX-fetched data. this results in a LOT of SVG or VML elements in the DOM, depending on what type of chart the user chooses to render (pie has few, line and stacked bar have many, for example).
I'm running into a problem where entering text into text fields controlled by AngularJS brings Firefox to a crawl. I type a few characters, then wait 2-3 seconds for them to suddenly appear, then type a few more, etc. (Chrome seems to handle this a bit better.)
when there is no graph on the page (the user has not provided enough data for one to be generated), editing the contents of these text fields is fine. I assume AngularJS is having trouble when it tries to update the DOM and there are hundreds of SVG or VML elements it has to look through.
the graph, however, contains nothing that AngularJS need worry itself with. (there are, however, UI elements both before and after the graph that it DOES need to pay attention to.)
I can think of two solutions:
put the graph's DIV outside the AngularJS controller, and use CSS to position it where it's actually wanted
tell AngularJS - somehow - to never mind the graph's DIV; to skip over it when keeping the view and model in sync
the second option seems preferable to me, since it keeps the document layout sane/semantic. is there any way to do this? (or some, even-better solution I have not thought of?)
Have you tried ng-non-bindable? http://docs.angularjs.org/api/ng.directive:ngNonBindable
<ANY ng-non-bindable>
...
</ANY>
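Applied to this case (a sketch only, since the actual template isn't shown; the id is made up), that would mean marking the graph's container so Angular never compiles or binds anything inside it, while the UI elements before and after it stay under the controller as usual:
<div id="chart-container" ng-non-bindable></div>
d3/Raphael can then render its SVG/VML into that div without AngularJS walking through the generated elements.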
