Largest Contentful Paint (LCP) on Lighthouse is a p tag (using Gatsby) - reactjs

I don't know why my LCP would be a p tag, and I have no idea what I could do to reduce it. Sometimes it gets up to 2.6s and gives a yellow rating (instead of green).
This is the p tag (all of those classes are Bootstrap classes):
<p className="text-center mb-md-5 mt-0 mb-5">{aboutText}</p>
This is the aboutText variable:
const aboutText = `Suddenly Magazine highlights the uniqueness of Saskatchewan, and its sudden rise in popularity and growth mentioned in publications such as USA Today and the New York Times.
Advertorials and Articles focus on its rare & particular tourism, its passionate sports, its character, and the prosperous opportunity for businesses and artists influenced by a Saskatchewan setting.
It is centred in Saskatoon, but contributors range from Lac La Ronge in the North, to provincial boundaries east and west, to the Outlaw Caves near the US border.`
The domain is https://suddenlysask.com

So why is your LCP a p tag?
It's only a p tag on mobile; take a look at the mobile layout.
It's easy to see that the p tag takes up the most space there.
You could try to make the image bigger on mobile devices, so Lighthouse will count the image as the LCP.
Another solution is to split your p tag into two smaller p tags.
Another solution (which is not recommended) could be to push your p tag slightly out of the viewport, because...
The size of the element reported for Largest Contentful Paint is
typically the size that's visible to the user within the viewport. If the
element extends outside of the viewport, or if any of the element is
clipped or has non-visible overflow, those portions do not count
toward the element's size.
I guess your bad result comes from this line here:
<link data-react-helmet="true" rel="preload" href="https://fonts.googleapis.com/css?family=Montserrat|Helvetica+Neue|Helvetica|Arial&display=swap">
Why does it take up to 2.6 seconds?
Here is what I guess:
Loading the Google font can take time, and it is not guaranteed to take the same amount of time on every load. When the font arrives it is swapped in, which means the p tag is repainted with the new font, and that repainted p tag is treated as the new LCP.
For testing purposes you could remove the link and see whether it affects your speed score and your LCP.
In the end, I would split the paragraph into two smaller paragraphs so that the image becomes the LCP. I think it's the easiest solution.
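A minimal sketch of that split, reusing the classes from the question (where exactly you break the text is up to you):

  {/* Sketch only: same Bootstrap classes as the original, content split in two */}
  <p className="text-center mb-md-5 mt-0 mb-5">
    Suddenly Magazine highlights the uniqueness of Saskatchewan, and its sudden rise in
    popularity and growth mentioned in publications such as USA Today and the New York Times.
  </p>
  <p className="text-center mb-md-5 mt-0 mb-5">
    Advertorials and Articles focus on its rare & particular tourism, its passionate sports,
    its character, and the prosperous opportunity for businesses and artists influenced by a
    Saskatchewan setting. It is centred in Saskatoon, but contributors range from Lac La Ronge
    in the North, to provincial boundaries east and west, to the Outlaw Caves near the US border.
  </p>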

People seem to completely misunderstand the purpose of the Largest Contentful Paint metric. It is designed to show you when the majority of the above the fold content is ready.
Which item is the Largest Contentful Paint is not as important as when it occurs; knowing which item it is is only useful for determining what could be slowing your page down.
It is the main metric in determining when 'above the fold' content is painted sufficiently that an end user would see the page as "complete" (this is perceived completeness, there can still be stuff loading lower down the page / in the background).
The suggestions of splitting the paragraph, wrapping it in a div, making it taller, etc. serve no purpose; they just shift the LCP onto something else, (possibly) making your score look better but not actually fixing the problem.
What you want to do is optimise the initial content on the page.
For this you want to serve just the 'above the fold' HTML along with the CSS and JS required for above the fold content.
Then you serve everything else.
This article is a good introduction to critical JS and CSS https://www.smashingmagazine.com/2015/08/understanding-critical-css/
In a nutshell, inlining critical CSS and JS means that the CSS and JS required to render the initial content of the page should be inline within the HTML. I am guessing that with something like Gatsby you would inline the critical JS that renders the above the fold content, the above the fold CSS, etc., but the principle is the same.
The key is that the above the fold content should all be served (except for non vector images) within the HTML so that there is no round-trip time waiting for CSS files, JS files etc.
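For example (a sketch only, not necessarily how this site is built): with Gatsby you could inline an already-extracted critical CSS string at build time via the onRenderBody API in gatsby-ssr.js; the criticalCss string and its module here are assumptions.

  // gatsby-ssr.js -- sketch only; assumes `criticalCss` is a CSS string you have
  // already generated for the above the fold content (e.g. with a tool such as
  // `critical` or `penthouse`), exported from a hypothetical local module.
  import React from "react"
  import { criticalCss } from "./critical-css"

  export const onRenderBody = ({ setHeadComponents }) => {
    setHeadComponents([
      // Inline the critical CSS so the first paint needs no extra CSS round trip.
      <style key="critical-css" dangerouslySetInnerHTML={{ __html: criticalCss }} />,
    ])
  }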
So for clarity, instead of:

HTML requested (200ms round trip to server)
HTML loaded and parsed; the links to the CSS and JS required to render the initial page content are found
CSS and JS requested (200ms round trip to server)
CSS and JS loaded
Enough to render the page

you have:

HTML requested (200ms round trip to server)
HTML loaded, all required CSS and JS inlined in the HTML
Enough to render the page
This may not seem like a lot but that 200ms can make a huge difference to perceived speed.
Also, this is a simplified example; often a page requires 20 requests or more to render the above the fold content. Because browsers normally allow only around 6-8 simultaneous requests per origin, this can mean up to 3 round trips of 200ms waiting for server responses.
Looking at your site, you will be getting a false reading for "critical request chains", as there is no meaningful HTML served in the initial page; it is all rendered via JS. This could be why you do not think there is a problem.
If you do the above you will get low FCP and LCP times, assuming your images are optimised.

Some Gatsby users have recently been complaining about a big drop in their Lighthouse score, and everyone agrees on the same cause: the Lighthouse score has decreased a lot due to a high LCP (Largest Contentful Paint) time.
This is the result of the changes in the new Lighthouse version (v6), which introduces LCP as a new concept and metric. As you can see, the changelog was written in May, but depending on the user and on the site the changes arrive on different dates (I guess that depends on Google's servers and the time it takes the change to replicate through them).
According to the documentation:
Largest Contentful Paint (LCP) is a measurement of perceived loading
experience. It marks the point during page load when the primary–or
"largest"–content has loaded and is visible to the user. LCP is an
important complement to First Contentful Paint (FCP) which only
captures the very beginning of the loading experience. LCP provides a
signal to developers about how quickly a user is actually able to see
the content of a page. An LCP score below 2.5 seconds is considered
'Good.'
As you said, this metric is closely related to FCP and is a complement to it: improving FCP will definitely improve the LCP score. According to the changelog:
FCP's weight has been reduced from 23% to 15%. Measuring only when the
first pixel is painted (FCP) didn't give us a complete picture.
Combining it with measuring when users are able to see what they most
likely care about (LCP) better reflects the loading experience.
You can follow this Gatsby GitHub thread to check how the users bypass this issue in other cases.
In your case, I would suggest:
Delete your <p> and check the score again to see the changes (just to be sure).
Wrap your <p> inside a <div>.
Split your <p> into 2 or 3 smaller pieces so that they are available for the LCP as well as the FCP.
If none of the above work, I would try playing with the <p>'s height to see if it improves the score.
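To verify which element is actually being chosen as the LCP (and when), you can also log it in the browser console with the standard PerformanceObserver API; this is purely a diagnostic sketch:

  // Run in the browser console (or a useEffect); logs each LCP candidate as it is chosen.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('LCP candidate:', entry.element, 'at', entry.startTime, 'ms');
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });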
I guess that Gatsby (and also Google) are working on adjusting this new feature and fixing these bad score issues.

Related

React error in render/flush: RangeError in flush RangeError: Maximum call stack size exceeded

I've been in the process of rewriting an old AngularJS app in React (actually it's using preact, chosen by the developer who started this project initially).
This app handles large, deeply nested objects that get displayed via Material UI accordions and tables. The data is more WIDE than deep, but at any rate, React has trouble rendering it all without this RangeError.
I've been dancing with this issue for a while now and have avoided it by strategically managing accordions and not rendering data for accordions that are not open.
I've commonly seen this reported as a recursion issue, and I've carefully reviewed the code to confirm there is no recursion involved. Plenty of iteration, but no recursion.
Please note the stack trace: it's hitting this in the flush() function, which is not in our application code but in the Chrome debugger VM. I've set breakpoints, and it appears to be something related to DOM operations, as the objects being flushed are React elements. Here's a code snippet from the point where this error is hit:
function flush(commit) {
  const {
    rootId,
    unmountIds,
    operations,
    strings,
    stats
  } = commit;
  if (unmountIds.length === 0 && operations.length === 0) return;
  const msg = [rootId, ...flushTable(strings)];
  if (unmountIds.length > 0) {
    msg.push(MsgTypes.REMOVE_VNODE, unmountIds.length, ...unmountIds);
  }
  msg.push(...operations); // <-- error occurs here when operations is too long
And the stack trace logged when the error occurs:
VM12639:1240 Uncaught (in promise) RangeError: Maximum call stack size exceeded
at flush (<anonymous>:1240:8)
at Object.onCommit (<anonymous>:3409:19)
at o._commit.o.__c (<anonymous>:3678:15)
at QRet.Y.options.__c (index.js:76:17)
at Y (index.js:265:23)
at component.js:141:3
at Array.some (<anonymous>)
at m (component.js:220:9)
The error is occurring if operations is too large. Normally it will be anywhere from a dozen or so in length up to maybe 3000, depending on what's going on, but when I try to load our page displaying the wide/deep nested object this number is more like 150000, which apparently is choking the spread operator.
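For illustration only (the numbers are made up, and this is not the devtools code itself): spreading a huge array into a single call passes every element as a separate argument, which is what overflows the stack, while pushing in chunks avoids it.

  // Minimal illustration of the failure mode.
  const operations = new Array(150000).fill(0);
  const msg = [];

  // msg.push(...operations); // can throw "RangeError: Maximum call stack size exceeded"

  // Chunked alternative: no single call receives too many arguments.
  const CHUNK = 10000;
  for (let i = 0; i < operations.length; i += CHUNK) {
    msg.push(...operations.slice(i, i + CHUNK));
  }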
My sense is that this type of app is a challenge for React. I cannot think of another example of a React app that displays data the way we do with this. If anyone here has experience with this sort of dataset and can offer suggestions as to how to make this work, please share.
My guess is I'm going to need to somehow break this object up into smaller chunks that represent smaller updates, but I'm posting here in case there's something I can learn.
It looks similar to this open issue on the React repo, only it happens in a different place (also in dev tools). Might be worth reporting your issue there too. So probably React is otherwise "fine" rendering this amount of elements, though you'll inevitably get slow performance.
Likely the app is just displaying too much data, or doing it inefficiently.
but when I try to load our page displaying the wide/deep nested object this number is more like 150000, ...
150000 DOM operations is a really high amount. Either your app really does display a whole lot of elements, or the old AngularJS app had too many wrapper elements and these were preserved. Since you mention it concerns data tables, it's probably the first reason. In any case complex applications always need some platform specific optimization.
If you can give an idea about the intended use case, or even better, share (parts of) the code, that would help others to give more targeted advice. Are the 150k operations close to what would happen in real world usage, or is it just a very inflated number for stress testing? Do you see any other performance regressions, compared to the Angular app, with very complex objects? How many tables are on the screen at a time?
A few hundred visible elements on the screen already gets quite cramped. So where would all these extra operations be coming from? Either you're loading a super long page of which a user can only see a few percent at a time, or the HTML structure is unnecessarily deeply nested.
Suggested performance improvements
I wouldn't say React isn't suitable for really large amounts of data, but you do need to watch out for some things yourself. React is only your vehicle to apply changes to the DOM. Putting a large amount of elements in the DOM is always going to lead to decreased performance, and is something you usually want to avoid.
In this case you could consider whether it's necessary to display all the table's data, which is probably the bulk of the operations. Using pagination would resolve the problem, and might even make it more user friendly.
If that's not an option, you could use a library like react-lazyload to show/hide the items as they enter/exit the visible part of the table. To achieve this, use their unmountIfInvisible prop. You can then replace a complex data row with a single element that has the same height. The latter is important to preserve the scroll height.
<LazyLoad
  height={100}
  offset={100}
  unmountIfInvisible
  placeholder={<tr height={100} />}
>
  <MyComplexDataRow />
</LazyLoad>
This way your data table never consists of many more complex elements than can be seen in the viewport. You probably need to tune the offset a bit so that the content is always ready in time as it's being scrolled.

What is the real weight of npm packages?

I want to add an image carousel to a profile page, and allow the user to view the images in fullscreen mode thanks to a modal. It means - if I'm not mistaken - that the carousel will be imported twice: once in the profile component, and another one on top of it when the modal opens.
It is a heavy process, and I'm afraid of performance issues. I thought about creating my own carousel, but there are already many packages that perfectly deal with hand gestures on mobile, etc. However, their weight is sometimes dreadful.
For instance, the library react-awesome-slider, which seems perfect, weighs 666kb! However, on Bundlephobia it is supposed to weigh only 36.2kb, or 8.2kb gzipped. Who is right?
Will react-awesome-slider weigh 2*666kb, 2*36.2kb or 2*8.2kb in my final bundle? What is the maximum weight recommended to keep a high level of fluidity/performance?
This looks like premature optimization. Don't worry about your bundle size in this manner, about whether 3kb of gzip is a lot or not. Simply put, if you need that library, use it. You will realize that a library for summing two numbers might not be necessary long before any bundle size issue appears.
The bundle size you always care about is the gzipped value; that's what the client receives and has to download, and that takes time. But as you can imagine, downloading 30kb on your computer at home is not an issue. On an old device in the middle of the Blair Witch forest it might be.
Also, it gets sent to the client once per page/application, so if your modal uses it and your page uses it, it won't be included twice. Imagine having some library like Lodash, which is big and used (if I exaggerate) in every function; do you think it would be included 100 times?
Try to optimize the user experience in terms of UI/UX; that is what the user will quit your page over, not having to download 30kb of carousel, which he will not even notice!
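As a sketch of that point (the file names are invented; the import follows the package's documented usage):

  // Profile.jsx
  import AwesomeSlider from 'react-awesome-slider';

  // FullscreenModal.jsx
  import AwesomeSlider from 'react-awesome-slider';

  // A bundler (webpack, Rollup, ...) resolves both imports to the same module,
  // so the carousel code ends up in the output bundle only once, not twice.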

Polymer controlling number of DOM nodes on a page

On the YouTube Polymer channel there is a talk about performance. One of the key concepts they mention is to have as few nodes as possible when the application starts, loading the other nodes asynchronously.
Even with this concept, if I have a huge application I will eventually end up with everything loaded and many, many nodes on the page.
One of the key elements in Polymer is iron-pages. It can communicate with app-route and other app elements. It works by hiding inactive pages (DOM nodes) with display:none depending on state (most often the URL).
So here is the picture:
I have 20 different pages. After everything has loaded, I have all of these pages in my document with display:none and one page that is active.
Questions:
1) Is it true that the Polymer concept is to have all nodes (pages) hidden rather than dynamically rendered/removed depending on state?
2) If so, doesn't that affect browser performance compared to dynamically rendering/removing, like in Meteor?
3) What should I do with all the listeners/observers on pages that are inactive (display:none)? Should they be stopped when the web component becomes invisible?
I see that only dom-repeat, dom-if and iron-list remove and add content dynamically. All other components on the page, including forms and views, stay on the page forever.
1) definitely wrong
See for example https://elements.polymer-project.org/elements/app-route
2) it does, therefore this should be avoided
3) not applicable ;-)
Also, https://elements.polymer-project.org/elements/iron-list is built to render only what is visible, for exactly the same reason.
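If you do want inactive pages removed from the DOM instead of just hidden, one rough sketch (Polymer 1.x; _isActive and the page elements are hypothetical) is to stamp each page with dom-if and restamp:

  <!-- Sketch only: each page is stamped while active and torn down when not,
       instead of being kept in the DOM with display:none. -->
  <template is="dom-if" if="[[_isActive(routeData.page, 'home')]]" restamp>
    <my-home-page></my-home-page>
  </template>
  <template is="dom-if" if="[[_isActive(routeData.page, 'profile')]]" restamp>
    <my-profile-page></my-profile-page>
  </template>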

How can I exclude an element from an Angular scope?

My premise was wrong. While AngularJS was certainly slowing things down, it was not due to the problem I describe below. However, it was flim's answer to my question (how to exclude an element from an Angular scope) that allowed me to prove this.
I'm building a site that generates graphs using d3+Raphael from AJAX-fetched data. This results in a LOT of SVG or VML elements in the DOM, depending on what type of chart the user chooses to render (pie has few; line and stacked bar have many, for example).
I'm running into a problem where entering text into text fields controlled by AngularJS brings Firefox to a crawl. I type a few characters, then wait 2-3 seconds for them to suddenly appear, then type a few more, etc. (Chrome seems to handle this a bit better.)
When there is no graph on the page (the user has not provided enough data for one to be generated), editing the contents of these text fields is fine. I assume AngularJS is having trouble when it tries to update the DOM and there are hundreds of SVG or VML elements it has to look through.
The graph, however, contains nothing that AngularJS needs to worry itself with. (There are, however, UI elements both before and after the graph that it DOES need to pay attention to.)
I can think of two solutions:
Put the graph's DIV outside the AngularJS controller, and use CSS to position it where it's actually wanted.
Tell AngularJS, somehow, to ignore the graph's DIV; to skip over it when keeping the view and model in sync.
The second option seems preferable to me, since it keeps the document layout sane/semantic. Is there any way to do this? (Or some even better solution I have not thought of?)
Have you tried ng-non-bindable? http://docs.angularjs.org/api/ng.directive:ngNonBindable
<ANY ng-non-bindable>
...
</ANY>
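Applied to this case, a sketch (the chart-container id and controller name are made up) would keep the graph's DIV in place but outside Angular's binding:

  <div ng-controller="GraphPageCtrl">
    <input type="text" ng-model="chartTitle">
    <!-- Angular will not compile or bind anything inside this element,
         so d3/Raphael can fill it with thousands of SVG/VML nodes cheaply. -->
    <div id="chart-container" ng-non-bindable></div>
    <button ng-click="save()">Save</button>
  </div>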

Showing 1 million rows in a browser

Our utility has one single table with 10 million to 50 million rows. There may be a case where we need to show 50 million rows in a single HTML client page. To show the rows in the browser we use jQuery in the UI.
To retrieve rows we use Hibernate, and Spring for MVC. I am looking for a best practice for retrieving the rows and showing them in the UI. Should I retrieve a bulk of a thousand or two thousand rows at a time in Hibernate and buffer them to the web client, or is there a better practice?
The best practice is not to do this. It will explode the browser memory and rendering engine, and will take too much time to load.
Add a search form to your webapp, make the end user search for what he's interested in, and only display the first N search results, just like Google does.
Nobody is able to do anything meaningful with 50 million rows without searching anyway.
I think you should use scroll pagination (when the user reaches almost the bottom of the page, make an AJAX call and load more data).
A quick Google search will turn up an example & demo.
And if your data is tabular, you can use jqGrid.
Handling larger quantities of data in an application must be done via virtualization. While it's true that the user will be overwhelmed by millions of records, it's not exactly true that they can't do anything with them, nor that such quantities of data are unfathomable.
In practice, and depending on what you're doing, you'll notice that this limit crops up on you with just thousands of records, which frankly is very little data. Data-centric apps just need a different approach altogether if they are going to work in a browser and perform well.
The way we do this is quite simple but not all that straightforward.
It helps to decide on a fixed height, because you will need to know the max height of a scrollable container. You then render into this container the subset of records that can be visible at any given moment and position them accordingly (based on scroll events). There are more or less efficient ways of doing this.
The end goal remains the same. You basically need to cull everything which isn't directly visible on screen in such a way that the browser isn't paying the cost of memory and layout logic for the app to be responsive. This is common practice in game development, only the world that is visible right now on screen is ever present at any given moment. So that's what you need to do to get large quantities of stuff to behave well.
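A bare-bones sketch of that windowing idea in plain JavaScript (the element ids, the fixed rowHeight and the generated allRows data are all assumptions for illustration):

  // Only the rows intersecting the viewport are ever in the DOM.
  const rowHeight = 30; // fixed row height in px (assumption)
  const allRows = Array.from({ length: 1000000 }, (_, i) => ({ name: 'Row ' + i }));

  const container = document.getElementById('grid');      // scrollable element
  const spacer = document.getElementById('grid-spacer');   // gives the full scroll height
  const viewport = document.getElementById('grid-rows');   // holds the visible slice

  function render() {
    spacer.style.height = allRows.length * rowHeight + 'px';
    const first = Math.floor(container.scrollTop / rowHeight);
    const count = Math.ceil(container.clientHeight / rowHeight) + 1;
    const slice = allRows.slice(first, first + count);
    viewport.style.transform = 'translateY(' + first * rowHeight + 'px)';
    viewport.innerHTML = slice.map(r => '<div class="row">' + r.name + '</div>').join('');
  }

  container.addEventListener('scroll', render);
  render();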
In the context of a browser, anything which contributes to memory use and layout/render cost needs to go away if it isn't absolutely vital.
You can also stagger and smear recalculations so that you don't incur the cost of whatever is causing the app to degrade on every small update. The user can afford to wait 1 second, as long as the app remains responsive.
