On the Polymer YouTube channel there is a talk about performance. One of the key concepts they mention is to have as few nodes as possible when the application starts, loading the other nodes asynchronously.
Even so, if I have a huge application I will eventually end up with everything loaded and a great many nodes on the page.
One of the key elements in Polymer is iron-pages. It can communicate with app-route and other app elements, and it works by hiding inactive pages (DOM nodes) with display:none depending on some state (most often the URL).
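A minimal sketch of that wiring (the page element names here are placeholders, not from any real app):

<app-location route="{{route}}"></app-location>
<app-route route="{{route}}" pattern="/:page" data="{{routeData}}"></app-route>

<iron-pages selected="[[routeData.page]]" attr-for-selected="name" fallback-selection="home">
  <my-home name="home"></my-home>
  <my-contacts name="contacts"></my-contacts>
</iron-pages>

All of the page elements stay in the document; iron-pages only toggles which one is visible.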
So here is the picture:
I have 20 different pages. After everything has loaded I have all of these pages in my document with display:none, plus the one page that is active.
Questions:
1) Is it true that the Polymer concept is to keep all nodes (pages) hidden rather than dynamically rendering/removing them depending on state?
2) If yes, doesn't that hurt browser performance compared to dynamically rendering/removing, as in Meteor?
3) What should I do with all the listeners/observers on pages that are inactive (display:none)? Should they be stopped when the web component becomes invisible?
I see that only dom-repeat, dom-if and iron-list add and remove content dynamically. All other components on the page, including forms and views, stay on the page forever.
1) Definitely wrong.
See for example https://elements.polymer-project.org/elements/app-route
2) It does, which is exactly why this should be avoided.
3) Not applicable ;-)
Also https://elements.polymer-project.org/elements/iron-list is built to only render what is visible for the exact same purpose.
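A rough sketch of the lazy-loading pattern used by the Polymer Starter Kit and the shop app (Polymer 1.x syntax; the my-* element and file names are assumptions, not from your project): keep the page elements in the markup, but only import a page's definition when the route actually selects it.

// Inside the app shell element (Polymer 1.x).
observers: [
  '_pageChanged(routeData.page)'
],

_pageChanged: function (page) {
  // Lazily import the definition of the selected page; the unselected
  // <my-*> elements stay unupgraded (and therefore cheap) until needed.
  var resolvedPageUrl = this.resolveUrl('my-' + page + '.html');
  this.importHref(resolvedPageUrl, null, null, true);
}

If you really want inactive pages, together with their listeners and observers, out of the DOM entirely, wrapping each page in a <template is="dom-if" restamp> is the usual alternative to relying on display:none.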
I don't know why my LCP would be a p tag, and I have no idea what I could do to reduce its size. Sometimes it gets up to 2.6 s and gives a yellow rating (instead of green).
This is the p tag; all of those classes are Bootstrap classes.
<p className="text-center mb-md-5 mt-0 mb-5">{aboutText}</p>
This is the variable aboutText
const aboutText = `Suddenly Magazine highlights the uniqueness of Saskatchewan, and its sudden rise in popularity and growth mentioned in publications such as USA Today and the New York Times.
Advertorials and Articles focus on its rare & particular tourism, its passionate sports, its character, and the prosperous opportunity for businesses and artists influenced by a Saskatchewan setting.
It is centred in Saskatoon, but contributors range from Lac La Ronge in the North, to provincial boundaries east and west, to the Outlaw Caves near the US border.`
The domain is https://suddenlysask.com
So why is your LCP a p tag?
It's only a p tag on mobile; take a look at the mobile layout.
It's easy to see that the p tag takes up the most space there.
You could try to make the image bigger on mobile devices, so Lighthouse will count the image as the LCP.
Another solution is to split your p tag up into 2 smaller p tags.
Another solution (which is not recommended) could be to cut your p tag slightly out of the viewport, because:
The size of the element reported for Largest Contentful Paint is
typically the size that's visible to the user within the viewport. If the
element extends outside of the viewport, or if any of the element is
clipped or has non-visible overflow, those portions do not count
toward the element's size.
I guess your bad result comes from this line here:
<link data-react-helmet="true" rel="preload" href="https://fonts.googleapis.com/css?family=Montserrat|Helvetica+Neue|Helvetica|Arial&display=swap">
Why does it take up to 2.6 sec?
Here is what I guess:
Loading the Google font takes time, and it is not guaranteed to finish at exactly the same moment on every load. When the font arrives, your fonts are swapped, which means the p tag re-renders, and the p tag with the new font is then treated as a new LCP candidate.
For testing purposes you could remove the link and see whether it affects your speed score and your LCP.
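If removing it helps, one thing worth trying (a sketch, not a guaranteed fix) is to drop the preloaded CSS link and instead preconnect to the font hosts and load the stylesheet normally:

<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Montserrat&display=swap">

Helvetica Neue, Helvetica and Arial are system fonts, so only Montserrat actually needs to come from Google.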
In the end, I would split the paragraph up into 2 smaller paragraphs so that the image becomes the LCP. I think it's the easiest solution.
People seem to completely misunderstand the purpose of the Largest Contentful Paint metric. It is designed to show you when the majority of the above-the-fold content is ready.
Which item is the Largest Contentful Paint is not as important as when it occurs. Which item it is, is only useful for determining what could be slowing your page down.
It is the main metric in determining when 'above the fold' content is painted sufficiently that an end user would see the page as "complete" (this is perceived completeness, there can still be stuff loading lower down the page / in the background).
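If you want to see which element the browser picked and when it was painted, the standard PerformanceObserver API will tell you (run this in the page's console; nothing here is Lighthouse-specific):

new PerformanceObserver(function (list) {
  var entries = list.getEntries();
  var last = entries[entries.length - 1]; // the final candidate is the reported LCP
  console.log('LCP element:', last.element, 'at', last.startTime, 'ms');
}).observe({ type: 'largest-contentful-paint', buffered: true });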
The suggestions of splitting the paragraph, wrapping it in a div, making it taller etc. serve no purpose; they just shift the LCP onto something else, (possibly) making your score look better but not actually fixing the problem.
What you want to do is optimise the initial content on the page.
For this you want to serve just the 'above the fold' HTML along with the CSS and JS required for above the fold content.
Then you serve everything else.
This article is a good introduction to critical JS and CSS https://www.smashingmagazine.com/2015/08/understanding-critical-css/
In a nutshell, inlining critical CSS and JS means that the CSS and JS required to render the initial content of the page are placed inline within the HTML. I am guessing that with something like Gatsby you would inline the critical JS that renders the above-the-fold content, the above-the-fold CSS, etc., but the principle is the same.
The key is that the above-the-fold content should all be served within the HTML (except for non-vector images) so that there is no round-trip time spent waiting for CSS files, JS files etc.
So, for clarity, instead of:
HTML requested (200 ms round trip to the server)
HTML loaded and parsed; links found to the CSS and JS required to render the initial page content
CSS and JS requested (200 ms round trip to the server)
CSS and JS loaded
Enough to render the page.
you instead have:
HTML requested (200 ms round trip to the server)
HTML loaded, with all required CSS and JS inlined in the HTML
Enough to render the page.
This may not seem like a lot but that 200ms can make a huge difference to perceived speed.
Also, this is a simplified example; a page often requires 20 requests or more to render the above-the-fold content. With the usual limit of around 8 parallel requests at a time, that could mean up to 3 round trips of 200 ms waiting for server responses.
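To make the inlining idea concrete, here is a stripped-down sketch of what the served HTML could look like (the selectors and file names are made up):

<!doctype html>
<html>
<head>
  <style>
    /* Critical CSS: only the rules needed for above-the-fold content */
    .hero { min-height: 60vh; background: #222; color: #fff; }
    .hero h1 { font-size: 2rem; margin: 0; }
  </style>
  <!-- Everything else loads without blocking the first paint -->
  <link rel="preload" href="/css/main.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
  <script src="/js/app.js" defer></script>
</head>
<body>
  <div class="hero"><h1>Suddenly Magazine</h1></div>
  <!-- below-the-fold content renders once main.css and app.js arrive -->
</body>
</html>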
Looking at your site, you will be getting a false reading for "critical request chains" because there is no HTML served in the initial page; it is all rendered via JS. This could be why you do not think there is a problem.
If you do the above you will get low FCP and LCP times, assuming your images are optimised.
Some Gatsby users have recently been complaining about a big drop in their Lighthouse score, and everyone agrees on the same point: the Lighthouse score has decreased a lot due to a high LCP (Largest Contentful Paint) time.
This is the result of the changes in the new Lighthouse version (v6), which introduces LCP as a new concept and metric. As you can see, the changelog was written in May, but depending on the user and the site the changes arrive on different dates (I guess that depends on Google's servers and how long the change takes to replicate through them).
According to the documentation:
Largest Contentful Paint (LCP) is a measurement of perceived loading
experience. It marks the point during page load when the primary–or
"largest"–content has loaded and is visible to the user. LCP is an
important complement to First Contentful Paint (FCP) which only
captures the very beginning of the loading experience. LCP provides a
signal to developers about how quickly a user is actually able to see
the content of a page. An LCP score below 2.5 seconds is considered
'Good.'
As you said, this metric is closely related to FCP and is a complement to it: improving FCP will definitely improve the LCP score. According to the changelog:
FCP's weight has been reduced from 23% to 15%. Measuring only when the
first pixel is painted (FCP) didn't give us a complete picture.
Combining it with measuring when users are able to see what they most
likely care about (LCP) better reflects the loading experience.
You can follow this Gatsby GitHub thread to check how the users bypass this issue in other cases.
In your case, I would suggest:
Delete your <p> and check the score again to see the changes (just to be sure).
Wrap your <p> inside a <div>.
Split your <p> into 2 or 3 smaller pieces to make them candidates for the LCP as well as the FCP.
If none of the above works, I would try playing with the <p>'s height to see if it improves the score.
I guess that Gatsby (and also Google) are working on adjusting this new feature and fixing these bad-score issues.
I am trying to toggle a page between a button link and a list of elements when I click on the button link. I can see 2 popular approaches to this:
Using React Router: create two routes, one for the list button and another for the list elements, which opens when clicking on the list button element and receives the list elements as props via the router.
Passing the props to the list element but rendering it on the same page, behind a state toggle condition. Example -
this.state.showList===true ? :
Here I'm confused about which of these 2 approaches to pick. I preferred the 2nd approach, toggling between components based on state, as I'm less comfortable with the router. But if the number of components on that page increases, it becomes difficult to maintain using state values.
I would like to know the standard approach for medium-scale applications.
Any examples or pointers would be helpful, thanks.
I'm using both approaches and distinguish between encapsulated business logic and modifying logic. For example, you have persons and relations between those persons. So I have 2 routes, <personId>/details and <personId>/relations. With the first you see person details like name, address and telephone numbers, and with the second you see a network of related persons who are connected with this person. I decide between the 2 approaches by what I expect to see after reloading the page (pressing F5). When I'm on the details page I want to see the details again after a reload, and when I'm on the relations page I want to see the relation network again.
But there is some modifying logic, like adding a new telephone number. Normally you would do this by showing a modal dialog or expanding a form with some inputs and "OK"/"Cancel" buttons. If the dialog is open and I reload the page, I would expect to see the person details again. So I implement this dialog via {this.state.showAddTelephone && ...}
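A rough sketch of that split (component names and the form markup are hypothetical, and this assumes react-router-dom v6):

import React, { useState } from 'react';
import { BrowserRouter, Routes, Route, useParams } from 'react-router-dom';

function AddTelephoneForm({ onClose }) {
  return (
    <form onSubmit={(e) => { e.preventDefault(); onClose(); }}>
      <input name="number" placeholder="Telephone number" />
      <button type="submit">OK</button>
      <button type="button" onClick={onClose}>Cancel</button>
    </form>
  );
}

function PersonDetails() {
  const { personId } = useParams();
  // Modifying logic lives in local state: after a reload you land on the
  // details view again, not on the open dialog.
  const [showAddTelephone, setShowAddTelephone] = useState(false);
  return (
    <div>
      <h2>Details for person {personId}</h2>
      <button onClick={() => setShowAddTelephone(true)}>Add telephone</button>
      {showAddTelephone && <AddTelephoneForm onClose={() => setShowAddTelephone(false)} />}
    </div>
  );
}

function PersonRelations() {
  const { personId } = useParams();
  return <h2>Relation network for person {personId}</h2>;
}

// Encapsulated business logic gets its own routes, so F5 restores the view.
export default function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/:personId/details" element={<PersonDetails />} />
        <Route path="/:personId/relations" element={<PersonRelations />} />
      </Routes>
    </BrowserRouter>
  );
}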
In my opinion just go with react-router. Routes are used in multiple areas of the app. The logic of conditional rendering can introduce too much complexity if it's being used in a lot of places. It would be easier to just take the declarative approach of the router with no logic behind it.
While going through React I came up with the following doubts:
1) DOM operations are very expensive
But eventually React also does DOM manipulation; we cannot generate a view with the virtual DOM alone.
2) Collapsing the entire DOM and rebuilding it affects the user experience
I never do that. Mostly what I do is change the required child node (instead of collapsing the entire parent) or append HTML generated by JS.
Examples:
As a user scrolls down we append posts to the parent element; even React has to do it the same way. No one collapses the entire DOM for that.
When a user comments on a post, we append a div (the comment element's HTML) to that particular post's comment list. I think no one collapses the entire post (DOM) for that.
3) "diffing" algorithm to check changes:
Why do we need an algorithm to check for changes?
Example:
If I have 100 posts, then whenever a user clicks the edit button of a particular post I do it as follows:
$(".postEdit").click(function () {
  var post_id = $(this).data("postid");
  // do some Ajax and DOM manipulation on that particular post
});
I am telling the DOM to change a particular element, so how does diffing help?
Am I thinking about this the wrong way? If so, please correct me.
You are correct that people don't tend to collapse and recreate the entire DOM when they need to update a single element. Instead, the best practice is to rebuild just the element you need. But what if a user takes an action that actually impacts lots of elements? Say they star a post: you want to reflect that on the post itself and maybe in a stars count elsewhere on the page. As applications get complex enough, changing all of the parts of a page that need to change becomes really complicated and error-prone. Another developer might not be aware that there is a "stars count" elsewhere on the page and would forget to update it.
What if you could just rebuild the entire DOM? What if you wrote your render methods and stored your state such that, at any point, you could with certainty render the entire page from scratch? That removes a lot of these pain points, but obviously you lose all the performance gains you got from altering only the necessary parts of the DOM by hand.
React and the virtual DOM give you the best of both worlds. You get that optimized DOM updating, but as a developer you don't have to keep a mental model of the entire application and remember what you need to change for any given user or network input. The virtual DOM will also potentially apply these changes more efficiently than you would, by rebuilding only the elements that actually need it; by hand, at some point you might be rebuilding more than you should "just in case".
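A small sketch of the "render everything from state" idea (the data and component are made up): starring a post updates both the post's button and the global star count, because both are derived from the same state rather than patched by hand.

import React, { useState } from 'react';

function Posts() {
  const [posts, setPosts] = useState([
    { id: 1, title: 'First post', starred: false },
    { id: 2, title: 'Second post', starred: true },
  ]);

  function toggleStar(id) {
    setPosts(posts.map(p => (p.id === id ? { ...p, starred: !p.starred } : p)));
  }

  const starCount = posts.filter(p => p.starred).length; // derived, never updated manually

  return (
    <div>
      <header>Total stars: {starCount}</header>
      {posts.map(p => (
        <article key={p.id}>
          {p.title}
          <button onClick={() => toggleStar(p.id)}>{p.starred ? 'Unstar' : 'Star'}</button>
        </article>
      ))}
    </div>
  );
}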
Hope this sort of makes sense!
This discussion can be very helpful for understanding the virtual DOM and its implementation in different UI frameworks.
Why is React's concept of Virtual DOM said to be more performant than dirty model checking?
There are a couple of other links as well that explain it well.
https://auth0.com/blog/face-off-virtual-dom-vs-incremental-dom-vs-glimmer/
http://teropa.info/blog/2015/03/02/change-and-its-detection-in-javascript-frameworks.html
I was wondering why, even in a simple AngularJS SPA, there seems to be a DOM leak. I may be misinterpreting this, but the way I read it, allocated DOM elements are not being released properly.
The procedure to reproduce is as follows:
navigate to the page shown in the screenshot with the simple AngularJS application
turn on timeline recording in developer tools
force garbage collection
add an item, and then remove it
force garbage collection
repeat the last two steps at least 3 times
In the screenshot you can see that after you add an item and remove it, there are two more DOM elements left after garbage collection (a jump from 502 to 504 DOM elements).
I was hoping someone could shed some light on this before I dig deeper into what is happening. The reason for this test is a more complicated AngularJS SPA that I am working on, which also seems to leak memory.
I'm doing a similar thing right now. I've noticed a couple of things:
1) Look at any usage of $(window).fn() -- where fn is any of the functions on the window object. If you're doing that more than once, you're adding multiple event handlers to the global object, which causes a memory leak.
2) $rootScope.$watch() -- similarly, if you're doing this more than once (say, when loading a directive) then you're adding multiple handlers to the global object.
3) In my tests (where I go back and forth between two pages) it seems that Chrome consumes a large amount of memory (in my case, almost 1 GB) before garbage collection kicks in. Maybe when you click "collect garbage" Chrome is not actually doing a GC? Or it GCs JavaScript but not DOM elements?
4) If you add an event handler to a DOM element and then remove the element from the DOM, the handlers never get GC'ed (see the cleanup sketch below).
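A rough sketch of the cleanup pattern implied by points 1, 2 and 4 (the module and directive names are made up):

angular.module('app').directive('myWidget', function ($rootScope) {
  return {
    link: function (scope, element) {
      var onResize = function () { /* ... */ };
      $(window).on('resize', onResize); // handler on the global object (point 1)

      var unwatch = $rootScope.$watch('someValue', function () { /* ... */ }); // point 2

      scope.$on('$destroy', function () {
        $(window).off('resize', onResize); // remove the global handler
        unwatch();                         // deregister the $rootScope watcher
        element.off();                     // drop handlers bound to this element (point 4)
      });
    }
  };
});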
My premise was wrong. While AngularJS was certainly slowing things down, it was not due to the problem I describe below. However, it was flim's answer to my question - how to exclude an element from an Angular scope - that made it possible to prove this.
I'm building a site that generates graphs using d3 + Raphael from AJAX-fetched data. This results in a LOT of SVG or VML elements in the DOM, depending on what type of chart the user chooses to render (pie has few; line and stacked bar have many, for example).
I'm running into a problem where entering text into text fields controlled by AngularJS brings Firefox to a crawl. I type a few characters, then wait 2-3 seconds for them to suddenly appear, then type a few more, etc. (Chrome seems to handle this a bit better.)
When there is no graph on the page (the user has not provided enough data for one to be generated), editing the contents of these text fields is fine. I assume AngularJS has trouble when it tries to update the DOM and there are hundreds of SVG or VML elements it has to look through.
The graph, however, contains nothing that AngularJS needs to worry about. (There are, however, UI elements both before and after the graph that it DOES need to pay attention to.)
I can think of two solutions:
put the graph's DIV outside the AngularJS controller, and use CSS to position it where it's actually wanted
tell AngularJS - somehow - to ignore the graph's DIV; to skip over it when keeping the view and model in sync
The second option seems preferable to me, since it keeps the document layout sane/semantic. Is there any way to do this? (Or some even better solution I have not thought of?)
Have you tried ng-non-bindable? http://docs.angularjs.org/api/ng.directive:ngNonBindable
<ANY ng-non-bindable>
...
</ANY>
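A concrete version of that for the graph case (the container id is made up):

<div id="chart-container" ng-non-bindable>
  <!-- d3/Raphael injects its SVG/VML nodes here; Angular skips compiling
       and binding anything inside this element -->
</div>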