My problem is that I can't understand how server-side-rendered single-page application frameworks like Next.js receive prerendered, full HTML on the front end without having to rewrite the entire page. For example, the Next.js website states the following:
By default, Next.js pre-renders every page. This means that Next.js generates HTML for each page in advance, instead of having it all done by client-side JavaScript. Pre-rendering can result in better performance and SEO.
Each generated HTML is associated with minimal JavaScript code necessary for that page. When a page is loaded by the browser, its JavaScript code runs and makes the page fully interactive. (This process is called hydration.)
I understand how this improves the responsiveness of an SPA on the first page load. But after that first load, what makes server-side rendering compatible with SPAs? I think my confusion arises from a fundamental misunderstanding that I can't pin down, so here are some further questions that might help you spot it:
Do SSR SPAs always respond with full prerendered HTML, or only for first page loads?
If the former is true, then on subsequent responses, how does the client efficiently render only the difference rather than rewriting the whole page?
Otherwise, if the latter is true, then how does an SSR SPA backend tell when it's responding to a first request, when the response should be the whole HTML, versus a subsequent request, when the bulk of the page is already there and all that needs to be sent is some relatively minimal information?
What am I misunderstanding about what makes SSR compatible with SPAs?
Many thanks in advance to everyone who tackles this question!
Welcome to Stack Overflow :)
Usually SSR is used for the initial rendering of the page, so to answer your first question: only for the first page load.
This is done so the SPA will be more SEO-friendly (there can also be some performance improvements, but that's usually a secondary goal) and search-engine bots will be able to parse pages without needing JS.
SSR usually has several important steps:
Server render
Sending of rendered data to browser
Hydration. Hydration is a React (since we're talking about Next.js here) step that binds the server-rendered HTML to the React components on the front end, so it basically attaches the server-rendered DOM to the virtual DOM.
After the hydration step you basically have a fully functional, normal SPA, which has its own routing and is able to fetch data on its own.
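To make that more concrete, here is a minimal sketch of what hydration looks like in plain React 18 (Next.js wires this up for you; the App component and the #root element here are just assumptions for illustration):

    // Client entry point — a rough sketch, not Next.js's actual internals.
    import { hydrateRoot } from 'react-dom/client';
    import App from './App'; // hypothetical root component

    // The server already sent the rendered HTML inside #root.
    // hydrateRoot attaches event listeners and React's virtual DOM to that
    // existing markup instead of throwing it away and re-rendering from scratch.
    hydrateRoot(document.getElementById('root'), <App />);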
Usually you have different endpoints on the BE for fetching the data and for rendering the page. So the rendering process on the BE is somewhat similar to what you have on the FE: your application backend fetches the data from separate endpoints, applies all of the logic and renders the app.
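For example, with Next.js's pages router the per-page server-side data fetching can look roughly like this (the page path and API URL are made up for illustration):

    // pages/posts/[id].js — a hypothetical page
    export async function getServerSideProps({ params }) {
      // Runs only on the server: fetch from a separate data endpoint...
      const res = await fetch(`https://api.example.com/posts/${params.id}`);
      const post = await res.json();
      // ...and pass the result to the page component as props.
      return { props: { post } };
    }

    // The same component renders to HTML on the server and hydrates on the client.
    export default function Post({ post }) {
      return (
        <article>
          <h1>{post.title}</h1>
          <p>{post.body}</p>
        </article>
      );
    }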
By the way, to ensure that SSR works properly there is a principle called 'isomorphic code': if you're using a library for data fetching, it has to support both Node.js and browser APIs. That's why, for example, you'd use Next.js's own router in a Next.js application: it works on both the FE and the BE, unlike react-router, which would require some additional steps to achieve that.
Related
Plain React apps are called SPAs because they have only one HTML page, index.html. But that is not the case for Next.js. So can we call a Next.js app a single-page application?
Good question.
Normally we don't classify an engine as SPA or not. For instance, React can do SPA, but it can do non-SPA work as well. The same applies to Next.js.
Just to follow your terminology: Next.js by default is not SPA-based, due to its hybrid nature, because it publishes each page under /pages as a separate entry point. Of course, if you only have one page, index.js, then technically it's an SPA again. I guess it depends on how you structure your pages.
Yes, we can call a Next.js app an SPA even if you create many pages, but to keep it that way you should use client-side transitions (no plain <a> tags).
First of all, SPA = Single Page Application
I assume the question intends SPA vs. MPA (Multi-Page Application).
The answer is that Next.js can do both, depending on whether you use next/link:
https://nextjs.org/docs/api-reference/next/link
That will perform a client-side transition without reloading the page from the server, assuming of course that your page is static and does not use getServerSideProps(). You can check that by watching the Network tab in the Chrome dev tools.
How does that work? On first page load, all pages get loaded as JS inside the virtual DOM, and any page can then be injected into your browser's DOM from the client-side JS, not from the server.
To give it MPA behavior, you can also link to a relative page with a plain HTML anchor tag <a>; that will reload the page from the server, as the <a> tag uses no JS for that. What could be the advantage of that? Keeping the code free of Next-specific imports, maybe, and if you mess up the React virtual DOM (which you shouldn't) you're sure to get a fresh page again. A disadvantage is that you loaded all the content as JS initially anyway, so reloading with anchors is a waste of resources and load time.
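A small sketch of the two navigation styles side by side (the page path and component name are made up; in older Next.js versions you would also nest an <a> inside the <Link>):

    import Link from 'next/link';

    export default function Nav() {
      return (
        <nav>
          {/* Client-side transition: no full page reload from the server */}
          <Link href="/blog/first-post">First post (SPA-style)</Link>
          {/* Plain anchor: the browser requests the page from the server again */}
          <a href="/blog/first-post">First post (full reload)</a>
        </nav>
      );
    }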
Note that using a PWA (Progressive Web App) is also an option to optimise the user experience, and it is also supported by Next.js;
see
https://nextjs.org/docs/faq
and
https://github.com/vercel/next.js/tree/canary/examples/progressive-web-app
With client-side routing, the server doesn't build the entire page to serve to the client; instead, data is downloaded by the web app "on demand".
So, in this scenario, if you look at the HTML source you could see something like this:
<body>
<div class="blah">{{content}}</div>
</body>
I know that a prerender strategy can be used, and I think the Google crawler is probably smart enough to see the content anyway, but the question is:
is this approach good from an SEO point of view?
With a prerender strategy, the server needs to generate the page with its content. Could that be a penalty for the page-speed factor?
Thank you in advance to everyone.
As you've mentioned, Google is pretty smart and, from recent experience, is able to index some of your site's static content even when you use client-side rendering. However, when it comes to client-side routing it's not quite there yet, so if you need SEO, server-side rendering frameworks like Nuxt.js should be your go-to.
but data is downloaded by the web app "on demand"
The same thing applies when you do asynchronous fetches (downloading on demand, as you've described it). Imagine the data inside your {{ content }} was coming from an external API; as far as I know, no crawler at this time is able to deal with this, so your content area would just be empty. So generally speaking, when SEO is a requirement, so is server-side rendering.
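For instance, if {{ content }} is filled in by client-side code like the sketch below (the endpoint is hypothetical), a crawler that doesn't execute JS only ever sees the empty div from your example:

    // Runs in the browser only, after the HTML shell has already been served.
    fetch('https://api.example.com/posts/1') // hypothetical endpoint
      .then((res) => res.json())
      .then((post) => {
        // Without JS execution this line never runs, so the div stays empty.
        document.querySelector('.blah').textContent = post.body;
      });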
With a prerender strategy, the server needs to generate the page with its content.
Could that be a penalty for the page-speed factor?
Yes and no. Load times will certainly go up a little, but when using client-side rendering, the client needs to render the page after loading it, so this time just gets shifted to your server. The same applies to asynchronous data fetching: the delivery of the site will take longer, but the data it has to fetch will already be there, so the client won't have to do it (SSR frameworks allow you to fetch data and render it before sending the site to the client). If you add everything up, there shouldn't be a huge difference in time from sending the request to actually seeing the rendered page in your browser.
I am starting a new project, and one of the requirements is that the app should potentially work when JavaScript is turned off.
I have been using React for a while, but solely as an SPA JavaScript app that runs in the browser.
I don't think I quite understand how a React SSR application works.
Next.js is intriguing. Will an SSR application work without JavaScript?
If not, can someone explain how an SSR application works, or what problem it is solving?
SSR means the server will render the initial state of the application before sending it to the client on the initial run. This makes it so that:
Crawlers see actual content and can index your site based on it.
In most cases it speeds up your application, since it puts the responsibility for the initial render on the server, which is usually faster than most consumer machines.
An SSR app cannot work without JavaScript. It'll render initially, but subsequent executions and state changes will not be possible.
I suppose if your application doesn't require any subsequent state changes or provide further functionality after the first load, then it wouldn't need JavaScript.
In my web app, I don't want to use hash-based routing, I don't want to see any # appear in my URLs. I want to use RESTful URLs, e.g., http://www.example.com/blog/id.
react-router can handle client-side routing quite well, but if a user hits enter in the browser address bar or refreshes the page, the request is sent to the web server, and then the web server has to understand the URL and handle the routing.
An isomorphic app is a good solution to this situation, since it can render any page on both the client side and the server side. There are actually many React starter-kit projects on GitHub which claim to be isomorphic.
In my opinion, isomorphic looks beautiful, but it's expensive to write: you need to make your React components render successfully on both the client side and the server side, which takes a great deal of effort from developers.
So here is my question: can I make just the react-router part isomorphic, rather than the entire codebase?
Yes. You can use react-router for a purely front-end (non-isomorphic) app with HTML5 history.
The routing is determined client side, so react-router will spit out the expected page.
However, whilst you don't need to write any server-side code, you will need to configure the web server to point your routes to the correct place. This usually means pointing every single request (or every single valid request) to the same HTML file or entry point. Exactly how you do this depends on what you're using to serve your pages: Express, Apache etc.
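For example, with Express that configuration is just a static middleware plus a catch-all route. A minimal sketch, assuming your client bundle lives in a build/ folder with an index.html entry point:

    const express = require('express');
    const path = require('path');

    const app = express();

    // Serve the static assets (JS bundles, CSS, images) first.
    app.use(express.static(path.join(__dirname, 'build')));

    // Send every other request to index.html so react-router can
    // resolve the route on the client.
    app.get('*', (req, res) => {
      res.sendFile(path.join(__dirname, 'build', 'index.html'));
    });

    app.listen(3000);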
I hope that makes sense.
Pardon my English; it is a second language. The whole point of an isomorphic app, as opposed to a regular client-side SPA, is that the client doesn't have to download the whole JS file initially, which would result in a really slow initial load time.
I've been trying to teach myself server-side rendered React, and after watching countless videos around the concept and following countless tutorials on the actual implementation, I still can't get my head around this (at least this is how I understand it):
Despite the server conditionally rendering pages and sending props to the client on URL change, the client side still uses a router that includes all the entry points for the app (by requiring all of them, and then loading the file based on the URL location). Doesn't that mean all the files are included in the main client JS bundle anyway, since they've already been required by the client-side router? Doesn't that defeat the whole purpose of server-rendered React? Or am I thinking about this the wrong way?
In short, how does an isomorphic React app really work with a client-side router that includes (by requiring them) all of the app's entry points?
I'm not sure that "the whole point of an isomorphic app [...] is that the client doesn't have to download the whole JS file initially" is necessarily true. I think the primary reason people do this is SEO and improving perceived load time. You still get the benefit of showing users the page before they have loaded all the JavaScript (yes, they still have to load all the JS, but that's OK because they already have most or all of the content). The app upgrades to an SPA transparently, providing a seamless experience for the user.
That said, you can implement a system where you don't have to load all the JS at once with something like webpack's code splitting. There's even a simple React Router example that does this.
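A rough sketch of route-based code splitting with react-router-dom v6 and React.lazy (purely client-side; the component names and routes are made up), so each page's chunk is only downloaded when its route is first visited:

    import React, { Suspense, lazy } from 'react';
    import { BrowserRouter, Routes, Route } from 'react-router-dom';

    // Each lazy() dynamic import becomes its own bundle chunk under webpack.
    const Home = lazy(() => import('./pages/Home'));
    const Blog = lazy(() => import('./pages/Blog'));

    export default function App() {
      return (
        <BrowserRouter>
          <Suspense fallback={<p>Loading...</p>}>
            <Routes>
              <Route path="/" element={<Home />} />
              <Route path="/blog/:id" element={<Blog />} />
            </Routes>
          </Suspense>
        </BrowserRouter>
      );
    }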