What page version does SvelteKit return if SSR and Prerender are enabled for the page? - sveltekit

What page version will be returned to the client if both the SSR and Prerender options are enabled for a page in SvelteKit?
Is there any reason to have both SSR and Prerender enabled for a route?

Yes, it's perfectly normal to have both enabled.
prerender - whether the page should be generated at build time.
ssr - whether the page component's HTML is in the generated page
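Both are page options exported from the route's +page.js (or +page.server.js) file. A minimal sketch, with the route path made up for illustration:

// src/routes/example/+page.js  (hypothetical route)
// Build this page to static HTML at build time.
export const prerender = true;
// Also render the component's HTML on the server (this is the default).
export const ssr = true;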
So if you have the following +page.svelte...
<h1>Hello world!</h1>
... setting export const prerender = true will generate an index.html file at build time. If SSR is enabled (which is the default), then that index.html will contain the <h1> tag present in the page - the +page.svelte component was rendered on the server and the output was placed into the prerendered index.html file. If SSR is disabled with export const ssr = false, then the generated HTML will be an empty shell and the <h1> will be rendered on the page via client-side JavaScript.
The shell might look something like this. Note that there is no <h1> - when the user visits the page, they will have to wait for the client-side JavaScript to run and insert the <h1> into the page.
<head>a bunch of link tags showing which stylesheets to load</head>
<body>
  <div></div>
  <script src="/wherever/the/built/scripts/are"></script>
</body>
With prerender enabled, the user will be served a static HTML page when they visit the website, instead of needing to run code on a server to generate the page. However, the content of that page will differ depending on whether ssr was enabled at build time.
In general, SvelteKit recommends keeping SSR enabled:
Server-side rendering (SSR) is the generation of the page contents on the server. SSR is generally preferred for SEO. While some search engines can index content that is dynamically generated on the client side, it may take longer even in these cases. It also tends to improve perceived performance and makes your app accessible to users if JavaScript fails or is disabled (which happens more often than you probably think).

Related

Is Next.js app an SPA or single page application?

Plain React apps are called SPAs because they have only one HTML page, which is index.html. But that is not the case for Next.js. So can we call a Next.js app a single-page application?
Good question.
Normally we don't call an engine SPA or not. For instance, React can do SPA, but it can do non-SPA work as well. The same applies to NextJS.
Just to follow your terminology: NextJS by default is not SPA-based due to its hybrid nature, because it publishes each page under /pages as a separate entry point. Of course, if you only have one page, index.js, then technically it's an SPA again. I guess it depends on how you structure your pages.
Yes, we can call a Next.js app an SPA even if you create many pages, but to keep it behaving as one you should use client-side transitions (no plain <a> tags).
First of all, SPA = Single Page Application
I assume the question intends SPA vs MPA (Multi Page Application).
The answer is that Next.js can do both, depending on whether you use next/link:
https://nextjs.org/docs/api-reference/next/link
That will perform a client-side transition without reloading the page from the server, assuming of course that your page is static and does not use getServerSideProps(). You can check that by watching the Network tab in Chrome DevTools.
How does that work? On the first page load, the JavaScript for the pages gets loaded into the virtual DOM, and any page can then be injected into your browser DOM from the client-side JS, not from the server.
To give it MPA behavior, you can also link a relative page with a simple HTML anchor tag <a>; that will reload the page from the server, as the <a> tag uses no JS for that. What could be the advantage of that? Keeping your code clean from Next maybe, and if you mess up the React vdom (which you shouldn't) you're sure to get a fresh page again. A disadvantage is that you loaded all the content in JS initially anyway, so reloading with anchors is a waste of resources and load time.
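To make the contrast concrete, here is a small sketch of both linking styles (the /about route is made up; in older Next.js versions the text inside <Link> must be wrapped in an <a> child):

import Link from 'next/link';

export default function Nav() {
  return (
    <nav>
      {/* Client-side transition: no full page reload from the server */}
      <Link href="/about">About (SPA-style)</Link>
      {/* Plain anchor: full reload from the server (MPA-style) */}
      <a href="/about">About (MPA-style)</a>
    </nav>
  );
}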
Note that using a PWA (Progressive Web App) is also an option to optimise the user experience, and it is also supported by Next.js.
see
https://nextjs.org/docs/faq
and
https://github.com/vercel/next.js/tree/canary/examples/progressive-web-app

Next.js - Client Side Navigation vs. changes in html

I'm currently working through the next.js tutorial, but I'm struggling to understand the following:
The tutorial is telling me here that clicking a <Link> element won't trigger a server request and will instead do "client-side navigation", i.e. adjust the page content with JS:
The Link component enables client-side navigation between two pages in the same Next.js app. Client-side navigation means that the page transition happens using JavaScript
Three questions right away:
This sounds like regular SPA React routing without Next.js etc. to me. But it isn't, right?
The <Link> still works if I disable JavaScript in Chrome dev tools. This shows that the transition is actually not happening with JS. Is this a contradiction of their statement?
When I right-click the page and click "view source" (in Chrome) I get different HTML before and after clicking the <Link>. (Contrary to the "regular" React behavior.) How can this be considered "client-side navigation"?
Going further in the tutorial, they tell me here:
By default, Next.js pre-renders every page. This means that Next.js generates HTML for each page in advance, instead of having it all done by client-side JavaScript
This statement sounds like a contradiction of the other one quoted above. Can you help me clarify this? When I click the <Link>, what exactly is happening? Is a new HTML file loaded? If so, how can this happen "client side"? Thank you!
Here is how I understand it:
The approach Next.js takes for client-side navigation is a mixture of SPA-style navigation and traditional navigation:
In SPA-style navigation, a "link" is a component with some JS logic. It does not have an <a> element. When you click on it, some JS runs to render the new page. If you disable JS, then you can't navigate to a new page;
In traditional navigation, a link is really an <a> element. If you click on it, the browser will discard the current page and load the new page entirely. If you disable JS, you can still navigate to the new page;
In Next.js navigation, a "link" is a component with some JS logic, but it also has this <a> element under the hood. When you click on it, some JS will run to render the new page and prevent the default <a> navigation. So even if you disable JS, you'll still be able to navigate in a traditional fashion through the <a> element. The pre-rendered page will be fetched and loaded. This is really useful in terms of SEO (as crawlers typically don't have JS enabled), and is the main problem Next.js wants to solve.
In short, when JS is enabled, Next.js navigation behaves just like in SPAs; when JS is disabled, it behaves just like in traditional websites.
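A rough sketch of that pattern (this is not Next.js's actual implementation; the navigate function is a placeholder for a client-side router):

function EnhancedLink({ href, children, navigate }) {
  function handleClick(event) {
    // With JS enabled: stop the browser's full-page navigation...
    event.preventDefault();
    // ...and render the new page on the client instead.
    navigate(href);
  }
  // With JS disabled, onClick never runs, so the plain <a> falls back
  // to a traditional server round trip to the pre-rendered page.
  return <a href={href} onClick={handleClick}>{children}</a>;
}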
Update:
I made a video to further demonstrate the concept: https://www.youtube.com/watch?v=D3wVDE9GGVE
There!
I think the main concept you should get familiar with in the Next.js framework is Server-Side Rendering, which basically means that the contents of a page are pre-processed on the server, which then ships the already-rendered files to the browser, saving resources on the client side of the application.
By default, all of your Next.js pages are pre-rendered when you use the build command.
Next.js also has its own <Link /> component, which uses the next/router module to navigate between the pages.
By default, every <Link /> component in a page tells Next.js to pre-fetch that page and its resources (which will also be rendered by the server on the initial request from the browser) so it is "instantly available" when you click it.
I think the main difference from a regular SPA is that in a regular SPA, when you change pages, it takes longer because the pages won't already be available to you.
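For example, prefetching is automatic in production builds and can be disabled per link with the prefetch prop (the routes below are made up):

import Link from 'next/link';

export default function Menu() {
  return (
    <>
      {/* Prefetched automatically when it enters the viewport (production only) */}
      <Link href="/events">Events</Link>
      {/* Opt out of prefetching for this particular link */}
      <Link href="/archive" prefetch={false}>Archive</Link>
    </>
  );
}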
AFAIK:
This sounds like regular SPA react routing without next.js etc. to me. But it isn't, right?
Yes. It's like regular SPA React routing. When the user clicks on a Link, the navigation happens in JS, just like in CRA.
The <Link> still works if I disable JavaScript in Chrome dev tools. This shows that the transition is actually not happening with JS. Is this a contradiction of their statement?
This is happening because, if you disable JS, what you get is a simple <a href="..." />. I.e., the browser will still treat it as an anchor element, and click events will work as expected because it's just our old HTML.
When I right-click the page and click "view source" (in Chrome) I get different HTML before and after clicking the <Link>. (Contrary to the "regular" React behavior.) How can this be considered "client-side navigation"?
Your answer lies in the next statement:
By default, Next.js pre-renders every page. This means that Next.js generates HTML for each page in advance, instead of having it all done by client-side JavaScript
I.e., when you "View source", the browser hits the server and shows you the HTML it returns. Since Next.js prerenders, it will be different for different pages. This has nothing to do with client-side navigation. This behaviour of showing the server response in "view source" can vary from browser to browser, though.

Why is og:image not rendered with React?

I'm trying to have a thumbnail when sending the website link in a Facebook message. I started using s-yadav/react-meta-tags and followed the tutorial, but the image is not present after the link is sent.
Link: https://ticket-44-portal-staging.herokuapp.com/buyTicket?status=1959165690885328
I applied the following code in a React component:
return (
  <div className="container">
    <MetaTags>
      <title>{searchedEventTime.name}</title>
      <meta name="description" content={searchedEventTime.description} />
      <meta property="og:title" content={searchedEventTime.name} />
      <meta
        property="og:image"
        content={`https://ticket-t01.s3.eu-central-1.amazonaws.com/${eventId}.cover.jpg`}
      />
    </MetaTags>
    {/* ...rest of the component... */}
  </div>
);
I can see the rendered meta tags in the HTML, so why doesn't it work?
It is because the website is a single-page app. Before the JavaScript is loaded, everything rendered by React is not there yet (including those meta tags). You can verify it by right-clicking the page and selecting "view source"; you will see that inside body there is only a <div id="root"></div>. The problem is that many search engines and crawlers don't actually run JavaScript when they crawl. Instead, they look at what's in the initial HTML file. And that's why Facebook cannot find that "og:image" tag. There are two ways to solve this problem.
TL;DR: Host your app on Netlify if you can. They offer a prerendering service.
First, you may look at prerendering, which is a service that renders your JavaScript in a browser, saves the static HTML, and returns that when it detects that the request is coming from a crawler. If you can host your React app on Netlify, you can use their free prerendering service (which caches prerendered pages for between 24 and 48 hours). Or you can check out prerender.io. This is the solution if you don't want to move to another framework and just want to get the SEO stuff working.
Another, more common way to deal with this problem is to do static site generation (SSG) or server-side rendering (SSR). These mean that the HTML content is generated on the server using React DOM server-side APIs. When that HTML content reaches the client side, the hydrate() method is called to turn it back into a functioning React app. The two most popular frameworks are Gatsby.js and Next.js. With these frameworks, you will be writing React and JSX code like you already do, but they offer more to power your app, including SSG, SSR, API routes, plugins, etc. I'd recommend checking out their "Get started" tutorials (and see the rough Next.js example after the links below). Transferring from a create-react-app to one of these frameworks can take less than a day, depending on the size of your project.
Next.js Tutorials
Gatsby.js Tutorials
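As a rough idea of what the fix looks like in Next.js, the og tags go into next/head inside the page component, so they are already present in the server-rendered HTML (the page name, props, and image URL below are assumptions for illustration):

import Head from 'next/head';

// pages/buy-ticket.js (hypothetical page; `event` would come from getStaticProps or getServerSideProps)
export default function BuyTicket({ event }) {
  return (
    <>
      <Head>
        <title>{event.name}</title>
        <meta name="description" content={event.description} />
        <meta property="og:title" content={event.name} />
        <meta property="og:image" content={event.coverUrl} />
      </Head>
      {/* ...rest of the page... */}
    </>
  );
}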
After implementing one of these solutions, visit the Facebook Sharing Debugger to ask Facebook to scrape your page again.

Universal React without Javascript

What are best practices for invoking different CSS when the client for my server-rendered universal React page has JavaScript disabled?
I've tried including Modernizr with className="no-js" on the html tag, but it wasn't edited.
I've managed to get something to work as shown below but there must be a better way.
<link rel="stylesheet" href="/public/css/script.css" />
<noscript>
<link rel="stylesheet" href="/public/css/noscript.css" />
</noscript>
I've looked for examples but haven't found anything relevant.
In a server-rendered application, you can have one of many data sources:
query param
cookie
POST data
The only input for server side code is the request made when loading up a page in your app in your browser. You may choose to configure the server during its boot (such as when you are running in a development vs. production environment), but that configuration is static.
The request contains information about the URL requested, including any query parameters, which will be useful when using something like React Router. It can also contain headers with inputs like cookies or authorization, or POST body data.
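As a sketch of reading those inputs in an Express-style server renderer (the cookie name and the renderPage helper are made up; the stylesheet paths are the ones from the question):

// Hypothetical Express handler for the server-rendered React app.
app.get('*', (req, res) => {
  const cookies = req.headers.cookie || '';
  // Decide which stylesheet to reference based on a cookie set by client-side JS.
  const hasJs = cookies.includes('js=1');
  const stylesheet = hasJs ? '/public/css/script.css' : '/public/css/noscript.css';
  // renderPage is a placeholder for your React SSR rendering function.
  res.send(renderPage({ stylesheet }));
});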
In a script/noscript mapping, you can set up one of several defaults:
Render noscript as the default, then override it by injecting script-specific CSS via the CSSOM (see the sketch after this list)
Serve a noscript-specific route as the default, then jump to the script route based on a cookie
Create a form which posts scripting capabilities to the server, then use that as a jump page to avoid noscript/script in the markup
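A minimal sketch of the first option: the noscript stylesheet is the default for every client, and a small inline script adds the script-specific stylesheet only when JS actually runs (the file paths are the ones from the question):

<head>
  <!-- Default styles, used by clients without JavaScript -->
  <link rel="stylesheet" href="/public/css/noscript.css" />
  <script>
    // Runs only when JS is enabled: inject the script-specific stylesheet,
    // whose rules override the noscript defaults via the CSSOM.
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = '/public/css/script.css';
    document.head.appendChild(link);
  </script>
</head>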
References
redux / Server Rendering: Processing Request Parameters
Rendering noscript version serverside without reuse markup warnings in React

Handle meta tags and document title in React for crawlers

I have a React (Redux) app that uses client-side rendering. I wanted my site's description and title to be crawlable by Google (they seem to crawl async stuff, because my site shows up fine in Google with text from my h1 tags), so I found a library called react-helmet which builds on react-document-library. This library allows me to change the document title and description depending on what route I am currently on.
So here are my questions:
Currently (1 week later) my Google search results are unchanged, which makes me think either Google hasn't crawled my site, or Google crawled it but didn't notice the dynamic change of description and just used my h1 tags. But how can I check which one has happened?
I notice Instagram has a client-side rendered app, but somehow when I check the page source they have already changed the title and description on each page, even though the body tag is just an empty div, as is typical of a client-side rendered app. I don't get how they can do that.
react-helmet
Follow the React Helmet server rendering documentation: https://github.com/nfl/react-helmet#server-usage.
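In outline, the server-usage pattern from those docs looks roughly like this (App is a placeholder for your app, which renders <Helmet> tags per route):

import React from 'react';
import { renderToString } from 'react-dom/server';
import { Helmet } from 'react-helmet';
import App from './App';

const body = renderToString(<App />);
const helmet = Helmet.renderStatic(); // collect whatever <Helmet> set during the render

const html = `<!doctype html>
<html>
  <head>
    ${helmet.title.toString()}
    ${helmet.meta.toString()}
  </head>
  <body><div id="root">${body}</div></body>
</html>`;
// send `html` as the response from your server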
Use Google Search Console to see how Google crawls your site, and to initiate a crawl/index of your pages: https://www.google.com/webmasters/tools/
As for how Instagram can show meta tags in a client-side app: they probably render and serve static content server-side when they detect a crawler or bot is viewing it. You can do this yourself for your content without converting your entire app to server-side rendering. Search for prerendering services. (I won't mention any examples because I don't want to boost their SEO without having an opinion on them.)
Another option is to render your React app statically and serve it when necessary. See Graphcool's prep (seems slightly outdated), react-snap, and react-snapshot. All of them render the site on a local server and download the rendered HTML files. If all you need is the <head>, then you should be good!
