Currently, my project has two parts: one before login and one after login.
What I want to achieve is that the before-login part needs to be fast and SEO friendly. Should I choose pre-rendering or SSR?
For the after-login part, we can choose CSR (the client can afford to wait for the page to load).
Alternatively, can I do two CSR apps: one for before login (fast load), and once the client has logged in via a JWT token, redirect to the after-login CSR app?
Thanks
For pages that need to be crawled, CSR is most probably not an option. The question then becomes whether you choose pre-rendering or SSR, and the answer is that it depends.
Is the SEO content static, or does it depend on some backend API response at a given time?
If it's static, pre-rendering should be enough for you. But if it depends on other APIs, the content could change at runtime, and you would have to do true SSR to accommodate that. SSR is more resource-intensive on the server, though.
As for the after-login part, since it probably shouldn't be crawled by bots anyway, it is okay to do CSR for all the logged-in pages. CSR alone doesn't mean you will get a significantly faster initial load, though; there are a lot of factors to consider, such as the HTML document size, network round-trip latency, the response times of the other services your own service depends on, etc. But combined with a service worker and the app-shell model, CSR should almost always be faster than SSR. I would recommend looking into that to improve CSR speed.
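To make the app-shell idea concrete, here is a minimal service worker sketch; the asset names are placeholders, and a real setup would also need cache versioning and an update strategy:

```ts
// sw.ts (compile with the TypeScript "webworker" lib)
declare const self: ServiceWorkerGlobalScope;

const SHELL_CACHE = 'app-shell-v1';
// The shell: the bare HTML/JS/CSS needed to paint the frame of the app
const SHELL_ASSETS = ['/', '/index.html', '/app.js', '/app.css'];

self.addEventListener('install', (event) => {
  // Pre-cache the shell once, at install time
  event.waitUntil(caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_ASSETS)));
});

self.addEventListener('fetch', (event) => {
  // Serve cache-first so repeat visits paint immediately, falling back to the network
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request)),
  );
});
```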
It depends.
If SEO is irrelevant (e.g. an app that lives behind a login screen), then CSR is fine and you just need something like ReactJS.
If you need good SEO:
a) If you can predict the content and generate it at build time (e.g. a blog), then you need SSG (static content created at build time) and should choose something like Gatsby or Next.js.
b) If you can't predict the content or the possible requests (e.g. a search page), the server will need to generate the pages on demand, so you need dynamic SSR (content created at request time) and should choose something like Next.js.
Note: Next.js allows you to selectively mix the three main rendering modes in the same project. For that reason, it is the best option if you need SEO.
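To illustrate, here is a rough sketch of what the three modes look like side by side in a Next.js pages-router project; each snippet would live in its own page file, and the API URLs are made up:

```tsx
// pages/blog.tsx (SSG): HTML is generated once at build time
export async function getStaticProps() {
  const posts = await fetch('https://api.example.com/posts').then((r) => r.json());
  return { props: { posts } }; // the page component (elided here) receives these props
}

// pages/search.tsx (SSR): HTML is generated on every request
export async function getServerSideProps({ query }: { query: { q?: string } }) {
  const results = await fetch(`https://api.example.com/search?q=${query.q ?? ''}`)
    .then((r) => r.json());
  return { props: { results } };
}

// pages/dashboard.tsx (CSR): rendered in the browser after the JS loads
import { useEffect, useState } from 'react';

export default function Dashboard() {
  const [data, setData] = useState<unknown>(null);
  useEffect(() => {
    fetch('/api/me').then((r) => r.json()).then(setData); // hypothetical endpoint
  }, []);
  return <pre>{JSON.stringify(data)}</pre>;
}
```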
I have recently started working with Next.js. I previously programmed in React, but the project requires me to use Next.js.
Briefly, the application looks like this:
Home page (simple backend data + form); here I will 100% use SSR
Admin panel (/admin)
And here I am wondering whether to use SSR or CSR for the admin page; I find conflicting information. If I use CSR for the admin panel, will it somehow affect SEO? (I think it shouldn't, since the page requires a login anyway.) I read that if a page requires authentication, then there is no point in rendering the data on the server side. But on Reddit someone also wrote that simple admin pages can safely be built with SSR. If so, does it give any advantages?
Or would it be worthwhile to split it into two different applications, with the admin panel on its own subdomain (admin.mywebsite)? What is the correct approach to such a problem?
I care about the best possible SEO, hence my questions.
If you don't need the casual user to see the admin panel, you can simply disallow it in your robots.txt file and use whichever method you prefer (CSR or SSR); it won't impact your SEO.
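For example, a minimal robots.txt along those lines (assuming the panel lives under /admin/) might look like the following; note this only discourages crawling and is not access control, so the login is still required:

```
User-agent: *
Disallow: /admin/
```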
However, in the case where you do want users to find the admin panel (for any future viewers), SSR is much easier for web robots to scrape and is preferred for SEO.
I'm working on a project which has two applications: one is an admin portal where users can customize and create their profile, and the other is an application that previews the profile the user creates.
All good with the admin portal, which is developed with CRA (Create React App). Earlier I was planning to build the profile app with React as well. However, considering SEO and performance, I'm thinking about pre-rendering the profile whenever the user changes their data.
This will improve performance by a considerable margin, and SEO as well. SSR is another good solution; however, when thousands or millions of users view a single profile, there will be a huge demand on server performance. So I'm planning to work on a POC which will create a static profile with partial client-side JS functionality for each user, store it somewhere, and serve it through a CDN.
I want to know two things here:
1. To implement this, I lack knowledge of the available solutions for storage and CDN.
2. How can we re-render the static pages when the data changes?
To implement this, I lack knowledge of the available solutions for storage and CDN.
Check out Cloudflare CDN, but there are many other providers.
How can we re-render the static pages when the data changes?
If you are using pre-rendering, you can rebuild the page assets after a profile page is updated and redeploy them.
If you are using SSR, the server load will be higher, but there is no need to redeploy (the page is always rendered at runtime by the origin server).
I would recommend using Next.js's support for server rendering and pre-rendering instead of implementing them on your own.
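As one concrete way to do the "rebuild after an update" part, newer Next.js versions (12.2+, pages router) support on-demand revalidation of statically generated pages. A rough sketch, where the secret, the route, and the username parameter are all placeholders:

```ts
// pages/api/revalidate.ts
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Guard the endpoint with a shared secret (hypothetical env var)
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  // Re-generate the static profile page for the user who just saved changes
  await res.revalidate(`/profile/${req.query.username}`);
  return res.json({ revalidated: true });
}
```

The admin portal would call this endpoint after a profile save, and the CDN then picks up the freshly generated static page.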
I am evaluating how to implement server-side rendering in a SPA built with React and a CMS as the backend.
This is the approach Next.js suggests for pre-rendered pages, and almost all CMS systems suggest it too:
1. The user requests a page from the React app running on a Node server.
2. The Node server requests JSON data from the CMS through a fetch call.
3. The React app reads this JSON, renders the markup to a string (e.g. with renderToString()), and sends the response back to the user.
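A minimal sketch of that flow with Express; the CMS URL and the <Page> component are hypothetical, and a global fetch (Node 18+) is assumed:

```tsx
// server.tsx
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import Page from './Page'; // hypothetical top-level component

const app = express();

app.get('*', async (req, res) => {
  // 1. Request the JSON data for this route from the CMS
  const data = await fetch(`https://cms.example.com/api/pages?path=${req.path}`)
    .then((r) => r.json());
  // 2. Render the React tree to an HTML string
  const html = renderToString(<Page data={data} />);
  // 3. Send the full document back to the user
  res.send(`<!DOCTYPE html><html><body><div id="root">${html}</div></body></html>`);
});

app.listen(3000);
```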
The disadvantage of this approach is that if the JSON data from the CMS is huge, the first request takes a long time.
What alternative solution do you suggest?
Heyooo, Contentful DevRel here. 👋🏻
Your concerns are absolutely valid.
And that's why Next.js recently added advanced static pre-generation using getStaticProps. The goal is to tackle long dynamic response times by pre-generating as much as possible. This way the user gets a fast initial content paint, but can still enjoy all the dynamic benefits that come with a React application (Next.js usually follows an isomorphic JavaScript architecture).
The processing time you describe is then moved from dynamic request/response time into the build process.
In general, when you're not dealing with millions of pages, I recommend giving static HTML a try. It often makes applications faster and more secure. For more complex and larger sites, Vercel is also experimenting with hybrid solutions that offer ways to pre-generate only certain pages. That's all very new though. :)
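In current Next.js versions, that "pre-generate only certain pages" idea corresponds to getStaticPaths with a fallback. A rough sketch, with made-up CMS endpoints:

```tsx
// pages/posts/[slug].tsx
export async function getStaticPaths() {
  // Pre-build only the most popular posts at build time...
  const popular: { slug: string }[] = await fetch('https://cms.example.com/api/popular')
    .then((r) => r.json());
  return {
    paths: popular.map((p) => ({ params: { slug: p.slug } })),
    // ...and render everything else on first request, then serve it as static
    fallback: 'blocking' as const,
  };
}

export async function getStaticProps({ params }: { params: { slug: string } }) {
  const post = await fetch(`https://cms.example.com/api/posts/${params.slug}`)
    .then((r) => r.json());
  return { props: { post } };
}

export default function Post({ post }: { post: { title: string } }) {
  return <h1>{post.title}</h1>;
}
```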
With client-side routing, the server doesn't build the entire page to serve to the client; instead, data is downloaded from the web app on demand.
So, in this scenario, if you view the HTML source you could see something like this:
```html
<body>
  <div class="blah">{{content}}</div>
</body>
```
I know that a prerender strategy can be used, and I think the Google crawler is probably smart enough to see the contents anyway, but the question is:
is this approach good for SEO?
With a prerender strategy, the server needs to generate the page with its content. Could that be a penalty in the page speed factor?
Thank you in advance to everyone.
As you've mentioned, Google is pretty smart and, from recent experience, is able to pick up some of your site's static content even when you use client-side rendering. However, when it comes to client-side routing it's not quite there yet, so if you need SEO, server-side rendering frameworks like Nuxt.js should be your go-to.
but data is downloaded from the web app "on demand"
The same thing applies when you do asynchronous fetches (download on demand, as you've described it). Imagine the data inside your {{ content }} came from an external API: as far as I'm aware, no crawler at this time is able to deal with this, so your content area would just be empty. So generally speaking, when SEO is a requirement, so is server-side rendering.
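To make that concrete with the Nuxt.js suggestion above, here is a minimal page sketch; the API URL is hypothetical, and a server-side fetch implementation (Node 18+ or a polyfill) is assumed. Because asyncData runs on the server for the first request, the fetched data is already present in the HTML the crawler receives:

```ts
// The script section of a Nuxt 2 page component (a sketch)
export default {
  async asyncData() {
    // Runs on the server before the page is delivered, so the
    // {{ content }} placeholder is already filled in the sent HTML
    const res = await fetch('https://api.example.com/content');
    const content = await res.json();
    return { content }; // merged into the component's data
  },
};
```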
With a prerender strategy, the server needs to generate the page with its content. Could that be a penalty in the page speed factor?
Yes and no. Load times will certainly go up a little, but with client-side rendering the client has to render the page after loading it, so that time simply shifts to your server. The same applies to asynchronous data fetching: delivering the site will take longer, but the data it has to fetch will already be there, so the client won't have to fetch it (SSR frameworks let you fetch data and render it before sending the site to the client). If you add everything up, there shouldn't be a huge difference between sending the request and actually seeing the rendered page in your browser.
How are single-page apps (SPAs) supposed to be faster when SPAs generally have to make multiple requests to get the data for different parts of the page, as opposed to server-side rendering, where the browser only has to make a single request to get the whole page?
I also remember reading somewhere that opening/closing a web request is sometimes the bottleneck.
So why is an approach that makes more requests per page supposed to make web sites faster?
Because you only load what you need.
For example, on a "normal" web page, the menu, sidebar, etc. would have to be re-rendered on each page load, but with an SPA only the content gets changed.
In addition, think of this case: a website that displays 100,000 items on the front page (with pictures). In the traditional case it will take a long time to load the page, but with an SPA you only load the "first screen" (i.e. what the user can see) and load the rest as they scroll down.
In other words, SPAs aren't magic: they just only need to update the bits of the page that change, which lowers the response time for users (i.e. they can "use" the new content faster).
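A minimal sketch of that "load only the first screen" pattern, where the endpoint and element IDs are made up: a sentinel element sits below the rendered items, and when it scrolls into view the next batch is fetched and appended.

```ts
const list = document.querySelector<HTMLUListElement>('#items')!;
const sentinel = document.querySelector<HTMLDivElement>('#sentinel')!;
let page = 0;

const observer = new IntersectionObserver(async ([entry]) => {
  if (!entry.isIntersecting) return;
  page += 1;
  // Hypothetical paginated endpoint returning { items: string[] }
  const res = await fetch(`/api/items?page=${page}`);
  const { items } = await res.json();
  for (const text of items) {
    const li = document.createElement('li');
    li.textContent = text; // textContent avoids injecting HTML
    list.appendChild(li);
  }
});
observer.observe(sentinel);
```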
If done well, they are faster because:
Part of the server workload is offset to the client.
Only the needed page fragments are loaded at any given time.
Redundant templating code is reduced: one template can render many items, as opposed to having to output the HTML for a full page at once.
They also facilitate lazy loading, downloading new data during idle times, and parallelism (concurrent downloads of elements).
Usually SPAs are built with a lazy approach: get the info only when you need it, and only if you need it.
Also, the data going to and from the SPA is usually in a format (JSON, for example) that focuses on the data only. The presentation layer is a concern of the SPA, and all the required assets should already be loaded.
So usually they are faster and more maintainable.
It is not always the case though.