When exactly is Next.js “build-time” happening?

I'm reading Next.js fetching data part of the documentation and one question came to my mind.
Next.js can fetch data with getStaticProps, getStaticPaths, getInitialProps and getServerSideProps, right?
But some of them happen at build-time, as I quote:
getStaticProps (Static Generation): Fetch data at build-time
When is this Build-time happening?
Is it when we run npm run build? (To build the production build)
Or when the user accesses our Next.js app? (On each request)

Build time happens when you're building the app for production (next build). Runtime happens when the app is running in production (next start).
getInitialProps (gIP) runs on both the client and the server during runtime for every page request. The most common use case is retrieving some sort of shared data (like a session that lets the client and server know if a user is navigating to a specific page) before the requested page loads. This always runs before getServerSideProps. While its usage is technically discouraged, sometimes it's absolutely necessary to run some logic on both the client and the server.
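A minimal sketch of what gIP looks like (the page and the session endpoint here are made-up examples for illustration, not anything from the docs):

    // pages/profile.js -- hypothetical page; the session URL is an assumption
    function Profile({ session }) {
      return <p>{session ? `Hello, ${session.user}` : 'Not logged in'}</p>;
    }

    // Runs on the server for the initial request and on the client for client-side navigations
    Profile.getInitialProps = async () => {
      const res = await fetch('https://example.com/api/session');
      const session = await res.json();
      return { session };
    };

    export default Profile;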
getServerSideProps (gSSP) only runs on the server during runtime for every page request. Most common use case would be to retrieve up-to-date, page-specific data (like a product's pricing and displaying how much of it is in stock) from a database before a page loads. This is important when you want a page to be search engine optimized (SEO), where the search engine indexes the most up-to-date site data (we wouldn't want our product to show "In stock - 100 units," when it's actually a discontinued product!).
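A rough gSSP sketch, assuming a made-up products API:

    // pages/products/[id].js -- hypothetical page; the API URL is an assumption
    export async function getServerSideProps({ params }) {
      // Runs on the server for every request, so the stock count is always current
      const res = await fetch(`https://example.com/api/products/${params.id}`);
      const product = await res.json();
      return { props: { product } };
    }

    export default function ProductPage({ product }) {
      return (
        <p>
          {product.name}: {product.inStock > 0 ? `In stock - ${product.inStock} units` : 'Discontinued'}
        </p>
      );
    }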
getStaticProps (gSProps) only runs during build-time (sort of). It's great for sites whose data and pages don't update that often. The advantage of this approach is the page is statically generated, so if a user requests the page, they'll download an optimized page where the dynamic data is already baked into it. Most common use case would be a blog or some sort of large shopping catalog of products that may not change that often.
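A minimal gSProps sketch for the blog case (the posts API is made up):

    // pages/blog/index.js -- hypothetical page; the API URL is an assumption
    export async function getStaticProps() {
      // Runs at build time (next build); the fetched posts are baked into the generated HTML
      const res = await fetch('https://example.com/api/posts');
      const posts = await res.json();
      return { props: { posts } };
    }

    export default function Blog({ posts }) {
      return (
        <ul>
          {posts.map((post) => (
            <li key={post.id}>{post.title}</li>
          ))}
        </ul>
      );
    }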
getStaticPaths (gSPaths) only runs during build-time (sort of). It's great for pre-rendering similar paths (like a /blog/:id) that require dynamic data to be used at build-time. When used in conjunction with gSProps, it generates the dynamic pages from a database/flat file/some sort of data structure, which Next can then statically serve. An example use case would be a blog that has many blog posts which share the same page layout and similar page URL structure, but require the content to be dynamically baked into the page when the app is being built.
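And a sketch of gSPaths working together with gSProps for /blog/:id pages (again, the API URLs are assumptions):

    // pages/blog/[id].js -- hypothetical page
    export async function getStaticPaths() {
      // Runs at build time to decide which /blog/:id pages get pre-rendered
      const res = await fetch('https://example.com/api/posts');
      const posts = await res.json();
      return {
        paths: posts.map((post) => ({ params: { id: String(post.id) } })),
        fallback: false, // any id not returned here will 404
      };
    }

    export async function getStaticProps({ params }) {
      const res = await fetch(`https://example.com/api/posts/${params.id}`);
      const post = await res.json();
      return { props: { post } };
    }

    export default function Post({ post }) {
      return <article>{post.body}</article>;
    }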
Why do gSProps and gSPaths only sort of run during build-time? Well, Next introduced revalidate along with fallback options, which allow these two Next.js functions to be run at runtime after a timeout in seconds. This is useful if you want your page to be statically regenerated, but only every n seconds. The upside is that the page doesn't need to request dynamic data when navigated to, but the downside is that the page may initially show stale data. A page won't be regenerated until a user visits it (to trigger the revalidation), and then the same user (or another user) visits the same page again to see the most up-to-date version of it. This has the unavoidable consequence of user A seeing stale data while user B sees accurate data. For some apps this is no big deal; for others it's unacceptable (†).
† If you're using Next v12.2.0 or newer, you can use On-Demand Revalidation to avoid showing users a stale static page if/when its data has been updated.
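A rough sketch of both flavors of revalidation (the 60-second interval, the /blog path, and the REVALIDATION_SECRET env var are assumptions for illustration):

    // In getStaticProps: regenerate the page at most once every 60 seconds (ISR)
    export async function getStaticProps() {
      const res = await fetch('https://example.com/api/posts');
      const posts = await res.json();
      return { props: { posts }, revalidate: 60 };
    }

    // pages/api/revalidate.js -- on-demand revalidation (Next v12.2.0+)
    export default async function handler(req, res) {
      if (req.query.secret !== process.env.REVALIDATION_SECRET) {
        return res.status(401).json({ message: 'Invalid token' });
      }
      await res.revalidate('/blog'); // regenerate the static page right away
      return res.json({ revalidated: true });
    }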
And lastly, there's client-side rendering (CSR), which covers assets requested at run-time on the client (the browser) that aren't vital for SEO or can't be run on the server (like attaching an event listener to the document; servers don't have a DOM). The most common use case would be a user-specific dashboard page that is only relevant to that user and therefore doesn't need to be indexed by a search engine, or some JavaScript that manipulates the DOM and therefore can't be run on the server.
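A small CSR sketch along those lines (the /api/dashboard endpoint is made up):

    // Client-only widget: data is fetched in the browser and an event listener is
    // attached to the document, neither of which can happen during server rendering.
    import { useEffect, useState } from 'react';

    export default function Dashboard() {
      const [stats, setStats] = useState(null);

      useEffect(() => {
        fetch('/api/dashboard')
          .then((res) => res.json())
          .then(setStats);

        const onKey = (e) => console.log('key pressed', e.key);
        document.addEventListener('keydown', onKey);
        return () => document.removeEventListener('keydown', onKey);
      }, []);

      return stats ? <pre>{JSON.stringify(stats)}</pre> : <p>Loading…</p>;
    }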
Other things of note: gIP and gSSP are render-blocking, meaning the page won't load until their code resolves/returns props. This has the unavoidable consequence of showing a blank page before the page loads. It also means slower page response times: the page is requested, gIP/gSSP runs code that blocks the page from loading, gIP/gSSP resolves, assets are then downloaded, the page begins running JavaScript on the client while the HTML is loaded into the DOM, the JavaScript finishes, and the page is then rehydrated and becomes interactive. In summation, this results in a slower time to interactive (TTI).

Related

Fetching data every time page is reloaded

I am creating an eCommerce-type website with React and Laravel with REST APIs. Every time a user visits www.mywebsite.com/products, a REST API is called to get all the products, and it takes about a second to load them all. If the user leaves the page and enters it again, the same request is made and the same products are loaded.
My question is: What's the best approach for this situation? Maybe I should somehow fetch the products only once, store them inside localStorage and get them using Context? On the other hand, most of the websites I visit seem to load products instantly, so maybe even the REST API is the wrong approach for this kind of website?
I would suggest checking your DB queries and investigating why this request takes so long. Have a look at the Laravel Telescope package: https://laravel.com/docs/9.x/telescope
In the request tab you can follow your requests and see which queries are the most expensive ones. My guess is that you're lazy loading some relations on each product.
Next, I would probably cache the articles (except maybe the stock information). Use the built-in caching feature of Laravel or maybe a third-party package like this one.
Only then would I go for client-side optimizations. Here you could consider a state management library like Akita.
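If you do eventually reach for the client-side option the question mentions (localStorage + a fetch-once pattern), a rough sketch might look like this; the /api/products endpoint, cache key, and five-minute TTL are assumptions, not something prescribed by this answer:

    import { useEffect, useState } from 'react';

    const TTL_MS = 5 * 60 * 1000; // refetch at most every five minutes

    export function useCachedProducts() {
      const [products, setProducts] = useState(() => {
        const cached = localStorage.getItem('products');
        if (cached) {
          const { data, savedAt } = JSON.parse(cached);
          if (Date.now() - savedAt < TTL_MS) return data; // fresh enough, use the cache
        }
        return null;
      });

      useEffect(() => {
        if (products) return; // cache hit, skip the network
        fetch('/api/products')
          .then((res) => res.json())
          .then((data) => {
            localStorage.setItem('products', JSON.stringify({ data, savedAt: Date.now() }));
            setProducts(data);
          });
      }, [products]);

      return products;
    }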

Cache issues: React + REST server behind CDN

I am looking for a pattern that would allow me to better the UX for my users. I have a REST server running behind CloudFront being consumed from a plain React application on the frontend.
I'll simplify my example to illustrate my issue.
I have an endpoint called GET /posts/<id>. When the browser asks for it, the response comes with max-age=180, which means it gets stored in the browser's cache, and any subsequent call to GET /posts/<id> will be served from the browser's cache for the duration of those 180 seconds, after which the browser will hit the CDN again to try and obtain a fresh copy.
That is okay for most users. I don't mind if updates to any post take up to 3 minutes before they're propagated to all the users. But there is one user who's the author of this post. That user can make changes to this post using PATCH /posts/<id>. Let's call that user The Editor.
Here's a scenario I have right now:
The Editor loads up the post page which then calls GET /posts/5
The CDN serves the latest copy to the front end.
The Editor then makes a change to the post and submits it to the back end via PATCH /posts/5.
The editor then refreshes his browser tab using Command-R (or CTRL-R).
As a result, the front end requests GET /posts/5 again -- but gets the stale copy from before the changes, because 180 seconds haven't passed between the earlier GET and the GET issued after the PATCH.
What I'd like the experience to be is:
The Editor loads up the post page which then calls GET /posts/5
The CDN serves the latest copy to the front end.
The Editor then makes a change to the post and submits it to the back end via PATCH /posts/5.
After a Command-R browser tab refresh, the GET /posts/5 brings back a copy of the data with the changes the editor made via PATCH right away, regardless of the 180-second TTL before a fresh copy can be obtained.
As for the rest of the users, it's perfectly okay for them to wait up to 180 seconds before the change in the post propagates to them when they GET /posts/5.
I am using Axios, but I do know that SWR and React Query support mutations. To my understanding, this would allow the editor to declare a mutation for the object he just PATCH'ed on the server, so that any subsequent calls he makes to GET /posts/5 will be served from there, until a fresher version can be obtained from the backend.
My questions are:
Can SWR with "mutations" serve the mutated object via the GET /posts/5 transparently?
Will the mutation survive a hard browser tab refresh? Or a browser closure, re-opening, and a subsequent GET /posts/5?
Is there another pattern/best practice to solve that?
TL;DR: Just append a harmless, gibberish querystring to the end of the request GET /posts/<id>?version=whatever
Good question. I must admit I don't know the full answer to this problem, but I want to share one well-known technique among frontend devs.
The technique is called cache busting. I'm not sure if this is the best practice, but I'm pretty sure it's widely practiced, since it's so straightforward to understand.
The idea is simple: when you add a changed querystring to the end, you effectively change the URL, so no cache is hit and you evade the whole cache problem.
So the detailed steps of a solution for your particular use case would go like this (a rough code sketch follows the list):
Normally you'll just request GET /posts/<id> for all users
When a user logs in, a hash key is generated by whatever algorithm. For simplicity, let's just use an increasing integer and call it version. You store this version in localStorage so it can survive a page refresh.
Now you need to distinguish whether the user is viewing his own posts or someone else's. When he's viewing his own, you always use GET /posts/<id>?version=n
Whenever the user edits his post and hits the save button, you bump version from n to n+1
Next time he goes to post view page, the app requests GET /posts/<id>?version=n+1 which is not cached, and would retrieve the up-to-date content.
One last thing, make sure your server safely ignores that ?version=n querystring.
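Here's the rough sketch promised above, using Axios and localStorage (the postsVersion key name is just an example):

    import axios from 'axios';

    function getVersion() {
      return Number(localStorage.getItem('postsVersion') || 0);
    }

    // Viewing your own post: append the version so an edited post skips the stale cache
    function fetchOwnPost(id) {
      return axios.get(`/posts/${id}`, { params: { version: getVersion() } });
    }

    // Saving an edit: bump the version so the next GET uses a fresh, uncached URL
    async function savePost(id, changes) {
      await axios.patch(`/posts/${id}`, changes);
      localStorage.setItem('postsVersion', String(getVersion() + 1));
    }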
I'm sure there are other solutions to this problem. I'm no expert on server config and HTTP headers, so I won't get into that topic, but there must be something to look for there.
As for a pure frontend solution, there's the Service Worker API for you to consider. The main point of this API is to let devs programmatically control cache strategies.
With this API, you could leave your current app code as-is, just install a service worker, and then use the same cache-busting technique in the background to fetch new content, or just delete the cache (using the Cache API) when the user edits, or even fake a response for the GET /posts/<id> from the PATCH /posts/<id> the user just sent.
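For the "delete the cache when the user edits" idea, a minimal sketch could look like this (it assumes the service worker is already caching API responses in a cache named api-cache, which is an assumption on my part):

    // sw.js -- when the page tells us a post was edited, drop the cached copy
    // so the next fetch for it goes to the network.
    self.addEventListener('message', async (event) => {
      if (event.data && event.data.type === 'POST_EDITED') {
        const cache = await caches.open('api-cache');
        await cache.delete(`/posts/${event.data.id}`);
      }
    });

    // In the page, after a successful PATCH:
    // navigator.serviceWorker.controller.postMessage({ type: 'POST_EDITED', id: 5 });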
Depending on what CDN you use, you can invalidate the cache manually when publishing updates to a post. For example, CloudFront lets you specify which paths should be fetched fresh on the next request.
For sites with lots of traffic but few updates this works pretty well, and is quite simple to implement. For sites with a lot of authors and frequently changing content you would need to get more creative though.
One strategy I've used in the past is a technique called object versioning: instead of invalidating the cached object, you publish a new version of it with a timestamp. This also means you need to publish a manifest file that your frontend loads. The manifest contains the latest timestamps of all the content the page needs to load, and is on a much shorter TTL than the rest of the content. When you publish a new version of a post, you update its timestamp in the manifest, and the frontend pulls the latest version of the post the next time the page loads.
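A sketch of what the manifest lookup might look like on the frontend (the manifest path and its shape are assumptions):

    // The manifest is on a short TTL; the versioned post objects can be cached much longer.
    async function loadPost(id) {
      // e.g. { "posts/5": "2024-01-31T12:00:00Z", ... }
      const manifest = await fetch('/manifest.json').then((r) => r.json());
      const version = manifest[`posts/${id}`];
      // Putting the version in the URL means an updated post is a brand-new, uncached object
      return fetch(`/posts/${id}?v=${encodeURIComponent(version)}`).then((r) => r.json());
    }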

What is the point of single page large applications if you have to split your code to improve performance?

I've been reading about lazily loading code in react.
With lazy loading, only the needed code will be loaded and doing so, your initial loading will be faster (because you will load much less code) and your overall speed will be much faster being on demand.
This is what I've understood.
In a single page application, the entire application is loaded onto the browser initially. We use module bundlers like webpack to bundle the application into a single bundle. Everything's great.
Now, if the application size is large, the load time would increase. To improve performance, we can divide the bundle into separate chunks that will be loaded only when needed.
My question is, if we have to divide our page into chunks, is it still a single page application because the browser will have to request the server for these chunks whenever they are needed? I feel like there's a gap in my knowledge and I don't know what's missing.
Traditional web applications used to work with postbacks to the server to fetch HTML and render a new page. Then AJAX came into the picture, and applications were able to fetch and render data asynchronously, so the user no longer had to wait for the browser to refresh on a postback to render the new page.
Modern JavaScript libraries like Angular, React, etc. bring the single-page application (SPA) model, which runs inside a single page without requiring the browser to reload as the user navigates the site (i.e., with only a single container page for the entire application, like index.html). Even with code splitting and lazy loading of chunks, the application developer can ensure the user gets a pleasant experience, for example by displaying a progress bar or loading spinner (using React Suspense) while a chunk loads. This is much better than the frustrating experience of waiting for a full page reload every time. Webpack also ensures the chunkhash changes only for chunks whose source code changed since the previous build. This takes advantage of browser caching, so requests for unchanged chunks don't need to go to the server every time. Hope this was helpful!
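A minimal sketch of that kind of code splitting with React.lazy and Suspense (the Dashboard module is a placeholder):

    import React, { Suspense, lazy } from 'react';

    const Dashboard = lazy(() => import('./Dashboard')); // becomes its own webpack chunk

    export default function App() {
      return (
        <Suspense fallback={<p>Loading…</p>}>
          <Dashboard />
        </Suspense>
      );
    }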

Does a "static" rendering of a React page always duplicate content in JS objects?

Looking at this isomorphic react page: http://jlongster.com/Presenting-The-Most-Over-Engineered-Blog-Ever
I see that there is a Javascript variable at the bottom of the static content that represents the static content.
So the content is replicated when downloading.
Is this mandatory for the way react works? Any more efficient methods?
You need to have the data included so that you can mount the application on the client. The initial client-side render must produce the same virtual DOM as the server did.
You could fetch it async, and wait until it downloads to mount the app, but there will be a significant delay between when the user reaches the page, and when JavaScript is actually applied. It'll also double the load on the database for initial requests.
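For illustration, a sketch of that data embedding with modern React; the window.__INITIAL_DATA__ name is just a common convention, not a React API:

    // The server renders the app to HTML and also emits the data it used, e.g.
    // <script>window.__INITIAL_DATA__ = {"title":"Hello"}</script>

    // On the client, hydrate with that same data so the first render matches the server's HTML
    import React from 'react';
    import { hydrateRoot } from 'react-dom/client';
    import App from './App';

    hydrateRoot(document.getElementById('root'), <App data={window.__INITIAL_DATA__} />);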
Overall, server rendering will improve the time it takes for the user to see content, improve SEO, enable page caching with a CDN, and other benefits that outweigh the costs by far.
It's less important for sites behind a login, and you can often save significant development time by omitting it.

Why are single-page apps faster when they generally have to make more requests per page?

How are single-page apps (SPAs) supposed to be faster when generally SPAs have to make multiple requests to get data for different parts on the page? As opposed to rendering server side, where the browser only has to make a single request to get the whole page?
I also remember reading somewhere that opening/closing a web request is the bottleneck sometimes in web requests.
So why is an approach that makes more requests per page supposed to make websites faster?
Because you only load what you need.
For example, on a "normal" web page, the menu, sidebar, etc. would have to be rerendered on each page, but with an SPA only the content gets changed.
In addition, think of this case: A website that displays 100,000 items on the front page (with pictures). In the traditional case, it will take a long time to load the page, but with an SPA you only load the "first screen" (i.e. what the user can see), and load the rest as he scrolls down.
In other words, SPAs aren't magic: it's just that they only need to update the bits of the page that change, which lowers the response time for users (i.e. they can "use" the new content faster).
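A rough sketch of that "load the first screen, fetch more on scroll" idea with an IntersectionObserver (the /api/items endpoint and paging scheme are made up):

    import { useEffect, useRef, useState } from 'react';

    export default function ItemList() {
      const [items, setItems] = useState([]);
      const [page, setPage] = useState(1);
      const sentinel = useRef(null);

      // Fetch one page of items at a time
      useEffect(() => {
        fetch(`/api/items?page=${page}`)
          .then((res) => res.json())
          .then((next) => setItems((prev) => [...prev, ...next]));
      }, [page]);

      // When the sentinel at the bottom of the list scrolls into view, ask for the next page
      useEffect(() => {
        const observer = new IntersectionObserver(([entry]) => {
          if (entry.isIntersecting) setPage((p) => p + 1);
        });
        observer.observe(sentinel.current);
        return () => observer.disconnect();
      }, []);

      return (
        <>
          <ul>{items.map((item) => <li key={item.id}>{item.name}</li>)}</ul>
          <div ref={sentinel} />
        </>
      );
    }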
If well done, they are faster because:
Part of the server workload is offloaded to the client.
Only the needed page fragments are loaded at any given time.
Redundant templating code is reduced. One template can style many items, as opposed to having to output a lot of HTML for a full page at once.
They also facilitate lazy loading and the download of new data during idle times, and parallelism: the concurrent downloads of elements.
Usually SPAs are built with a lazy approach: get the info only when and if you need it.
Also, the data coming to and from the SPA is usually in a format (JSON, for example) that focuses on the data only. The presentation layer is the SPA's concern, and all the required assets should already be loaded.
So usually they are faster and more maintainable.
It is not always the case though.
