Reduce initial server response time with Netlify and Gatsby - reactjs

I'm running PageSpeed Insights on my website and one big error that I get sometimes is
Reduce initial server response time
Keep the server response time for the main document short because all other requests depend on it. Learn more.
React: If you are server-side rendering any React components, consider using renderToNodeStream() or renderToStaticNodeStream() to allow the client to receive and hydrate different parts of the markup instead of all at once. Learn more.
I looked up renderToNodeStream() and renderToStaticNodeStream() but I didn't really understand how they could be used with Gatsby.
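For reference, this is roughly what renderToNodeStream() does in a hand-rolled Node SSR server — a minimal, hypothetical Express sketch, not Gatsby code (Gatsby pre-renders pages to static HTML at build time, so there is nowhere for you to call this API yourself):

```js
// server.js — streaming SSR, which is what the PageSpeed hint refers to
const express = require('express');
const React = require('react');
const { renderToNodeStream } = require('react-dom/server');
const App = require('./App'); // hypothetical root component

const app = express();

app.get('/', (req, res) => {
  res.write('<!DOCTYPE html><html><head><title>App</title></head><body><div id="root">');
  const stream = renderToNodeStream(React.createElement(App));
  stream.pipe(res, { end: false }); // flush markup to the client as it renders
  stream.on('end', () => {
    res.write('</div><script src="/bundle.js"></script></body></html>');
    res.end();
  });
});

app.listen(3000);
```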
It looks like this is a problem others are having as well.
The domain is https://suddenlysask.com if you want to look at it
My DNS records

Use a CNAME record on a non-apex domain. By using the bare/apex domain you bypass the CDN and force all requests through the load balancer. This means you end up with a single IP address serving all requests (fewer simultaneous connections), the server proxying to the content without caching, and a likely greater distance to the user.
EDIT: Also, your HTML file is over 300KB. That's obscene. It looks like you're including Bootstrap in it twice, you're repeating the same inline <style> tags over and over with slightly different selector hashes, and you have a ton of (unused) utility classes. Inline only the critical CSS if possible, and serve the rest from an external file if you can't tree-shake it.
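If the unused utility classes come from something like Bootstrap, one option is a PurgeCSS pass at build time — a sketch assuming the community gatsby-plugin-purgecss plugin (the ignore pattern is just an example):

```js
// gatsby-config.js — strip selectors that no page actually uses
module.exports = {
  plugins: [
    // ...your existing plugins
    {
      resolve: 'gatsby-plugin-purgecss',
      options: {
        printRejected: true,  // log which selectors were removed
        ignore: ['prismjs/'], // example: keep third-party CSS you still need
      },
    },
  ],
};
```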

Well, the behavior is unexpected. I ran PageSpeed Insights on your site and on the first test it gave me a warning, with an initial response time of 0.74 seconds. Then I used my developer tools to look at the initial response time on the document root, which was consistently between 300 and 400ms. So I ran the PageSpeed test again and the response was 140ms; the test passed. After that it was 120ms.
See the attached image.
I really don't think there is a problem with the site. Still, if you want to experiment, I would recommend changing the server or the hosting for once and trying something different. I don't know what kind of server the site is deployed on right now; you could try AWS S3 with CloudFront, which works well for me.

Related

SSR vs CSR explanation

I've been working as a full-stack web developer for over a year now. Next.js/Golang is our stack. Today I realized that I have no idea whether we use CSR or SSR. I think we use CSR, but I'm not sure. I've gone down rabbit holes on this topic like 100 times, but it's never stuck.
First question:
When they say server-side, do they mean on the backend, i.e. Golang? Or does that mean on the Next.js server? Whenever someone says "server" I think backend, but I don't think this is correct.
Second question:
How does client-side rendering work? I know the server sends JavaScript to the client, then the client uses the JavaScript to build the page. But all of this JavaScript must make a bunch of requests to the server to grab data and HTML, right? How would the JavaScript build the page otherwise? Is this why React/Next.js is written in JSX — so the server can send all the JSX (which is actually just JavaScript) to the client, and the client can build the HTML?
Third Question:
If CSR has the client build the page, how would this ever work? What about all of the data that needs to be pulled from our database for specific users, etc.? That can't be done directly from the frontend.
I tried reading tons of articles online! Hasn't clicked yet
You said the essential thing yourself: in client-side rendering, "the server sends javascript to the client, then the client uses the javascript to build the page." Hold on to that one point – everything else is secondary. The client "renders."
Now that "client-side" rendering capabilities have become so powerful – and, so varied – it has generally become favored. If you "tell the client what to do and then let him do it," you are more likely to get a consistently favorable outcome for most clients. Yes, the client will issue perhaps-many AJAX requests in carrying out your instructions. That is irrelevant.
CSR - the server sends HTML that contains links to JavaScript (script tags). The client then loads and executes the JS (which typically contains fetching code). That means each client performs several round trips to the server: first for the HTML, then for the data.
SSR - the server sends HTML with the necessary data already embedded in it.
The client already has the data and HTML, so it can render immediately. SSR fetches on each request, meaning the client still gets the latest data.
The benefits of SSR compared to CSR are lower load time — it makes the website feel "faster" — and better ranking by search engine bots. On the other hand, the server does the rendering, which increases its burden (though handling fewer requests decreases it).
SSG is the same as SSR except that fetching occurs at build time: the resulting page is computed only once and returned for each request. It is possible to use SSG with or without data.
Use SSG if possible, otherwise mostly SSR. On some occasions it may be better to use CSR instead of SSR, though I'm not experienced enough to say when.
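To make the SSG case concrete, here's a minimal Next.js sketch (the endpoint and the shape of the data are placeholders):

```jsx
// pages/blog.js — the SSG variant: fetching happens once, at build time
export async function getStaticProps() {
  const res = await fetch('https://api.example.com/posts'); // placeholder endpoint
  const posts = await res.json();
  // This runs at build time; the resulting HTML is served
  // unchanged to every request.
  return { props: { posts } };
}

export default function Blog({ posts }) {
  return (
    <ul>
      {posts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}
```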
Now answering your questions:
Yes, SSR happens on the server. If you call fetch from your components, that runs on the client. But if you use getServerSideProps or getStaticProps, that runs on the server. You can read from the file system, fetch a public API, or query the database — whatever you do in getStaticProps or getServerSideProps runs on the server, before the response is returned.
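For example (a sketch with placeholder endpoints — getServerSideProps runs on the server for every request, while the fetch inside the component runs in the browser):

```jsx
// pages/profile.js — illustrating where each piece runs
import { useEffect, useState } from 'react';

// Runs on the server, never in the browser.
export async function getServerSideProps() {
  const res = await fetch('https://api.example.com/user/1'); // placeholder endpoint
  const user = await res.json();
  return { props: { user } };
}

export default function Profile({ user }) {
  // This fetch, by contrast, runs in the browser after hydration.
  const [posts, setPosts] = useState([]);
  useEffect(() => {
    fetch('/api/posts')
      .then((r) => r.json())
      .then(setPosts);
  }, []);

  return (
    <main>
      <h1>{user.name}</h1>
      <p>{posts.length} posts loaded client-side</p>
    </main>
  );
}
```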
Yes, you're correct. The client needs the data to render the page, so it has to send requests to the server and then render.
The third question is the same as the second. I hope the long answer I gave clarified your confusion.
Sorry for the long answer.

Force updates on installed PWA when changing index.html (prevent caching)

I am building a React app — a single-page application hosted on Amazon S3.
Sometimes, I deploy a change to the back-end and to the front-end at the same time, and I need all the browser sessions to start running the new version, or at least those whose sessions start after the last front-end deploy.
What happens is that many of my users are still running the old front-end version on their phones for weeks, which is no longer compatible with the new version of the back-end, while others do pick up the update when they start their next session.
As I use Webpack to build the app, it generates bundles with hashes in their names, while the index.html file, which defines the bundles that should be used, is uploaded with the following cache-control property: "no-cache, no-store, must-revalidate". The service worker file has the same cache policy.
The idea is that the user's browser can cache everything except the first files it needs. The plan was good, but when I replace the index.html file with a newer version, my users are not refetching it when they restart the app.
Is there a definitive guide or a way to workaround that problem?
I also know that a PWA should work offline, so it has to have the ability to cache to reuse, but this idea doesn't help me to perform a massive and instantaneous update as well, right?
What are the best options I have to do it?
You've got the basic idea correct. Why your index.html is not updated is a tough question to answer, since you haven't provided any code — please include your Service Worker code. Keep in mind that, depending on the logic implemented in it, a Service Worker doesn't necessarily honor the HTTP caching headers and may cache everything, including the index.html file, which seems to be what's happening now.
In order to have the app also work offline, you would probably want a network-first SW strategy: the browser tries to load files from the network, and only if that fails does it fall back to the latest cached version of the particular file. Another option is what is called a stale-while-revalidate strategy, which first gives the user the old file (which is super fast) and then updates the file in the background. There are other strategies as well; I suggest you read through the documentation of the most widely used SW library, Workbox (https://developers.google.com/web/tools/workbox/modules/workbox-strategies).
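A minimal sketch of those two strategies using Workbox (assumes Workbox is bundled into your SW build; the cache names are arbitrary):

```js
// service-worker.js
import { registerRoute } from 'workbox-routing';
import { NetworkFirst, StaleWhileRevalidate } from 'workbox-strategies';

// Network-first for navigations (i.e. index.html): always try the
// network, fall back to the cached copy only when offline.
registerRoute(
  ({ request }) => request.mode === 'navigate',
  new NetworkFirst({ cacheName: 'pages' })
);

// Stale-while-revalidate for hashed bundles: answer from the cache
// immediately, refresh the cache in the background.
registerRoute(
  ({ request }) =>
    request.destination === 'script' || request.destination === 'style',
  new StaleWhileRevalidate({ cacheName: 'assets' })
);
```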
One thing to keep in mind:
In every strategy except "skip the SW and go straight to the network," you cannot really ensure the user gets the latest version of index.html. It is not possible: if the SW answers from the cache, it could be an old version, and that's that. What is usually done in these situations is to notify the user that a new version of the app has been downloaded in the background. The user loads the app, sees the version that was available in the cache, and the SW then checks for updates. If an update is found (a new index.html and, because of that, a new service-worker.js), the user sees a notification telling them the page should be refreshed. You can also trigger the SW to check the server for an update manually from your own JS code if you want; in that situation, too, you would show a notification to the user.
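A rough sketch of that notification flow in page code (showRefreshBanner() stands in for whatever UI you use to prompt the user):

```js
// main.js — detect that a new SW (and hence a new app version) is waiting
navigator.serviceWorker.register('/service-worker.js').then((reg) => {
  // Optionally poke the browser to re-check service-worker.js yourself
  setInterval(() => reg.update(), 60 * 60 * 1000);

  reg.addEventListener('updatefound', () => {
    const newWorker = reg.installing;
    newWorker.addEventListener('statechange', () => {
      // "installed" while a controller already exists means an updated
      // SW has been downloaded and is waiting to take over
      if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
        showRefreshBanner();
      }
    });
  });
});
```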
Does this help you?

CDN serving private images / videos

I would like to know how CDNs serve private data — images / videos. I came across this Stack Overflow answer, but it seems to be specific to Amazon CloudFront.
As a popular example, let's say the problem in question is serving the content inside Facebook: there is access-controlled content at the individual-user level and at the group-of-users level, plus some publicly accessible data.
All logic of what can be served to whom resides on the server!
The first request to the CDN goes to the application server and gets validated for access rights. But there is a catch — keep this in mind:
Assume that first request is successful; after that, anyone will be able to access the image with that CDN URL. I tested this with a restricted Facebook user-uploaded image, and it was accessible by others via the CDN URL even after I logged out. So the image remains accessible until the CDN cache expires.
I believe this should work: all requests first come to the main application server. After it determines whether access is allowed, it can redirect to the CDN server or show an access-denied error.
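A generic sketch of that "validate, then hand out an expiring URL" pattern — real CDNs (e.g. CloudFront) ship their own signed-URL tooling, so the secret, domain, and query parameters here are placeholders:

```js
// Issued by the application server after it has checked access rights
const crypto = require('crypto');
const SECRET = process.env.CDN_SIGNING_SECRET;

function signedCdnUrl(path, ttlSeconds = 300) {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const sig = crypto
    .createHmac('sha256', SECRET)
    .update(`${path}:${expires}`)
    .digest('hex');
  return `https://cdn.example.com${path}?expires=${expires}&sig=${sig}`;
}

// e.g. after the access check:
//   res.redirect(signedCdnUrl(`/photos/${photoId}.jpg`));
// The CDN edge verifies the signature and expiry before serving,
// which limits how long a leaked URL stays usable.
```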
Each CDN works differently, so unless you specify which CDN you are looking at, it's hard to tell.

Securing a single page application built on react/react-router

I've been putting together a single-page application using React and React-Router and I can't seem to understand how these applications can be secured.
I found a nice clear blog post which shows one approach, but it doesn't look very secure to me. Basically, the approach presented in that post is to restrict rendering of components which the user is not authorized to access. The author wrote a couple more posts which are variations on the idea, extending it to React-Router routes and other components, but at their hearts all these approaches seem to rely on the same flawed idea: the client-side code decides what to do based on data in the store at the time the components are composed. And that seems like a problem to me - what's to stop an enterprising hacker from messing around with the code to get access to stuff?
I've thought of three different approaches, none of which I'm very happy with:
I could certainly write my authorization code in such a way that the client-side code is constantly checking with the server for authorization, but that seems wasteful.
I could set the application up so that modules are pushed to the client from the server only after the server has verified that the client has authority to access that code. But that seems to involve breaking my code up into a million little modules instead of a nice, monolithic bundle (I'm using browserify).
Some system of server-side rendering might work, which would ensure that the user could only see pages for which the server has decided they have authority to see. But that seems complicated and also seems like a step backward (I could just write a traditional web app if I wanted the server to do everything).
So, what is the best approach? How have other people solved this problem?
If you're trying to protect the code itself, any approach that either sends that code to the client, or sends the code that is able to load it, would be a problem. Therefore even traditional approaches with code splitting might be problematic here, since they reveal the filename of the bundle. You could protect it by requiring a cookie on the server, but this seems like a lot of fuss.
If hiding the internal code from unauthorized users is a requirement for your application, I would recommend splitting it into two separate apps with separate bundles. Going from one to another would require a separate request but this seems to be consistent with what you want to accomplish.
Great question. I'm not aware of any absolute best practices floating around out there that seem to outstrip others, so I'll just provide a few tips/thoughts here:
- A remote API should handle the actual auth, of course.
- Sessions need to be shared, so a store like Redis is usually a good idea, especially for fast reads.
- If you're doing server-side rendering that involves hydration, you'll need a way to share the session state between server and client. See the link below for one way to do universal React.
- On the client, you could send down a session cookie or JWT, read it into memory (maybe using Redux and keeping it in your state tree?) and use middleware (à la Redux?) to set it as a header on requests — see the sketch after this list.
- On the client, you could also rely on localStorage to save the cookie/JWT.
- Maybe split the code into two bundles: one for auth, one for the actual app logic?
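Here is a sketch of the "keep the token in memory and set it as a header" idea, as a tiny fetch wrapper rather than Redux middleware; getToken is whatever reads the token from your store or localStorage:

```js
// Attach the session token to every API request
function authFetch(url, options = {}, getToken) {
  const token = getToken();
  return fetch(url, {
    ...options,
    headers: {
      ...(options.headers || {}),
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
  });
}

// Usage:
// authFetch('/api/profile', {}, () => localStorage.getItem('jwt'))
//   .then((res) => res.json());
```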
See also:
https://github.com/erikras/react-redux-universal-hot-example for hydration example
https://github.com/erikras/react-redux-universal-hot-example/issues/608
As long as the store does not contain data the user is not authorized to have, there shouldn't be too much of a problem even if a hacker inspects the source and sees modules/links they shouldn't have access to.
The state inside the store, as well as critical logic, comes from services, and those need to be secured whether it's an SPA or not — but especially in an SPA.
Also: server-side rendering with Redux isn't too complex. You can read about it here:
http://redux.js.org/docs/recipes/ServerRendering.html
It's basically only used to serve a root HTML document with a predefined state. This further improves security and loading speed, but does not defeat the idea behind SPAs.
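The "root HTML with a predefined state" idea, roughly as in the Redux server-rendering recipe linked above — a sketch where reducer and App are your own modules:

```jsx
// server.js — render the app and embed only state this user may see
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import reducer from './reducers';
import App from './App';

const app = express();

app.get('*', (req, res) => {
  const store = createStore(reducer);
  const html = renderToString(
    <Provider store={store}>
      <App />
    </Provider>
  );
  // Escape < so the serialized state can't close the script tag early
  const preloadedState = JSON.stringify(store.getState()).replace(/</g, '\\u003c');
  res.send(`<!DOCTYPE html>
<html><body>
  <div id="root">${html}</div>
  <script>window.__PRELOADED_STATE__ = ${preloadedState}</script>
  <script src="/bundle.js"></script>
</body></html>`);
});
```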

What's the correct way of handling web requests during database maintenance?

Scenario: You are going to do scheduled database maintenance. You will hence be unable to serve dynamic content (just assume the caching system in front of the database also needs to be maintained).
During that time, what's the correct way of handling web requests trying to access a dynamic resource?
What's the correct HTTP error code, if any, that goes along with the notice that your service is currently not available? Should you use errors in the 5XX range?
What are the implications in terms of SEO? Will it hurt if search engine crawlers try to access your site and see lots of error codes or pages with the same notice instead of dynamic content? Can you easily recover from that?
503 Service Unavailable is the correct response to use in this situation. Send a Retry-After header along with it: search engines treat a 503 as a temporary condition, so a short maintenance window with that status won't get your pages deindexed.
Depending on how your site works, you could just put up a static HTML page replacing everything, saying that the site is undergoing maintenance — served with that 503 status code.
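A minimal sketch of such a maintenance switch (Express assumed; the MAINTENANCE flag and file path are placeholders):

```js
// Serve the maintenance page with a 503 for every request while the flag is set
const express = require('express');
const path = require('path');
const app = express();

const MAINTENANCE = process.env.MAINTENANCE === '1';

app.use((req, res, next) => {
  if (!MAINTENANCE) return next();
  res
    .status(503)                 // Service Unavailable
    .set('Retry-After', '3600')  // hint to crawlers: come back in ~an hour
    .sendFile(path.join(__dirname, 'maintenance.html'));
});
```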
