React: always fetch data or save data on the front end?

So I am working on an IoT SaaS project in React.
The user selects a sensor and a time range and receives the data visualized in charts, at a resolution of about 5 minutes.
My question is about best practices for fetching and saving this data on the front end.
I have tried always fetching, which works fine but makes the system kind of slow.
This is especially true while users are quickly switching back and forth between sensors.
I have also tried saving the data, just as JSON in the React state.
This significantly increases performance, but has a lot of other problems.
The browser starts complaining about RAM usage and sometimes throws out-of-memory errors.
There is also a lot of data handling required, such as storing several non-contiguous date ranges for the same sensor, locating and merging overlapping date ranges, etc.
So I am wondering what the best practice is here: should I always fetch, or save on the front end? Are there any frameworks that could help me with this data handling on the front end, or do I have to do it manually?

Saving all data on the front end is an antipattern, because of memory and out-of-sync issues. To make your system feel faster while still relying on backend data, you can try the following:
Optimistic response. This technique runs a simplified part of the backend logic on the front end while the actual request is in flight, so the user sees a result before the backend data reaches the browser. Let's say the backend performs a +1 operation and the user sends the number 2 to perform it. On the front end you can use something like const optimisticResponse = (userData) => userData + 1, and then when the real data arrives from the backend you can overwrite the value if needed (see the sketch after this answer).
GraphQL allows you to reduce overhead by asking the backend only for the data you need.
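A minimal sketch of that optimistic-response idea in plain React, with no data-fetching library. The /api/increment endpoint and the incrementOnServer helper are hypothetical stand-ins for whatever backend call you actually make.

import { useState } from "react";

// Hypothetical backend call that performs the +1 operation server-side.
async function incrementOnServer(value) {
  const res = await fetch("/api/increment", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ value }),
  });
  return (await res.json()).result;
}

function Counter() {
  const [value, setValue] = useState(2);

  const handleIncrement = async () => {
    setValue(value + 1);                      // optimistic: mirror the backend's +1 immediately
    try {
      const confirmed = await incrementOnServer(value);
      setValue(confirmed);                    // overwrite with the real backend value if needed
    } catch {
      setValue(value);                        // roll back if the request fails
    }
  };

  return <button onClick={handleIncrement}>{value}</button>;
}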

Related

Fetching data every time the page is reloaded

I am creating an eCommerce-type website with React and Laravel with REST APIs. Every time a user enters the page www.mywebsite.com/products, a REST API is called to get all the products, and it takes about a second to load them all. If the user leaves the page and enters it again, the same request is made and the same products are loaded.
My question is: what's the best approach for this situation? Maybe I should fetch the products only once, store them inside localStorage, and access them using Context? On the other hand, most of the websites I visit seem to load products instantly, so maybe a REST API is even the wrong approach for this kind of website?
I would suggest checking your DB queries and investigating why this request takes so long. Have a look at the Laravel Telescope package: https://laravel.com/docs/9.x/telescope
In the requests tab you can follow your requests and see which queries are the most expensive ones. My guess is that you're lazy loading some relations on each product.
Next, I would probably cache the articles (except maybe the stock information). Use the built-in caching feature of Laravel or maybe a third-party package like this one.
Only then would I go for client-side optimizations. Here you could consider a state management library like Akita.
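If you do reach for a client-side cache, a minimal sketch could look like the following. The /api/products endpoint, cache key, and 5-minute TTL are assumptions to illustrate the idea, not part of any framework.

const CACHE_KEY = "products-cache";
const TTL_MS = 5 * 60 * 1000; // keep products for 5 minutes

export async function getProducts() {
  const cached = localStorage.getItem(CACHE_KEY);
  if (cached) {
    const { timestamp, data } = JSON.parse(cached);
    if (Date.now() - timestamp < TTL_MS) return data; // still fresh, skip the request
  }
  const res = await fetch("/api/products");            // hypothetical endpoint
  const data = await res.json();
  localStorage.setItem(CACHE_KEY, JSON.stringify({ timestamp: Date.now(), data }));
  return data;
}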

Cache issues: React + REST server behind CDN

I am looking for a pattern that would allow me to improve the UX for my users. I have a REST server running behind CloudFront, consumed by a plain React application on the front end.
I'll simplify my example to illustrate my issue.
I have an endpoint called GET /posts/<id>. When the browser asks for it, the response comes with max-age=180, which means it gets stored in the browser's cache, and any subsequent call to GET /posts/<id> will be served from the browser's cache for those 180 seconds, after which the browser hits the CDN again to try and obtain a fresh copy.
That is okay for most users. I don't mind if updates to any post take up to 3 minutes to propagate to all users. But there is one user who is the author of this post. That user can make changes to the post using PATCH /posts/<id>. Let's call that user The Editor.
Here's a scenario I have right now:
The Editor loads up the post page which then calls GET /posts/5
The CDN serves the latest copy to the front end.
The Editor then makes a change to the post and submits it to the back end via PATCH /posts/5.
The editor then refreshes his browser tab using Command-R (or CTRL-R).
As a result, the front end requests GET /posts/5 again, but gets the stale copy from before the changes, because 180 seconds haven't yet passed between the last GET and the GET issued after the PATCH.
What I'd like the experience to be is:
The Editor loads up the post page which then calls GET /posts/5
The CDN serves the latest copy to the front end.
The Editor then makes a change to the post and submits it to the back end via PATCH /posts/5.
After a Command-R browser tab refresh, the GET /posts/5 brings back a copy of the data with the changes the editor made via PATCH right away, regardless of the 180-second TTL before a fresh copy can be obtained.
As for the rest of the users, it's perfectly okay for them to wait up to 180 seconds before the change to the post propagates to them when they GET /posts/5.
I am using Axios, but I do note that SWR and React Query support mutations. To my understanding, this would allow the editor to declare a mutation for the object he just PATCHed on the server, so that any subsequent calls he makes to GET /posts/5 will be served from there, until a fresher version can be obtained from the backend.
My questions are:
Can SWR with "mutations" serve the mutated object via the GET /posts/5 transparently?
Will the mutation survive a hard browser tab refresh? Or a browser closure, re-opening, and a subsequent GET /posts/5?
Is there another pattern/best practice to solve that?
TL;DR: Just append a harmless, gibberish querystring to the end of the request GET /posts/<id>?version=whatever
Good question. I must admit I don't know the full answer to this problem, but I want to share one well-known technique among frontend devs.
The technique is called cache busting. I'm not sure if this is the best practice, but I'm pretty sure it's widely practiced, since it's so straightforward to understand.
The idea is simple: when you add a changed querystring to the end, you effectively change the URL, so no cache is hit and you sidestep the whole caching problem.
So the detailed steps of a solution for your particular use case would go like this:
Normally you'll just request GET /posts/<id> for all users
When a user logs in, a hash key is generated by whatever algorithm you like. For simplicity, let's just use an increasing integer and call it version. You store this version in localStorage so it survives page refreshes.
Now you need to distinguish whether the user is viewing his own post or someone else's. When he is viewing his own, you always use GET /posts/<id>?version=n
Whenever the user edits his post and hits the save button, you bump version from n to n+1
Next time he goes to the post view page, the app requests GET /posts/<id>?version=n+1, which is not cached, and so retrieves the up-to-date content.
One last thing: make sure your server safely ignores that ?version=n querystring (a rough sketch of this flow follows).
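A rough sketch of those steps, assuming Axios (which the question already uses); the helper names and localStorage keys are made up for illustration.

import axios from "axios";

function getVersion(postId) {
  return Number(localStorage.getItem(`post-version-${postId}`) || 0);
}

function bumpVersion(postId) {
  localStorage.setItem(`post-version-${postId}`, String(getVersion(postId) + 1));
}

// The Editor viewing their own post: always send the version querystring.
export function fetchOwnPost(postId) {
  return axios.get(`/posts/${postId}`, { params: { version: getVersion(postId) } });
}

// On save: PATCH the post, then bump the version so the next GET misses the cache.
export async function savePost(postId, changes) {
  await axios.patch(`/posts/${postId}`, changes);
  bumpVersion(postId);
}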
I'm sure there are other solutions to this problem. I'm no expert on server config and HTTP headers, so I'm not getting into that topic, but there must be something to look for.
As for a pure front-end solution, there's the Service Worker API for you to consider. The main point of this API is to let devs programmatically control cache strategies.
With this API you could leave your current app code as-is: just install a service worker, then use the same cache-busting technique in the background to fetch new content, or simply delete the cache (using the Cache API) when the user edits, or even fake a response for the GET /posts/<id> from the PATCH /posts/<id> the user just sent (see the sketch below).
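As a hedged sketch of that last idea: a service worker could watch for a successful PATCH /posts/<id> and drop the matching cached GET entry, so the next request goes to the network. The cache name "posts-cache" is an assumption; use whatever name your worker actually populates.

// service-worker.js
self.addEventListener("fetch", (event) => {
  const { request } = event;
  const isPost = /^\/posts\/\d+$/.test(new URL(request.url).pathname);

  if (request.method === "PATCH" && isPost) {
    event.respondWith(
      fetch(request).then(async (response) => {
        if (response.ok) {
          const cache = await caches.open("posts-cache");
          await cache.delete(request.url); // invalidate the stale GET entry for this post
        }
        return response;
      })
    );
  }
});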
Depending on what CDN you use, you can invalidate the cache manually when publishing updates to a post. For example, CloudFront lets you specify which paths you want fetched fresh on the next request.
For sites with lots of traffic but few updates this works pretty well, and is quite simple to implement. For sites with a lot of authors and frequently changing content you would need to get more creative though.
One strategy I've used in the past is a technique called object versioning: instead of invalidating the cached object, you publish a new version of it with a timestamp. This also means you need to publish a manifest file that your front end loads. The manifest contains the latest timestamps of all the content the page needs, and is on a much shorter TTL than the rest of the content. When you publish a new version of a post, you update its timestamp in the manifest, and the front end pulls the latest version of the post the next time the page loads.
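A small sketch of that manifest idea; the manifest path, shape, and query parameter are assumptions, not a standard.

// Fetch the short-TTL manifest first, then request the post with its
// published timestamp appended so the CDN serves the matching version.
async function loadPost(postId) {
  const manifest = await fetch("/content-manifest.json").then((r) => r.json());
  const version = manifest.posts[postId]; // e.g. an updated-at timestamp
  return fetch(`/posts/${postId}?v=${version}`).then((r) => r.json());
}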

Forms: Should I do a POST on each page of the form, or a single POST at the end of the form flow?

I have a form that is four pages. The user clicks next and this leads them to the next page of the form. On the fourth page the form is complete.
What is the best practice?
Do I do a POST after each page, so 4 different times, or should I do one big POST on the last and final page pushing all the user data to the database?
Each page posts to a different endpoint.
My form is created using redux-form and react.
Either works; the main advantages I see are:
Sending one complete form - advantages:
No database pollution
Less network overhead
Sending 4 partial forms - advantages:
You can see where each user stopped - this may be useful data if they are purchasing a service or signing up for an account. Do a lot of people fill out the first two sections only to see the third and navigate away?
You can use this to save the form server-side for people to complete later. You can also do this with Redux / Local Storage for session / browser storage, but you may want the functionality of a user starting a form on one device and completing it on another, requiring server-side storage of the form.
If you don't plan to implement server-side storage, and if you don't need the extra analytics of where users stop on the form, just go right ahead and send it all at once. At a minimum, I would suggest saving the form to Local Storage to make it easy for the user to pick up where they left off (a sketch follows).
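A minimal sketch of that Local Storage fallback; the storage key and per-page shape are just examples (with redux-form you could instead persist the relevant slice of the Redux store).

const STORAGE_KEY = "signup-form-draft"; // hypothetical key

// Call this whenever the user clicks "next" on a page.
export function saveDraft(page, values) {
  const draft = JSON.parse(localStorage.getItem(STORAGE_KEY) || "{}");
  draft[page] = values;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(draft));
}

// Call this on mount to pre-fill any pages the user already completed.
export function loadDraft() {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) || "{}");
}

// Call this after the final successful POST.
export function clearDraft() {
  localStorage.removeItem(STORAGE_KEY);
}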
I want to say it depends on your database model and the data you are collecting from the form. It may be that the data from the first page is enough to make a desired database modification; in that case, it may be nicer to send the POST data immediately. However, if that data may be needed in a future query, it might be better to send it all at the end to avoid re-sending old data. Some may also argue that doing multiple POSTs is worse in terms of network usage.
Note: most importantly, try to avoid sending duplicate data.

How to handle millions of records in react js and sails js

I have a table with millions of records. I am using Sails.js for my server-side code, React.js to render data in the view, and MySQL as my DBMS. So what is the best method to retrieve the data quickly,
so that the end user doesn't feel like they are loading a huge amount of data, which affects the UI as well?
Shall I bring only 50 records first, show pagination at the bottom using pagination logic, and then fetch the rest in the background using socket.io?
Or is there a better way of handling it?
This really depends on how you expect your user to go through the data.
You will probably want an API call for getting only the first page of data (likely in such a way that you can fetch any page: api/my-data/<pagesize>/<pagenumber>).
Then it depends on what you expect your user to do. Is he going to click through every page to see all the data? In that case, it seems ok to load all the others as well as you mentioned. This seems unlikely to me, however.
If you expect your user to only view a few pages, you could load the data for the next page in the background (api/my-data/<pagesize>/<currentpage+1>), and then load the next page every time the user navigates.
Then you probably still need to support jumping to a certain page number, where you will need to check if you have the data or not, and then show a loading state (or nothing) while the data is being fetched.
All this said, I don't see why you would need socket.io instead of normal requests (you really only need socket.io if the server needs to be able to make 'requests' to the client, so to speak).
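A sketch of that page-plus-prefetch pattern, using the api/my-data/<pagesize>/<pagenumber> shape from above; the in-memory Map cache is an implementation choice rather than a requirement.

const PAGE_SIZE = 50;
const pageCache = new Map();

async function fetchPage(pageNumber) {
  if (pageCache.has(pageNumber)) return pageCache.get(pageNumber); // already loaded or prefetched
  const data = await fetch(`/api/my-data/${PAGE_SIZE}/${pageNumber}`).then((r) => r.json());
  pageCache.set(pageNumber, data);
  return data;
}

export async function showPage(pageNumber) {
  const data = await fetchPage(pageNumber);   // show a loading state only on a cache miss
  fetchPage(pageNumber + 1).catch(() => {});  // quietly prefetch the next page in the background
  return data;
}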

Single Server request per page vs SPA Application

I had the idea to make an SPA using AngularJS and then just send AJAX updates to the server when I need to.
My initial idea was to make the client application fly, but if I have to do an AJAX round trip to the server, I think the time would be approximately the same as requesting a single web page.
Requesting a page just transfers a few more bytes of data; it's not like I'm requesting 20 resources as in this article: https://community.compuwareapm.com/community/display/PUB/Best+Practices+on+Network+Requests+and+Roundtrips
I would be requesting one page or resource per request.
So in the end, even if I build my client-side application as an SPA using AngularJS, these requests (which would have to be synchronous and show a please-wait message until they return, as I don't want the user to take more actions before I make sure their request passes validation and is processed correctly) would take some time and make the user wait, just about the same time as requesting a full page.
I think SPA pages would be very useful if I had, say, a wizard in my app with multiple pages/steps and submitted the results of the wizard to the server at the end, which I don't.
Also found this article:
https://help.optimizely.com/hc/en-us/articles/203326524-AngularJS-Backbone-js-and-other-Single-Page-Applications
One of the biggest advantages of Single Page Apps is that they reduce data transfer. As a result, pages after the initial loading usually can be displayed faster and seem more interactive.
But I don't believe this last quote is really true.
Am I right, or is there a way that I'm not seeing to build an application that would look like it's executing locally?
I know people will start saying "it depends on what you want", but let's focus on this scenario where there are no wizards.
Whatever you said is right. But most of the frameworks (Angular, Backbone) cache the HTML templates in the browser, so rendering is pretty fast compared to normal applications. Traditional apps have to fetch the HTML from the server for each request, which is time-consuming.
Hope this helps you!!!
If you want to go through that synchronous server-side validation step for each page request, then there is probably no big advantage to using AngularJS.
If you are requesting a page and then manipulating that page's contents once it's loaded, you might want to consider AngularJS. A good example would be requesting a page that displays a list of items. Now let's say we want to search that list or order it in different ways. Rather than using AJAX to ask the server to filter the list and then re-render it, it can be much faster to use AngularJS to filter and re-render the list without making any further requests to the server (see the sketch below).
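To make that concrete, here is a sketch of purely client-side filtering (written in React rather than AngularJS, to stay consistent with the other examples in this thread); the item shape is assumed.

import { useMemo, useState } from "react";

function ItemList({ items }) {
  const [query, setQuery] = useState("");

  // Re-filter the already-loaded list in memory on every keystroke - no extra requests.
  const visible = useMemo(
    () => items.filter((item) => item.name.toLowerCase().includes(query.toLowerCase())),
    [items, query]
  );

  return (
    <div>
      <input value={query} onChange={(e) => setQuery(e.target.value)} placeholder="Search" />
      <ul>
        {visible.map((item) => (
          <li key={item.id}>{item.name}</li>
        ))}
      </ul>
    </div>
  );
}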
