I have created a website that uses a large amount of data from the server, and it slows down the display of the data. I want to know the best way to cache the data in React and update it only when there is a change in the database. Thanks
There are a lot of strategies you can use to cache data:
Use a library like React Query or Relay that caches data automatically (see the first sketch below).
Use a library like redux-persist along with redux to cache data.
Alternatively, if performance is suffering because you are showing a big list of data, you should use client-side virtualization with a library like react-virtualized (see the second sketch below).
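For the first option, here's a minimal sketch using React Query v3 (the /api/items endpoint and the 'items' query key are assumptions). Responses are cached under the query key; staleTime controls how long the cached copy is served without asking the server again, and refetch-on-focus quietly picks up database changes:

```jsx
import React from 'react';
import { QueryClient, QueryClientProvider, useQuery } from 'react-query';

const queryClient = new QueryClient();

function ItemList() {
  // Cached under the 'items' key; the component re-renders automatically
  // when fresh data arrives from a background refetch.
  const { data, isLoading, error } = useQuery(
    'items',
    () => fetch('/api/items').then((res) => res.json()),
    {
      staleTime: 5 * 60 * 1000,    // serve the cache for 5 minutes
      refetchOnWindowFocus: true,  // silently re-check the server on focus
    }
  );

  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {data.map((item) => (
        <li key={item.id}>{item.name}</li>
      ))}
    </ul>
  );
}

export default function App() {
  return (
    <QueryClientProvider client={queryClient}>
      <ItemList />
    </QueryClientProvider>
  );
}
```

Note that the cache can't know by itself when the database changes: you either let it go stale and refetch (as above), call queryClient.invalidateQueries('items') after a mutation, or push an invalidation over a websocket.

And a sketch of the virtualization option with react-virtualized: only the rows that fit in the 400px viewport are actually mounted, however long the list is:

```jsx
import React from 'react';
import { List } from 'react-virtualized';

// A deliberately huge list to show that render cost stays constant.
const items = Array.from({ length: 100000 }, (_, i) => `Row #${i}`);

export function VirtualList() {
  return (
    <List
      width={300}
      height={400}
      rowCount={items.length}
      rowHeight={24}
      rowRenderer={({ index, key, style }) => (
        <div key={key} style={style}>
          {items[index]}
        </div>
      )}
    />
  );
}
```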
I am developing a personal portfolio for myself using React and Gatsby, and I'm looking for a way to implement a gallery there with all my photography in it.
I need a way to efficiently store and retrieve a large number of high-res images to use in the gallery. I was thinking about using an AWS S3 bucket (because they have a free tier) and writing a simple API in Node for retrieving the images, but I want to know if there is a simpler or better option out there.
An S3 bucket is a good idea. If you put a CDN like Cloudflare in front of it and set the correct cache headers, you can limit the data transferred in and out of the bucket.
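For the cache headers, a minimal sketch with the AWS SDK for Node (the bucket and file names are hypothetical): setting Cache-Control at upload time tells Cloudflare and browsers they may keep each image for a year without going back to S3:

```js
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();

async function uploadPhoto(filePath, key) {
  // s3.upload streams the file and stores the headers alongside the object.
  await s3
    .upload({
      Bucket: 'my-portfolio-photos', // hypothetical bucket name
      Key: key,
      Body: fs.createReadStream(filePath),
      ContentType: 'image/jpeg',
      // Immutable + 1 year: safe because photo files rarely change in place.
      CacheControl: 'public, max-age=31536000, immutable',
    })
    .promise();
}

uploadPhoto('./photos/sunset.jpg', 'gallery/sunset.jpg');
```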
Gatsby has a plugin you can use so you won't need to write custom code:
https://www.gatsbyjs.org/docs/deploying-to-s3-cloudfront/
So I am working on an IoT SaaS project in React.
The user selects a sensor and a time range and receives the data visualized in charts, at a resolution of about 5 minutes.
My question is regarding best practices when handling fetching and saving of this data on the front-end.
I have tried always fetching, which works fine but makes the system kind of slow.
This is especially true while users are quickly switching back and forth between sensors.
I have also tried saving the data as JSON in the React state.
This significantly increases performance, but has a lot of other problems.
The browser starts complaining about RAM usage and can sometimes run into out-of-memory errors.
There is also a lot of data handling needed, such as saving several non-contiguous date ranges for the same sensor, and locating and merging overlapping date ranges, etc.
So I am wondering what the best practice is here: should I always fetch, or save on the front end? Are there any frameworks I could use to help with the front-end data handling, or do I have to do this manually?
Saving all the data on the front end is an antipattern, because of memory and out-of-sync issues. To make your system feel faster while still working from backend data, you can try the following:
Optimistic response. This technique runs a simplified copy of part of the backend logic on the front end while the actual request is in flight, so the user sees a result before the backend response reaches the browser. Let's say the backend does a +1 operation and the user sends the number 2 to perform it: on the front end you can use something like const optimisticResponse = (userData) => userData + 1, and then overwrite the value with the real backend data once it arrives, if needed.
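A minimal sketch of that idea in React (the /api/increment endpoint is hypothetical): the UI shows the optimistic guess immediately, then reconciles with whatever the server actually returns:

```jsx
import React, { useState } from 'react';

// Simplified copy of the backend's +1 logic, used only for the guess.
const optimisticResponse = (userData) => userData + 1;

function Counter() {
  const [count, setCount] = useState(2);

  async function increment() {
    const previous = count;
    setCount(optimisticResponse(previous)); // show the expected result now
    try {
      const res = await fetch('/api/increment', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ value: previous }),
      });
      const { value } = await res.json();
      setCount(value); // overwrite with the real backend answer, if it differs
    } catch {
      setCount(previous); // roll back on failure
    }
  }

  return <button onClick={increment}>{count}</button>;
}
```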
GraphQL allows you to reduce overhead by asking the backend only for the data you need.
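For example, a sensor chart can ask for just timestamps and values (the schema and endpoint below are assumptions, not part of the question):

```js
// Only the fields listed in the query come back, so switching sensors
// never pulls payloads the charts don't use.
async function fetchReadings(sensorId, from, to) {
  const query = `
    query Readings($sensorId: ID!, $from: String!, $to: String!) {
      readings(sensorId: $sensorId, from: $from, to: $to) {
        timestamp
        value
      }
    }
  `;

  const res = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { sensorId, from, to } }),
  });

  const { data } = await res.json();
  return data.readings;
}
```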
I'm making a React app that needs a pretty big (about 1 or 2 MB) JSON file, and I'm trying to figure out how to include the data in a way that minimizes loading time for the user. I'm pretty new to webpack, but so far I see two options:
Add the data to the React source and import it into the JSX
Put the JSON in the static file directory and fetch it within the JSX
One other constraint is that multiple pages will load the same data, so I was thinking that fetching might be better since the user would have the JSON cached after the first load.
I'm still pretty new to this and I might be missing something big so I appreciate any info you could give.
Importing a JSON file at build time to bundle it with your code is certainly possible. However, I would say keep the JSON as a separate file and fetch it with AJAX. A few reasons why:
With caching, if you bundle it with your JS file, any time you make an incremental change to your code you need to re-bundle your code and JSON, causing your users to unnecessarily re-download a 1-2 MB file just to get the code updates even if the JSON part hasn't changed. If the files are separate, the browser can cache each independently and only re-download when there's a change.
What if users don't need the JSON? Is it 100% necessary in every use-case? Keeping the JSON separate means you can load it only at the actual time it's needed for use instead of preemptively.
You mentioned needing the JSON on multiple pages - if it is cached, theoretically they will download it only once even if it's needed across multiple pages.
You may want to read up on how to leverage caching so that your server provides the proper headers for browsers to effectively utilize caching.
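Putting that together, a minimal sketch of the fetch-and-cache approach (the file path is hypothetical): the promise is memoized so every page shares one request per session, and the browser's HTTP cache handles repeat visits as long as the server sends suitable Cache-Control/ETag headers:

```js
// Module-level memoization: the file is requested at most once per session.
let dataPromise = null;

export function loadBigData() {
  if (!dataPromise) {
    dataPromise = fetch('/static/big-data.json').then((res) => {
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    });
  }
  return dataPromise;
}

// Usage from any page or component:
// loadBigData().then((data) => render(data));
```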
There is an application in React/Redux which has the functionality to load images through FileReader. After an image is loaded, I get it as a dataUrl. So, where is it best to save this data so it can be used in several React components? Storing a relatively large amount of data in the Redux store seems like a bad idea. At the same time, if the image data is saved somewhere else, the "single source of truth" idea breaks. Does anyone have any suggestions?
I'd suggest creating an Object URL instead of a dataUrl and saving that in the store, since it won't be a huge string anymore. Don't forget to revoke Object URLs when you no longer need them, if possible.
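A minimal sketch (the action type is made up; any shape works): the store keeps only the short blob: URL string, not megabytes of base64:

```js
function handleFileSelect(event, dispatch) {
  const file = event.target.files[0];

  // A short string like "blob:https://example.com/1a2b-..." that points at
  // the file's data without copying it into JS memory as base64.
  const objectUrl = URL.createObjectURL(file);

  dispatch({ type: 'image/loaded', payload: objectUrl });
}

// When the image is no longer needed, free the underlying Blob:
// URL.revokeObjectURL(objectUrl);
```

Any component can then render it with a plain img tag whose src is the stored URL.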
I want to make a forum in AngularJS, for which I want to make a few JSON data files where I can store and retrieve data. Some of the problems to keep in mind:
I want to add new data from the browser, not by adding it to the data file directly. Thank you for the assistance.
If you're just dealing with small amounts of data, JSON files are fine, but if your datasets get large, definitely switch to a database like MongoDB or DynamoDB.
In the browser you can use localStorage to store your JSON payloads, and you can access it using a library like Lawnchair:
http://brian.io/lawnchair/
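A minimal sketch with plain localStorage (Lawnchair wraps the same idea; the key name here is made up). localStorage only stores strings, so the payload goes through JSON.stringify/JSON.parse:

```js
const KEY = 'forumPosts';

function loadPosts() {
  const raw = localStorage.getItem(KEY);
  return raw ? JSON.parse(raw) : [];
}

function savePosts(posts) {
  localStorage.setItem(KEY, JSON.stringify(posts));
}

// Usage: append a post written in the browser without touching any file.
const posts = loadPosts();
posts.push({ author: 'alice', body: 'Hello forum!' });
savePosts(posts);
```

Keep in mind localStorage is per-browser and capped at around 5 MB, so other users won't see these posts; a real forum still needs a server-side store.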