How do I leverage immutability in my redux-react application? - reactjs

I see that in redux, whenever there is a state change, a new state is created instead of mutating the old one. I want to know how creating a new object every time is beneficial for us. One thing I read is that it helps in debugging, since all the states from the start of the app are present in the browser, so we can switch to any state we want. But what about memory, wouldn't storing all the stores eat memory? Correct me please. And how would we leverage immutability to increase the performance of our app? I am new to react and redux, and I'm not able to find the exact answer. Please help me in very simple words. :)

But what about the memory, storing all the stores would eat memory
First of all, in redux there is only a single store. Second, given that the store only keeps plain serializable data, the memory overhead for even a very complex application is so small that you don't really have to worry about it. You'll run into much bigger problems long before memory becomes one, and when you do, you deal with it then, not sooner: doing so earlier would be premature optimization. There is more info in the redux FAQ: http://redux.js.org/docs/FAQ.html#performance-state-memory
and how would we leverage immutability to increase the performance of our app
The concept is simple: since React bases its re-rendering on comparing whether something in the tree has changed, those comparison operations need to be as fast as possible. Enter immutability. Given two objects, you don't need to go key by key to determine whether they are the same or not (e.g. Angular 1.x works this way); you just compare obj1 === obj2 and boom, you're done. If the two objects point to the same address in memory (behind the scenes) they're equal, otherwise they're not.
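To make that concrete, here is a minimal sketch (plain JavaScript, not from the question) contrasting a key-by-key deep comparison with the constant-time reference check that immutability makes safe:

// Deep comparison: cost grows with the size of the object tree.
function deepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== 'object' || a === null || typeof b !== 'object' || b === null) return false;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((key) => deepEqual(a[key], b[key]));
}

// Reference comparison: constant time, no matter how big the state is.
// This is only safe if reducers never mutate: an unchanged branch keeps its
// old reference, a changed branch gets a brand new one.
const prevState = { todos: [{ id: 1, text: 'write docs' }] };
const nextState = { ...prevState, todos: [...prevState.todos, { id: 2, text: 'review' }] };

console.log(prevState === nextState);             // false -> something changed, re-render
console.log(deepEqual(prevState, prevState));     // true, but had to walk every key

This shallow, reference-based check is what React.PureComponent, shouldComponentUpdate and react-redux's connect rely on when deciding whether to re-render.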

Related

React optimum useEffects

I'm currently working on a single page dapp for lucky bunny farm, it all works and will be released soon. But I'm wondering if maybe I've engineered it entirely wrong, because I have put most of the business logic within a collection of useEffects (6). Because I don't have any other pages on the site, I've not created hooks as I would in other situations, because there's no benefit from a code-writing point of view.
It seems to me that having 6 useEffects working off the state variables I've set might be a little much. Is there a performance gain from reducing them or doing something else? I have created components for the various on-page assets, and those internally have their own state and useEffect hooks too. What's the danger in a "large" number of these hooks, and at what point should I consider optimising for performance?
Or am I overthinking this, and actually this is quite normal and I should only worry when I have, say, 20 hooks all doing their own thing.
Additionally, I tend to use state for single variables instead of objects. Is there a performance gain from using a single state with an object?
So to use an example, I have the following states:
dappActionStarted
dappActionCompleted
dappActionError
Would it be more efficient to have something like:
{
started
completed
error
}
And then do a single useEffect for that object, as opposed to the three I would need to manage 3 separate state vars.
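For what it's worth, a sketch of the object-based version described above could look like the following (the names dappAction, useDappAction, etc. are made up for illustration; any real performance difference would need profiling):

import { useEffect, useState } from 'react';

// One state object instead of three separate useState calls.
function useDappAction() {
  const [dappAction, setDappAction] = useState({
    started: false,
    completed: false,
    error: null,
  });

  // A single effect reacts to any change in the combined object.
  useEffect(() => {
    if (dappAction.error) {
      console.error('dapp action failed', dappAction.error);
    } else if (dappAction.completed) {
      console.log('dapp action finished');
    }
  }, [dappAction]);

  const start = () => setDappAction({ started: true, completed: false, error: null });
  const complete = () => setDappAction((prev) => ({ ...prev, completed: true }));
  const fail = (error) => setDappAction((prev) => ({ ...prev, error }));

  return { dappAction, start, complete, fail };
}

Note that React already batches multiple setState calls made inside the same event handler, so merging the three variables into one object is mostly about ergonomics rather than a guaranteed performance win.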

In React, what is a better place to store big data arrays?

I'm building an interactive plot in React where the user can select multiple variables and add them to a single plot. Each of these "variables" represent scientific data and are stored as arrays with 100k to 3 million elements.
I don't feel very inclined to store these variables in the React state as their contents never change and I'd prefer to control when the UI needs to re-render manually instead of letting React automagically traverse these arrays trying to find out if they have changed, which renders my application unresponsive.
Is there any preferred way of storing this data and sharing it with React? Right now I just store everything in a global variable.
There aren't many places you can store it, though. If the data never changes, there's no downside to storing it in state: no changes means no re-renders. It sounds like there's quite a bit of it, so you can't store it in local storage due to size limits. If you build a data set inside React by doing a bunch of API calls, revisit the API itself: it should return the data you care about, rarely a bunch of data segments you have to splice together yourself.
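To illustrate the "no changes means no re-renders" point: React never walks your state looking for changes, it only compares references, so a large array that is set once and never replaced costs nothing on re-render. A rough sketch (the loadSamples stub and all names here are invented for illustration):

import { useMemo, useState } from 'react';

// Stand-in for however the 100k-3M element array is actually produced
// (API call, file parse, etc.) -- purely illustrative.
function loadSamples(variableId) {
  return Float64Array.from({ length: 1_000_000 }, () => Math.random());
}

function Plot({ variableId }) {
  // The lazy initializer runs once per mount; afterwards React only keeps
  // the reference around and never traverses the array.
  const [samples] = useState(() => loadSamples(variableId));

  // Derived values are memoized against the stable reference.
  const max = useMemo(
    () => samples.reduce((m, v) => (v > m ? v : m), -Infinity),
    [samples]
  );

  return <canvas width={800} height={400} data-max={max} />;
}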

Handle large data and drawings - React Redux Immutable

I'm developing a piece of software, used by mechanical engineers, which draws some elements on the screen.
I'm storing my project data in the reducer store. This project data has tons of objects, arrays, etc. I mean, for each element on the screen there is data stored in the project.
When the user makes an action, I must recalculate the project and set it in the redux store again, for example:
...
case SET_ACTIVE_UNIT:
  let unit = action.unit;
  project = state.get('project').toJS(); // I'm using immutable
  project = ProjectLogic.addActiveUnit(project, unit, action.shiftKey);
  return state.set('project', fromJS(project));
...
Ok, you will say that this kind of usage is not right, because I'm reading all the data and resetting the whole data back into the reducer. You will advise me to use state.setIn, but it is really impossible, because the addActiveUnit function recalculates the project and about 20% of the project data will be changed. So I can't handle this change with state.setIn.
My problem starts here: if there are 60-80 elements drawn on the screen, rendering performance slows down after return state.set('project', fromJS(project)); and every new item makes it worse.
How can I handle this problem?
Thanks all
As a general observation, toJS() is considered to be the most expensive API in Immutable.js, and should be avoided as much as possible.
My initial advice would be to not use Immutable.js.
Instead, you might want to look at using immer to handle the immutable update logic.
Also, our new redux-starter-kit package uses Immer internally.
Beyond that, I'd suggest doing some profiling to see where exactly the perf bottlenecks actually are.
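As a rough sketch (not a drop-in replacement), the reducer from the question might look like this with Immer and a plain-object state, assuming ProjectLogic.addActiveUnit returns the updated project:

import { produce } from 'immer';

// Plain JS objects instead of an Immutable.js Map: no toJS()/fromJS() round trips.
const initialState = { project: { units: [] } };

function projectReducer(state = initialState, action) {
  switch (action.type) {
    case SET_ACTIVE_UNIT:
      // Immer lets us "mutate" a draft and produces a new immutable state,
      // structurally sharing the branches that were not touched.
      return produce(state, (draft) => {
        draft.project = ProjectLogic.addActiveUnit(draft.project, action.unit, action.shiftKey);
      });
    default:
      return state;
  }
}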

React Simple Global Entity Cache instead of Flux/React/etc

I am writing a little "fun" Scala/Scala.js project.
On my server I have Entities which are referenced by uuid-s
(inside Ref-s).
For the sake of "fun", I don't want to use flux/redux architecture but still use React on the client (with ScalaJS-React).
What I am trying to do instead is to have a simple cache, for example:
when a React UserDisplayComponent wants to display the Entity User with uuid=0003
then the render() method calls into the Cache (which is passed in as a prop)
let's assume that this is the first time that the UserDisplayComponent asks for this particular User (with uuid=0003) and the Cache does not have it yet
then the Cache makes an AjaxCall to fetch the User from the server
when the AjaxCall returns, the Cache triggers a re-render
BUT! now when the component asks the Cache for the User again, it gets the User Entity from the Cache immediately and does not trigger an AjaxCall
The way I would like to implement this is the following :
I start a render()
"stuff" inside render() asks the Cache for all sorts of Entities
Cache returns either Loading or the Entity itself.
at the end of render the Cache sends all the AjaxRequest-s to the server and waits for all of them to return
once all AjaxRequests have returned (let's assume that they do - for the sake of simplicity) the Cache triggers a "re-render()" and now all entities that have been requested before are provided by the Cache right away.
of course it can happen that the newly arrived Entity-s will trigger the render() to fetch more Entity-s, for example if I load an Entity of type case class UserList(ul: List[Ref[User]]). But let's not worry about this now.
QUESTIONS:
1) Am I doing something really wrong if I am doing the state handling this way?
2) Is there an already existing solution for this?
I looked around but everything was FLUX/REDUX etc... along these lines... - which I want to AVOID for the sake of:
"fun"
curiosity
exploration
playing around
I think this simple cache will be simpler for my use-case, because I want to take the "REF"-based "domain model" over to the client in a simple way: as if the client were on the server and the network were infinitely fast with zero latency (this is what the cache would simulate).
Consider what issues you need to address to build a rich dynamic web UI, and what libraries / layers typically handle those issues for you.
1. DOM Events (clicks etc.) need to trigger changes in State
This is relatively easy. DOM nodes expose a callback-based listener API that is straightforward to adapt to any architecture.
2. Changes in State need to trigger updates to DOM nodes
This is trickier because it needs to be done efficiently and in a maintainable manner. You don't want to re-render your whole component from scratch whenever its state changes, and you don't want to write tons of jquery-style spaghetti code to manually update the DOM as that would be too error prone even if efficient at runtime.
This problem is mainly why libraries like React exist, they abstract this away behind virtual DOM. But you can also abstract this away without virtual DOM, like my own Laminar library does.
Forgoing a library solution to this problem is only workable for simpler apps.
3. Components should be able to read / write Global State
This is the part that flux / redux solve. Specifically, these are issues #1 and #2 all over again, except as applied to global state as opposed to component state.
4. Caching
Caching is hard because the cache needs to be invalidated at some point, on top of everything else above.
Flux / redux do not help with this at all. One of the libraries that does help is Relay, which works much like your proposed solution, except way more elaborate, and on top of React and GraphQL. Reading its documentation will help you with your problem. You can definitely implement a small subset of relay's functionality in plain Scala.js if you don't need the whole React / GraphQL baggage, but you need to know the prior art.
5. Serialization and type safety
This is the only issue on this list that relates to Scala.js as opposed to Javascript and SPAs in general.
Scala objects need to be serialized to travel over the network. Into JSON, protobufs, or whatever else, but you need a system for this that will not involve error-prone manual work. There are many Scala.js libraries that address this issue such as upickle, Autowire, endpoints, sloth, etc. Key words: "Scala JSON library", or "Scala type-safe RPC", depending on what kind of solution you want.
I hope these principles suffice as an answer. When you understand these issues, it should be obvious whether your solution will work for a given use case or not. As it is, you didn't describe how your solution addresses issues 2, 4, and 5. You can use some of the libraries I mentioned or implement your own solutions with similar ideas / algorithms.
On a minor technical note, consider implementing an async, Future-based API for your cache layer, so that it returns Future[Entity] instead of Loading | Entity.
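The question is Scala.js, but the shape of that suggestion can be sketched in plain JavaScript with a Promise standing in for Future[Entity] (the fetchEntity callback and the whole API here are assumptions, not the poster's code):

// Minimal async entity cache: every lookup returns a Promise, and repeated
// lookups for the same uuid share the same in-flight (or resolved) request.
class EntityCache {
  constructor(fetchEntity) {
    this.fetchEntity = fetchEntity; // (uuid) => Promise of an entity
    this.entries = new Map();       // uuid -> Promise of an entity
  }

  get(uuid) {
    if (!this.entries.has(uuid)) {
      this.entries.set(uuid, this.fetchEntity(uuid));
    }
    return this.entries.get(uuid);
  }

  invalidate(uuid) {
    this.entries.delete(uuid); // cache invalidation still has to be decided somewhere
  }
}

// Usage: the caller awaits the Promise instead of branching on Loading | Entity.
const cache = new EntityCache((uuid) => fetch('/api/entities/' + uuid).then((res) => res.json()));
cache.get('0003').then((user) => console.log(user));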

React Redux anti pattern to calculate and store results in store?

I have at least two approaches to solve a problem.
Re-perform calculations inside views on each update (map, filter, find)
Keep a "CurrentState" object inside my redux state.
I chose to create a CurrentState object that stores my calculated result (the results are calculated inside the reducer function).
Approach 2 saves processor calculations, but it feels dirty.
Is it considered an anti-pattern to have a CurrentState object (updated from the reducer) inside the state?
Thank you in advance!
Not totally clear from your question, but I assume that you are talking about a situation where you can compute the "result" from other data in your store, and you are asking whether storing it (redundantly) in the store is a good pattern.
Well, generally I would not recommend it, as keeping redundant data in the store can lead to inconsistent-data bugs if you forget to keep the data in sync (e.g. when you or someone else working on the project changes one piece of data and forgets to change the other place).
The standard pattern for this is usually to use selectors and memoization (see https://github.com/reactjs/reselect). The idea is to keep only normalized data in your store and to use selectors in your views. Selectors are responsible for "caching" (memoizing) expensive computations. Another advantage of selectors is that they provide a level of indirection for your application, so you can easily evolve your state (as the project grows) without affecting your whole application. On the other hand, for small and one-off projects selectors can feel like just "another" bit of boilerplate, but in general I recommend using them.
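A minimal reselect sketch of that idea (the todos/filter shape is invented for illustration, not taken from the question):

import { createSelector } from 'reselect';

// Keep only normalized data in the store...
const selectTodos = (state) => state.todos;
const selectFilter = (state) => state.filter;

// ...and derive the "current" view in a memoized selector. The filter function
// only re-runs when state.todos or state.filter actually change.
const selectVisibleTodos = createSelector(
  [selectTodos, selectFilter],
  (todos, filter) => todos.filter((todo) => todo.status === filter)
);

// In a component (react-redux): const visibleTodos = useSelector(selectVisibleTodos);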
This was a hard anti-pattern.
If you are reading this, chances are you don't fully understand redux, and I would like to recommend
https://github.com/leoasis/redux-immutable-state-invariant
