I've come across this link that briefly describes the advantages of the Redux selector pattern:
https://gist.github.com/abhiaiyer91/aaf6e325cf7fc5fd5ebc70192a1fa170
They provide an example of calling the filtering inside mapStateToProps:
function mapStateToProps(state) {
  return {
    incompleteItems: getIncompleteItems(state)
  }
}
However, since mapStateToProps is called whenever the state changes, even if the change is completely unrelated to the items list, isn't there a performance penalty in this case? Wouldn't it be better to do the filtering inside the reducer?
Thanks.
The above code snippet just suggests that you extract the "calculations" from your mapStateToProps function into another function.
That other function is easily testable and, best of all for the concern you raised, it is where you can avoid recalculating on every cycle.
Now we can reuse this logic across many components mapping this exact state! We can unit test it as well! More importantly we can now memoize this state with reselect
In this case, you can use the reselect package:
https://github.com/reactjs/reselect
It will take care of memoizing the result, recalculating it only when the relevant input state actually changes.
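For illustration, here is a minimal sketch of what that extraction plus memoization could look like - the state shape and the getItems name are assumptions, not taken from the gist:

import { createSelector } from 'reselect'

// Plain input selector - trivially unit-testable (assumed state shape).
const getItems = state => state.todos.items

// Memoized selector: the filter only re-runs when getItems(state)
// returns a new reference.
const getIncompleteItems = createSelector(
  [getItems],
  items => items.filter(item => !item.complete)
)

function mapStateToProps(state) {
  return {
    incompleteItems: getIncompleteItems(state)
  }
}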
There would indeed be a wasted re-render if you've bound to a prop which didn't actually change, but your other props did. There are 2 things you can do in this case -
Use shouldComponentUpdate -
You can implement the shouldComponentUpdate lifecycle method on your component. Simply checking whether the relevant prop has changed, and returning false when it hasn't, will give you immediate results.
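For instance, a minimal sketch (the incompleteItems prop name is only an assumption for illustration):

shouldComponentUpdate(nextProps) {
  // Only re-render when the prop this component actually cares about changes.
  return nextProps.incompleteItems !== this.props.incompleteItems;
}

A reference check like this only works reliably if the value is memoized or immutable, which is where the next point comes in.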
Use Immutable data structures -
You can (and, in my opinion, should) use immutable data structures for your application. This not only helps you reason about your app, but also helps with debugging. Using immutable data structures for your local state also allows for advanced memoization patterns and optimizations.
Check out immutable-js by Facebook for more details.
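A rough sketch of why reference equality becomes cheap with immutable-js:

import { Map } from 'immutable'

const state = Map({ filter: 'ALL', items: Map() })

// "Updates" return new objects; untouched branches keep their identity.
const next = state.set('filter', 'COMPLETED')

console.log(next === state)                             // false - something changed
console.log(next.get('items') === state.get('items'))   // true - cheap reference check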
State modelling - This is not really something we developers think a lot about while coding our super cool apps, but it is worth considering when thinking about performance. Keeping re-renders local to the actual data changes is a painful task, and tools such as reselect and immutablejs, plus side-effect management libs such as redux-saga and redux-thunk, help a lot.
Hope this turns out to be useful for you! Cheers! 😇
Related
I have an Object of data that I store as my context. It gets filled with API responses and I feel like performance-wise I'm doing something wrong.
I tried to look into ImmutableJS, but there is no reference to combining ImmutableJS with the Context API. Does combining them not give any benefits? And if it does, do you have an example of how to build such a thing?
Whether you use context directly, add Redux, or pass props down through your component tree makes no difference to how you use Immutable. ImmutableJS simply makes sure that an object does not change unintentionally and that it can be checked for changes with fast reference equality.
The problem you describe could arise from the fact that your context object changes whenever any response data is modified. Therefore it would trigger a rerender on all context consumers (see caveats). I like Immutable and use it in a large project, but if I am correct, it won't make a difference, you would have to fix the root cause or handle changes with shouldComponentUpdate.
I suggest you look into redux (or some other context provider) or show us code.
Don’t try solving your problem by introducing a dependency. It won’t be a magic bullet.
From what you describe, your context object changes frequently and is causing your component tree to rerender too often.
Maybe try creating multiple contexts that are each relevant only to parts of the app. That way re-renders will only happen in the subtrees that use the specific context.
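A rough sketch of that idea (the UserContext/ResponsesContext names are made up for illustration):

import React from 'react'

// Separate contexts for unrelated pieces of data.
const UserContext = React.createContext(null)
const ResponsesContext = React.createContext(null)

function App({ user, responses, children }) {
  return (
    <UserContext.Provider value={user}>
      <ResponsesContext.Provider value={responses}>
        {children}
      </ResponsesContext.Provider>
    </UserContext.Provider>
  )
}

// A component that only consumes UserContext will not re-render
// when only the responses value changes.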
I'm pretty new to Redux and would like to use it in my application, but I'm stuck at the architecture/design phase for the Redux part. Here are my requirements and my suppositions regarding the design.
Application details:
SPA with AngularJS. Other libs used ng-redux, reselect, rxjs.
Component details:
Re-usable grid component to render huge amounts of data.
My idea:
Create a plug-n-play kind of component-based architecture, where all the internal components of the grid are independent of the parent/composing component like search, sort, row, header, cell.
All the components will have their own set of reducer, action, selector, and slice of state from the store.
Because all the components have their own reducers and can be plugged-in on demand, I need them to be lazily registered to the store instead of being accumulated in one place.
Some of the components, like search and sort, as well as having their own state, can also affect other components' state, e.g. setting query parameters (searchText, sortOrder, etc.) used to fetch the grid data, which would be handled by another component (or components).
My thoughts:
For the 1st point, I'm looking into reselect for supplying the dependent slice of state.
For the 2nd point, I'm still confused about whether to use combineReducers or replaceReducer for the lazy registration. I feel combineReducers will not fit if I need access to multiple parts of the state.
For the 3rd point, I'm thinking of following approaches:
a. Passing the entire state via getState() wherever required to update multiple parts of the state. This approach, though, gives me the feeling of improper use of Redux.
b. Component A fires its own action which updates its part of the state, then another action is fired for the other component B to update its slice of state. This also feels like it breaks the whole idea of Redux. The concept of side effects could be used here, though I don't know how; maybe redux-saga, redux-thunk, etc.
NOTE: Neither approach should lead to a component knowing about the other components; whatever has to be done will be done by passing a generic config object like { actionsToFire: ['UPDATE_B'] }.
I need state management while navigating back and forth between the pages of the application, but I don't require hot-reloading, action-replay, or pre-fetching application state from server-side.
Components will also be responsible for destroying their state when it is no longer required. And the state will have a normalized structure.
I know the requirements might seem weird or not often seen, but I would like to keep them that way.
A few things I already know:
I don't have to use Redux, as the classic article from Dan says, but I think I need it here in this case.
I know about Smart and Dumb components; most of my components might seem smart (i.e. aware of the application state), but that is how I want to keep them. I might be wrong.
Diagram of the grid component: [Grid Component Diagram image]
Redux's global store makes encapsulation and dynamic plug-and-play behavior more difficult, but it is possible. There are actually many existing libraries for per-component-instance state and dynamic registration of reducers. (That said, the libraries I've seen thus far for component management are React libraries - you'd have to study some of those and reimplement things yourself for use with Angular.)
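A rough sketch of the usual reducer-injection idea, independent of any particular library (injectReducer/removeReducer are illustrative names, and appReducer is assumed to exist):

import { combineReducers, createStore } from 'redux'

const staticReducers = { app: appReducer }  // assumed static slice
const asyncReducers = {}

const createRootReducer = () =>
  combineReducers({ ...staticReducers, ...asyncReducers })

const store = createStore(createRootReducer())

// Called by a grid sub-component (search, sort, ...) when it mounts.
export function injectReducer(key, reducer) {
  asyncReducers[key] = reducer
  store.replaceReducer(createRootReducer())
}

// And when the component unmounts, it can drop its slice again.
export function removeReducer(key) {
  delete asyncReducers[key]
  store.replaceReducer(createRootReducer())
}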
When passing data between two elements that are very far away from each other in the component hierarchy, passing data through props can be tedious. In these use cases I've resorted to using Redux just because there is less to keep track of when there is a large number of components.
What I've done in one little project is use a closure to encapsulate state, export that variable, and consume it elsewhere. I feel this is an antipattern, but it does work.
The way it works is by declaring a variable that gets modified within a component. That same variable is then imported and consumed elsewhere.
Here is a small sample with what I am doing (just pretend there is a large component hierarchy): https://codesandbox.io/s/2R9RvYkN1
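To make the pattern concrete, here is a minimal sketch of that kind of module-scope "closure" (not the actual sandbox code):

// sharedState.js - the module scope acts as the closure.
let selectedItem = null

export function setSelectedItem(item) {
  selectedItem = item   // mutated from ComponentA's event handler
}

export function getSelectedItem() {
  // read by ComponentB whenever it happens to re-render -
  // nothing notifies it that the value changed.
  return selectedItem
}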
So my questions are: is there a better way to achieve the same results? Should we use a Flux implementation for these use cases? Is it ok to just pass props around through a large hierarchy of components?
As you stated yourself, redux solves this problem by providing an "App state" that's global to your app, which allows you to connect any component you want to that state.
Your "closure" is merely a poor man's Redux; it's also global state, only it lacks the features provided by Redux (unless you write them yourself).
Let's say CompA needs to re-render based on a click event in CompB - how do you do that automatically with a closure? You'd have to add listeners, check whether a relevant piece of state changed, and then re-render.
All of these things are done for free by Redux, so I don't see any added benefit (except for not using Redux, which can be a benefit in its own right).
If it's that important not to use Redux, this can be "fine", yet it is quite dangerous and I'd say it won't scale well.
I've been using Flux first and Redux later for a very long time, and I do like them, and I see their benefits, but one question keeps popping in my mind is:
Why do we decouple actions and reducers, adding extra indirection between the call that expresses the intent of changing the state (the action) and the actual way of changing the state (the reducer), in a way that makes it more difficult to provide static or runtime guarantees and error checking? Why not just use methods or functions that modify the state?
Methods or functions provide static guarantees (using TypeScript or Flow) and runtime guarantees (method/function not found, etc.), while an unhandled action raises no errors at all (either static or runtime); you just have to notice that the expected behavior is not happening.
Let me exemplify it a little better with our Theoretical State Container (TSC):
It's super simple
Think of it as React Component's state interface (setState, this.state), without the rendering part.
So the only thing you need is to trigger a re-render of your components when the state in our TSC changes, plus the ability to change that state, which in our case will be plain methods that modify it: fetchData, setError, setLoading, etc.
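Something along these lines - a minimal sketch of what I mean, not a real library:

class TSC {
  constructor(initialState) {
    this.state = initialState
    this.listeners = []
  }
  subscribe(listener) {
    this.listeners.push(listener)
  }
  setState(partial) {
    this.state = { ...this.state, ...partial }
    this.listeners.forEach(fn => fn(this.state))  // trigger re-renders
  }

  // Plain methods instead of actions + reducers:
  setLoading(loading) { this.setState({ loading }) }
  setError(error) { this.setState({ error, loading: false }) }
  async fetchData(url) {
    this.setLoading(true)
    try {
      const data = await (await fetch(url)).json()
      this.setState({ data, loading: false })
    } catch (e) {
      this.setError(e)
    }
  }
}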
What I see is that the actions and the reducers are a decoupling of the dynamic or static dispatch of code, so instead of calling myStateContainer.doSomethingAndUpdateState(...) you call actions.doSomethingAndUpdateState(...), and you let the whole flux/redux machinery connect that action to the actual modification of the state. This whole thing also brings the necessity of thunks, sagas and other middleware to handle more complex actions, instead of using just regular javascript control flows.
The main problem is that this decoupling requires you to write a lot of stuff just to achieve that decoupling:
- the interface of the action creator functions (arguments)
- action types
- action payloads
- the shape of your state
- how you update your state
Compare this to our theoretical state container (TSC):
- the interface of your methods
- the shape of your state
- how you update your state
So what am I missing here? What are the benefits of this decoupling?
This is very similar to this other question: Redux actions/reducers vs. directly setting state
And let me explain why the most voted answer to that question does not answer either my or the original question:
- Actions/Reducers let you ask the questions Who and How? This can be done with our TSC; it's just an implementation detail and has nothing to do with actions/reducers themselves.
- Actions/Reducers let you go back in time with your state: again, this is a matter of implementation details of the state container and can be achieved with our TSC.
- Etc: state change ordering, middleware, and anything else currently achieved with actions/reducers can be achieved with our TSC; it's just a matter of how it is implemented.
Thanks a lot!
Fran
One of the main reasons is that constraining state changes to be done via actions allows you to treat all state changes as depending only on the action and previous state, which simplifies thinking about what is going on in each action. The architecture "traps" any kind of interaction with the "real world" into the action creator functions. Therefore, state changes can be treated as transactions.
In your Theoretical State Container, state changes can happen unpredictably at any time and activate all kinds of side effects, which would make them much harder to reason about, and bugs much harder to find. The Flux architecture forces state changes to be treated as a stream of discrete transactions.
Another reason is to constrain the data flow in the code to happen in only one direction. If we allow arbitrary unconstrained state modifications, we might get state changes causing more state changes causing more state changes... This is why it is an anti-pattern to dispatch actions in a reducer. We want to know where each action is coming from instead of creating cascades of actions.
Flux was created to solve a problem at Facebook: When some interface code was triggered, that could lead to a cascade of nearly unpredictable side-effects each causing each other. The Flux architecture makes this impossible by making every state transition a transaction and data flow one-directional.
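As a rough illustration of that constraint (the action types and endpoint below are made up):

// The reducer is a pure function: the next state depends only on
// (previous state, action) - no I/O, no hidden inputs.
function itemsReducer(state = { items: [], loading: false }, action) {
  switch (action.type) {
    case 'ITEMS_REQUESTED':
      return { ...state, loading: true }
    case 'ITEMS_RECEIVED':
      return { ...state, items: action.payload, loading: false }
    default:
      return state
  }
}

// Anything touching the "real world" stays in the action creator / thunk.
const fetchItems = () => async dispatch => {
  dispatch({ type: 'ITEMS_REQUESTED' })
  const items = await (await fetch('/api/items')).json()  // assumed endpoint
  dispatch({ type: 'ITEMS_RECEIVED', payload: items })
}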
But if the boilerplate needed in order to do this bothers you, you might be happy to know that your "Theoretical State Container" more or less exists, although it's a bit more complicated than your example. It's called MobX.
By the way, I think you're being a bit too optimistic with the whole "it's an implementation detail" thing. I think if you tried to actually implement time-travel debugging for your Theoretical State Container, what you would end up with would actually be pretty similar to Redux.
I read about React being very fast. Recently, I wrote an app to test react against angular. Unfortunately I found react performing slower than angular.
http://shojib.github.io/ngJS/#/speedtest/react/1
Here is the source code for react. I'm very new to react. I am sure I'm doing something wrong with my react code here. I find it unusually slow.
https://jsbin.com/viviva/edit?js,output
See if any react experts can find the bottlenecks.
Update 1:
Removed the usage of context.
Better usage of setState.
Better usage of shouldComponentUpdate.
I still can't make it faster than angular or even close to it.
https://jsbin.com/viviva/96/edit?js,output
Update 2:
I made a big mistake by creating 2d arrays in the cell component. So I moved them to a mixin. Now I believe that react is faster than angular in DOM manipulations.
https://jsbin.com/nacebog/edit?html,js,output
Update 3:
My mistake again - the change that made it faster was wrong; after analysis, it was rendering incorrectly. I'd appreciate it if anyone can help me understand whether this could be any faster. I know react is not good at dealing with large arrays. In this scenario, it's dealing with four 3d arrays. https://jsbin.com/viviva/100/edit?html,css,js
React's performance is exaggerated. It's fast enough for most real use cases. Rendering large lists is its main weakness.
Also, this always returns true (unless the update is triggered by setState), so you need to shallowly compare the props:
shouldComponentUpdate: function(nextProps, nextState) {
  return this.props !== nextProps;
}
And you should be using props in the places you're using context.
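Something along these lines instead - a sketch using placeholder prop names (rows/cols), written in the same createClass style as the snippet above:

shouldComponentUpdate: function(nextProps) {
  // this.props !== nextProps is true on almost every parent render,
  // because a fresh props object is created each time. Compare the
  // values you actually depend on instead.
  return nextProps.rows !== this.props.rows ||
         nextProps.cols !== this.props.cols;
}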
This is a very contrived example in my opinion.
As stated above, you are using context incorrectly.
There is no need for a mixin: the number of columns and rows can and should be passed down as props. create2DArray, getRandomNumber should be declared at the top of your script as simple global functions.
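Roughly like this - a sketch with placeholder helper bodies, not the actual code from the JSBin:

// Plain global helpers at the top of the script (placeholder implementations).
function getRandomNumber(max) {
  return Math.floor(Math.random() * max);
}

function create2DArray(rows, cols) {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => getRandomNumber(100)));
}

// The grid receives its dimensions and data as plain props.
ReactDOM.render(
  <Grid rows={10} cols={10} data={create2DArray(10, 10)} />,
  document.getElementById('root')
);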
You are setting the state incorrectly. You should never change the state like this: this.state.some = 'whatever'. You have to use setState:
this.setState({ some: 'whatever' });
You're using context incorrectly, the documentation states:
In particular, think twice before using it to "save typing" and using
it instead of passing explicit props.
It's mostly beneficial for passing context objects like the currently logged-in user or perhaps a Redux store. Your app is using context where it should be using props.
You'll need to ensure that shouldComponentUpdate is a meaningful predicate. See https://facebook.github.io/react/docs/advanced-performance.html
If you correct those two things, it'll be a better measure of React's performance compared to Angular. At the moment you've got a well-tuned Ferrari running against a Model T Ford (held together with chewing gum).