Accessing global state from a slice - reactjs

I'm using the @reduxjs/toolkit createSlice feature for the whole application.
I often need to access other slices' state. One common example is the "one to many" relationship, where children are stored in normalized form under a "byParentId" key in their respective state. So, when the user makes some "parent" active, almost every selector / reducer / saga effect in the "child" needs access to the "parent" state's "active" field. Initially, I simply added an "activeParent" field to the action while combining reducers. Later on, as more such cases appeared, I ended up with a "global" field on every action containing the whole state, instead of crafting data preparation in the combineReducers function.
This also improved performance in redux-saga, where yield select(selector) calls were replaced with synchronous selector(global) calls.
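To make the shape concrete, here is a rough sketch of what I mean (all names are illustrative, and the whole-state attachment is shown as a middleware, though it could live elsewhere):

```js
// Illustrative state shape: the parent slice tracks which parent is active,
// while the child slice stores children normalized under a byParentId key.
const exampleState = {
  parents: { active: 'p1', byId: { p1: { id: 'p1', name: 'First' } } },
  children: { byParentId: { p1: [{ id: 'c1', name: 'Child of p1' }] } },
};

// The pattern in question: attach the whole root state to every action
// before it reaches the reducers.
const attachGlobalState = (store) => (next) => (action) =>
  next({ ...action, global: store.getState() });
```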
Here @gaearon claims this approach to be an anti-pattern, which can usually be solved by:
- Removing that logic from the reducer and moving it to a selector.
Good, when possible.
- Passing additional information into the action.
Sometimes this is good, but it often introduces an unnecessary performance hit: to pass additional information into an action, it has to be retrieved by useSelector in a component which may otherwise not need it. Read: more redraws.
- Letting the view code perform two actions.
Again, good, but not always. It requires putting sequences of actions in multiple components instead of having simple logic where one action results in another.
One mentioned problem:
reducers become coupled to each other’s state shape which complicates any refactoring or change in the state structure.
can be easily avoided by using selectors instead of accessing "foreign" state directly.
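For example, a rough sketch of what I mean (the selector name and action type are made up):

```js
// A selector owned by the parent slice; the child reducer calls it instead
// of reaching into action.global.parents.active directly.
const selectActiveParent = (state) => state.parents.active;

// The child reducer now depends only on the selector, not on the parent
// slice's internal shape.
function childrenReducer(state = { byParentId: {} }, action) {
  switch (action.type) {
    case 'children/added': {
      const parentId = selectActiveParent(action.global);
      const existing = state.byParentId[parentId] || [];
      return {
        ...state,
        byParentId: { ...state.byParentId, [parentId]: [...existing, action.payload] },
      };
    }
    default:
      return state;
  }
}
```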
Is there a real reason for this approach to be an anti-pattern?

Putting the entire root state into an action is definitely an anti-pattern and should be avoided.
You may sometimes want to put additional data from the state into specific actions if the reducer logic needs it.
Beyond that, I'd need to see more specific details on what you're trying to do and what the code looks like to offer additional advice.
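As a rough sketch of that middle-ground approach (all names here are invented for illustration), a thunk can read just the piece of state the reducer needs and include it in that one action:

```js
// Instead of attaching the entire root state to every action, read the one
// value the child reducer needs and put it in this specific action.
function addChild(child) {
  return (dispatch, getState) => {
    const activeParentId = getState().parents.active;
    dispatch({ type: 'children/added', payload: { child, activeParentId } });
  };
}
```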

Related

In Redux, if all reducers are called when an action fires, what's preventing possible naming conflicts among actions?

Each big component gets its own list of actions. The separation of files implies that the actions are isolated. But from my understanding, if the type of an action matches the type of another action in a completely separate file meant for a different reducer, it will still cause a problem.
EDIT:
Say I have two sections in the app. One has a reducer for the action SET_SCROLL, and the other section has one too. If I update the scroll position in section 2 by firing SET_SCROLL, this would cause section 1's state to change as well. Now imagine hundreds of actions; how do you prevent naming conflicts? I understand that in Redux you can't associate a set of actions with a certain reducer.
Yes, this is why you should be careful when defining action types. The "Reusing Reducer Logic" docs page gives examples of how this can be a problem when you want to reuse a given reducer in more than one place, and shows some ways around that.
We specifically recommend defining action types as "domain/someAction" to help avoid clashes.
Note that it's also possible (and recommended) to have many different parts of the reducer logic all independently respond to the same dispatched action.
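A minimal sketch of that convention (slice and action names invented):

```js
// Each section prefixes its action types with its own "domain", so the two
// scroll actions can no longer collide.
function sectionOneReducer(state = { scroll: 0 }, action) {
  switch (action.type) {
    case 'sectionOne/setScroll':
      return { ...state, scroll: action.payload };
    default:
      return state;
  }
}

function sectionTwoReducer(state = { scroll: 0 }, action) {
  switch (action.type) {
    case 'sectionTwo/setScroll':
      return { ...state, scroll: action.payload };
    // Both reducers may still deliberately respond to a shared action:
    case 'app/reset':
      return { scroll: 0 };
    default:
      return state;
  }
}
```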

Redux. Putting most of the logic in action creators to decrease reducer overhead

Question about Redux data flow.
Let's talk about a huge enterprise application: dozens of modules, a complex hierarchy of reducers, hundreds of action types.
Simple flow:
A control dispatches an action (for example, an input being typed in). This action goes through every reducer and hundreds of switch cases, and the new state is merged from all of the reducers with minimal changes. I think we have a huge unnecessary overhead in this scenario.
What options can we use to decrease the overhead?
1. Use isolated high-level sub-apps with their own providers.
This option will decrease the overhead. But if we need any common features in the sub-apps, like account info/notifications/etc., we have to duplicate them.
2. Use asyncReducers for code-splitting.
This option will also decrease the overhead, but it is not recommended.
3. Make reducers with action filters. In this case, we add additional information to each action saying which reducer should process it.
This option also decreases the number of switches and the complexity of merging the new state.
But in option 3 I can't understand one thing.
We have a control which is connected to a concrete part of the state.
99% of actions are processed by a single reducer.
Each action in the reducer is processed by a "case function", which is moved to a separate logic module.
We have an action creator which knows the concrete reducer that processes this action, and which has access to the global state.
Why should the action creator dispatch a global action, which will then be filtered by a cascade of reducers and switched through dozens of cases, ending in a suboptimal merge of the new global state?
Why can't we call the case function in the action creator, compute the new global state in an optimal way, and dispatch it as the payload of a "SET_GLOBAL_STATE" action?
I understand that it is an anti-pattern, but I can't understand what we lose in this case.
I'll be glad if someone can explain to me what is wrong.
Several thoughts here.
First, per the Redux FAQ entry on splitting logic between action creators and reducers, it's up to you where to put the bulk of your logic. If you prefer to do the actual calculations for what a piece of the new state should be in an action creator, and just have a given slice reducer do return {...state, ...action.payload}, you can.
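As a rough sketch of that split (names invented; a thunk is used here for access to getState):

```js
// The action creator does the real calculation...
function addTodo(text) {
  return (dispatch, getState) => {
    const { items } = getState().todos;
    const newItems = items.concat({ id: items.length, text, done: false });
    dispatch({ type: 'todos/set', payload: { items: newItems } });
  };
}

// ...and the slice reducer just merges the precomputed payload.
function todosReducer(state = { items: [] }, action) {
  switch (action.type) {
    case 'todos/set':
      return { ...state, ...action.payload };
    default:
      return state;
  }
}
```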
Second, per the Redux FAQ entry on the performance of calling reducer functions, the cost of calling many reducer functions will generally be very minimal, since most of them will just check the action type and return their existing slice of state. So, the reducer logic will rarely be a performance bottleneck.
Third: yes, it's entirely possible to calculate the entire new app state in an action creator, and have a single generic "SET_GLOBAL_STATE" action. This will allow use of Redux's middleware and time travel debugging, etc. However, this also throws away many of the possible benefits and use cases for Redux. I discussed a lot of the intended purpose for Redux in my blog posts The Tao of Redux, Part 1 - Implementation and Intent and The Tao of Redux, Part 2 - Practice and Philosophy.
To summarize the reasons why we discourage the idea you're describing: Redux is intended to make it easy to trace when, why, and how a certain piece of state was updated. While it's up to you to decide how granular you want your actions to be, ideally they should be semantically meaningful. That way, when you read the action history log, you can actually understand the sequence of actions that were dispatched. Also, if you search for a given action type, you should be able to quickly narrow down exactly where in the codebase that action is used. If you only have a single "SET_GLOBAL_STATE" action, it will be almost impossible to determine what part of the code caused a given state update.

When and why should I combine reducers in Redux?

I have watched a few videos about reducers, and they all claimed that I should use only one reducer and state for my whole project, combining every reducer into one big one.
Now my question is: why would I do this? Imagine I have a big application and I combine all reducers. My combined reducer would be huge, and a single state change would take quite a while, since we need to check every single reducer slice.
Should I really just use one reducer for a bigger project? Why should we combine them, instead of creating multiple stores, and what about the performance?
As your app grows more complex, you'll want to split your reducing function into separate functions, each managing independent parts of the state.
(from the combineReducers docs)
Thanks a lot.
Per the Redux FAQ entry on the performance of "calling all reducers":
It's important to note that a Redux store really only has a single reducer function. The store passes the current state and dispatched action to that one reducer function, and lets the reducer handle things appropriately.
Trying to handle every possible action in a single function does not scale well, simply in terms of function size and readability, so it makes sense to split the actual work into separate functions that can be called by the top-level reducer.
However, even if you happen to have many different reducer functions composed together, and even with deeply nested state, reducer speed is unlikely to be a problem. JavaScript engines are capable of running a very large number of function calls per second, and most of your reducers are probably just using a switch statement and returning the existing state by default in response to most actions.
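A quick illustration of that composition (slice names invented):

```js
import { combineReducers, createStore } from 'redux';

// Each slice reducer ignores actions it doesn't care about by falling
// through to `return state`, which is an extremely cheap operation.
function usersReducer(state = [], action) {
  switch (action.type) {
    case 'users/added':
      return [...state, action.payload];
    default:
      return state;
  }
}

function postsReducer(state = [], action) {
  switch (action.type) {
    case 'posts/added':
      return [...state, action.payload];
    default:
      return state;
  }
}

// One root reducer, one store; combineReducers calls each slice reducer
// in turn and reassembles the results into a single state object.
const rootReducer = combineReducers({ users: usersReducer, posts: postsReducer });
const store = createStore(rootReducer);
```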
Also, from the FAQ entry on whether you should create multiple stores:
It is possible to create multiple distinct Redux stores in a page, but the intended pattern is to have only a single store. Having a single store enables using the Redux DevTools, makes persisting and rehydrating data simpler, and simplifies the subscription logic.
However, creating new stores shouldn't be your first instinct, especially if you come from a Flux background. Try reducer composition first, and only use multiple stores if it doesn't solve your problem.

Is it bad practice to do calculations and data tweaking in the saga in React?

I have a saga which runs once every 10 seconds on a POLL action to update the current state in my GUI.
When a POLL happens, I need to make a few calls to walk down the REST interface to find the components I care about. There will be a total of 1-5 components, and for each of these I need to make a separate REST call for the Foo and Bar elements of the component.
Then at some point I need to do some summations, combining the Foo and Bar data together into the structure expected by my table for listing components, calculating some totals across all components in my dashboard, etc. None of the work is CPU intensive, but it adds up to a decent bit of code, since I have so many things that need tweaking.
Currently I'm doing all of this in the saga, but I'm not sure if this is considered bad practice. I feel like reducers are the general "go to" place for data tweaking, but it feels odd dispatching an action with such a large payload (all the responses from every call in the saga), since much of each REST response is data I don't care about. I also like doing all the processing in the saga so that at the end of everything I can decide whether to dispatch an error action to show an error to the user, or a success action which clears any previous errors; part of the decision about whether I want to clear the error requires further processing of the data.
My only concern is that the generator is getting rather large, with lots of helper methods that feel a little out of place in a saga module (they need to be moved to a utils module no matter what, I think). The processing isn't too expensive, and since I'm using generators, I don't think it will have a noticeable effect on the saga's "threading". Still, if there is a recommended best practice, I want to stick to that. Am I breaking from standard practice by doing all of my data tweaking in the saga and sending the reducer a pre-formatted object for it to store into the state without any other processing?
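Roughly what the saga looks like, heavily simplified (the endpoints and helpers are stand-ins for my real ones):

```js
import { all, call, put, takeEvery } from 'redux-saga/effects';

// Stand-ins for the real REST calls.
const api = {
  fetchComponents: () => fetch('/api/components').then((r) => r.json()),
  fetchFoo: (id) => fetch(`/api/components/${id}/foo`).then((r) => r.json()),
  fetchBar: (id) => fetch(`/api/components/${id}/bar`).then((r) => r.json()),
};

// One of the helper methods: summations and reshaping into the row format
// the table expects.
function combineFooBar(foo, bar) {
  return { ...foo, ...bar, total: foo.count + bar.count };
}

function* pollSaga() {
  try {
    const components = yield call(api.fetchComponents);
    const rows = [];
    for (const c of components) {
      const [foo, bar] = yield all([call(api.fetchFoo, c.id), call(api.fetchBar, c.id)]);
      rows.push(combineFooBar(foo, bar));
    }
    // The reducer just stores this pre-formatted payload as-is.
    yield put({ type: 'dashboard/pollSucceeded', payload: rows });
  } catch (err) {
    yield put({ type: 'dashboard/pollFailed', error: err.message });
  }
}

export function* rootSaga() {
  yield takeEvery('POLL', pollSaga);
}
```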
This is really a specific case of a common question that is addressed by the Redux FAQ on "where should my business logic live?". Quoting that answer:
Now, the problem is what to put in the action creator and what in the reducer, the choice between fat and thin action objects. If you put all the logic in the action creator, you end up with fat action objects that basically declare the updates to the state. Reducers become pure, dumb, add-this, remove that, update these functions. They will be easy to compose. But not much of your business logic will be there. If you put more logic in the reducer, you end up with nice, thin action objects, most of your data logic in one place, but your reducers are harder to compose since you might need info from other branches. You end up with large reducers or reducers that take additional arguments from higher up in the state.
There's nothing wrong with having logic on the "action creation" side (whether it be in components, thunks, sagas, or middleware) that does a lot of work to prepare and format data, and having the reducer simply store what was included in the action. On the flip side, having more logic on the reducer side can mean that time-travel debugging will re-run more of your actual code, giving you more chances to edit and retry behavior.
Overall, it sounds like what you're doing is perfectly reasonable.

Why do we decouple actions and reducers in the Flux/Redux architecture?

I've been using Flux first and Redux later for a very long time, and I do like them, and I see their benefits, but one question keeps popping up in my mind:
Why do we decouple actions and reducers and add extra indirection between the call that expresses the intent of changing the state (the action) and the actual way of changing the state (the reducer), in such a way that it is more difficult to provide static or runtime guarantees and error checking? Why not just use methods or functions that modify the state?
Methods or functions provide static guarantees (using TypeScript or Flow) and runtime guarantees (method/function not found, etc.), while an unhandled action raises no errors at all (either static or runtime); you just have to notice that the expected behavior is not happening.
Let me exemplify it a little better with our Theoretical State Container (TSC):
It's super simple.
Think of it as a React component's state interface (setState, this.state), without the rendering part.
So, the only thing you need is to trigger a re-render of your components when the state in our TSC changes, plus the ability to change that state, which in our case will be plain methods that modify it: fetchData, setError, setLoading, etc.
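Here's a minimal sketch of what I have in mind (a rough reconstruction, not production code):

```js
// A tiny state container: subscribers are notified on every setState, and
// state is changed through plain, directly-callable methods.
class TSC {
  constructor(initialState) {
    this.state = initialState;
    this.listeners = [];
  }
  subscribe(fn) {
    this.listeners.push(fn);
  }
  setState(partial) {
    this.state = { ...this.state, ...partial };
    this.listeners.forEach((fn) => fn(this.state));
  }
  // Plain methods instead of actions + reducers:
  setLoading(loading) {
    this.setState({ loading });
  }
  setError(error) {
    this.setState({ error, loading: false });
  }
  async fetchData(url) {
    this.setLoading(true);
    try {
      const data = await (await fetch(url)).json();
      this.setState({ data, loading: false });
    } catch (e) {
      this.setError(e.message);
    }
  }
}
```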
What I see is that the actions and the reducers are a decoupling of the dynamic or static dispatch of code: instead of calling myStateContainer.doSomethingAndUpdateState(...) you call actions.doSomethingAndUpdateState(...), and you let the whole Flux/Redux machinery connect that action to the actual modification of the state. This whole thing also brings with it the necessity of thunks, sagas, and other middleware to handle more complex actions, instead of using just regular JavaScript control flow.
The main problem is that this decoupling requires you to write a lot of stuff just to achieve that decoupling:
- the interface of the action creator functions (arguments)
- action types
- action payloads
- the shape of your state
- how you update your state
Compare this to our theoretical state container (TSC):
- the interface of your methods
- the shape of your state
- how you update your state
So what am I missing here? What are the benefits of this decoupling?
This is very similar to this other question: Redux actions/reducers vs. directly setting state
And let me explain why the most voted answer to that question answers neither my question nor the original one:
- Actions/reducers let you ask the questions Who? and How? This can be done with our TSC; it's just an implementation detail and has nothing to do with actions/reducers themselves.
- Actions/reducers let you go back in time with your state: again, this is a matter of the implementation details of the state container and can be achieved with our TSC.
- Etc.: state change ordering, middleware, and anything else currently achieved with actions/reducers can be achieved with our TSC; it's just a matter of how it is implemented.
Thanks a lot!
Fran
One of the main reasons is that constraining state changes to be done via actions allows you to treat all state changes as depending only on the action and previous state, which simplifies thinking about what is going on in each action. The architecture "traps" any kind of interaction with the "real world" into the action creator functions. Therefore, state changes can be treated as transactions.
In your Theoretical State Container, state changes can happen unpredictably at any time and activate all kinds of side effects, which would make them much harder to reason about, and bugs much harder to find. The Flux architecture forces state changes to be treated as a stream of discrete transactions.
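As a tiny illustration of the "transaction" framing (reducer and action names invented):

```js
// A pure reducer: the next state depends only on (previousState, action).
function cartReducer(state = { items: [], checkedOut: false }, action) {
  switch (action.type) {
    case 'cart/itemAdded':
      return { ...state, items: [...state.items, action.payload] };
    case 'cart/checkedOut':
      return { ...state, checkedOut: true };
    default:
      return state;
  }
}

// Each dispatched action is a discrete, serializable transaction, so the
// whole history can be logged, replayed, or rewound:
const history = [
  { type: 'cart/itemAdded', payload: { id: 42 } },
  { type: 'cart/checkedOut' },
];
const finalState = history.reduce(cartReducer, undefined);
// -> { items: [{ id: 42 }], checkedOut: true }
```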
Another reason is to constrain the data flow in the code to happen in only one direction. If we allow arbitrary unconstrained state modifications, we might get state changes causing more state changes causing more state changes... This is why it is an anti-pattern to dispatch actions in a reducer. We want to know where each action is coming from instead of creating cascades of actions.
Flux was created to solve a problem at Facebook: when some interface code was triggered, it could lead to a cascade of nearly unpredictable side effects, each causing another. The Flux architecture makes this impossible by making every state transition a transaction and the data flow one-directional.
But if the boilerplate needed in order to do this bothers you, you might be happy to know that your "Theoretical State Container" more or less exists, although it's a bit more complicated than your example. It's called MobX.
By the way, I think you're being a bit too optimistic with the whole "it's an implementation detail" thing. I think if you tried to actually implement time-travel debugging for your Theoretical State Container, what you would end up with would actually be pretty similar to Redux.
