Redux + Reselect: Preventing selector recalculation when GPS data updates - reactjs

Recently, I've been working on a very large application using React+Redux with Reselect to memoize data and prevent unnecessary re-renders, and I've hit a specific problem that I can't seem to get past.
In Redux state, I am storing a very large amount of data as an object indexed by id. These objects all have a property on them (let's call it gps) that updates with realtime gps coordinates.
This data is being used in two ways. The first is on a map, where the GPS data is relevant. The second is in the UI, where the GPS data is not relevant. Any time the GPS data is updated on any of the objects, that object is streamed in and replaced in the Redux store, which updates the reference of that object in my selector.
Example Redux Store:
data: {
  dogs: {
    1: {id: 1, name: "name1", gps: [123, 234]},
    2: {id: 2, name: "name2", gps: [123, 234]},
    3: {id: 3, name: "name3", gps: [123, 234]},
    4: {id: 4, name: "name4", gps: [123, 234]}
  }
}
The data in, for example, state.data.dogs[1].gps might update 3 to 5 times per second. This can occur in any of the objects in state.data.dogs.
Selectors are written as follows:
const dogDataSelector = state => state.data.dogs;
const animalsSelector = createSelector(
  dogDataSelector,
  (dogs) => {
    return Object.keys(dogs).map(id => {
      return dogs[id];
    });
  }
);
Now, this code works correctly when I want all the dogs as they update, GPS included.
What I can't seem to figure out would be how to write a selector specifically for the UI that excludes the GPS updates. 99 times out of 100, when a dog updates, it is the GPS updating. The UI doesn't care about the GPS at all, but due to that, the selector sends new data forward which causes the UI to rerender.
I know it is possible to write a new stream that only pushes changes from the DB if the id or name of a dog changes, but this is a solution I am hoping to stay away from, as it will cause a lot of the same data to be stored multiple times in the store.
I have tried the following:
creating a selector that returns a pared-down version of the dogs, with only the id and the name, stored in an object by key. This selector is used as an input selector later, but it still produces unnecessary new results.
creating a selector that only returns an array of dog ids and passing this to the UI. A deep equality check prevents re-renders, and the array of ids is used to pluck specific dog objects from state. This works, but it is not an ideal solution and feels like a workaround.
If anyone needs any more information or clarification, don't hesitate to ask.
Update: One of the main reasons this is an issue is that any GPS update to any dog causes dogDataSelector to return a new reference. This, in turn, causes animalsSelector to fire an update and return a new value.

The standard approach for immutable data updates requires that if a nested field is updated, all of its ancestors in the tree should be copied and updated as well. In your example, an update to dog[3].gps would require new references for the gps array, dog[3], dogs, and data. Because of that, with this data structure, any updates to a gps field must result in new references all the way up the chain, and so the UI would see the new references and assume it needs to re-render.
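To see why, here is what an immutable update to one gps field looks like when written out with plain object spreads (illustrative code):
// Updating dogs[3].gps immutably creates a new reference at every level:
const next = {
  ...state,
  data: {
    ...state.data,
    dogs: {
      ...state.data.dogs,
      3: {...state.data.dogs[3], gps: newGps} // new dog[3], dogs, data, state
    }
  }
};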
A couple possible suggestions:
Write a selector that looks up a dog entry by its ID, strips out the gps field, and then does some kind of shallow equality check against the prior value to see if any of the non-gps fields have actually changed, so that a new "dog entry minus gps" object is only returned when one of those fields is different (see the first sketch below).
Store the GPS values in a separate lookup table keyed by ID, rather than nested inside the dog entries themselves, so that updates to a GPS array don't result in new dog entry references (see the second sketch below).
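A minimal sketch of the first suggestion, not tied to a particular Reselect version; makeDogUiSelector, stripGps, and shallowEqual are illustrative helpers, not existing APIs:
const shallowEqual = (a, b) => {
  const ka = Object.keys(a), kb = Object.keys(b);
  return ka.length === kb.length && ka.every(k => a[k] === b[k]);
};

// Drop the gps field from a dog entry.
const stripGps = ({gps, ...rest}) => rest;

// Per-dog selector factory: returns a new reference only when a
// non-gps field of that dog actually changes.
const makeDogUiSelector = (id) => {
  let last;
  return (state) => {
    const next = stripGps(state.data.dogs[id]);
    if (last && shallowEqual(last, next)) return last;
    last = next;
    return next;
  };
};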
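And a sketch of the second suggestion, using a hypothetical gpsById lookup table so gps updates never touch the dog entries:
data: {
  dogs: {
    1: {id: 1, name: "name1"},
    2: {id: 2, name: "name2"}
  },
  gpsById: {
    1: [123, 234],
    2: [123, 234]
  }
}
With this shape a gps update replaces only data.gpsById and the affected entry, so dogDataSelector keeps returning the same reference and UI selectors built on it never recompute; the map can read positions through a separate selector over gpsById.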

NOTE: THIS WON'T WORK FOR REASONS DETAILED IN THE COMMENTS
If you don't need the gps data, then surely you can just strip it off, and reselect will ignore it
Something like:
const dogDataSelector = state => state.data.dogs;
const dogsArraySelector = createSelector(
  dogDataSelector,
  (dogs) => Object.keys(dogs).map(id => dogs[id])
);
const animalsSelector = createSelector(
  dogsArraySelector,
  // The problem: dogsArraySelector returns a new array on every gps
  // update, so this .map re-runs and returns brand-new {id, name}
  // objects each time, and the UI still sees new references.
  (dogsArray) => dogsArray.map(({id, name}) => ({id, name}))
);

Related

Improvements to Recently Viewed Component in React

I have this small app I created using a REST Countries API.
https://rest-country-react.netlify.app/
If you click on one country card then it is displayed under the "Recently Viewed" header. So far it works fine, but I wanna tune it a little bit. What I thought I'd do:
#1 Add a limit of three recently viewed countries, so basically if the user clicks on 4,5,6 countries, only the three most recent clicked countries are displayed.
#2 Visited countries are currently sorted in an "oldest" to "newest" order. I wanna reverse that so the newest gets the first spot, then the second newest, then the third and so on.
I am stuck because I am not sure how to implement these tweaks. For the first one I thought I'd filter the state array before mapping it in the component, saying something like... if the index > 2, filter out the element.
But for the second, I haven't found a solution yet. Maybe instead of using the concat() method, I should use unshift()? From what I read in the React documentation, it's not advised to directly edit the state or its arrays, so I don't know what to do.
onCountryClick(country) {
  const uniqueRecent = [
    ...new Set(this.state.recentlyViewed.concat(country)),
  ];
  this.setState({
    // ... other state updates here
    recentlyViewed: uniqueRecent
  });
}
There are actually multiple solutions, but let's take one that is clean and understandable:
onCountryClick(country) {
  const { recentlyViewed } = this.state;
  const newState = {}; // add your other updates here
  if (!recentlyViewed.includes(country)) {
    // first take the first two items as a new array
    newState.recentlyViewed = recentlyViewed.slice(0, 2);
    // add country to the beginning of the new array
    newState.recentlyViewed.unshift(country);
  }
  this.setState(newState);
}
First we check if country already exists in the recentlyViewed array, and if not, we continue.
We must use a new array when updating the state, so simply calling unshift() will not work for us, as it modifies the original array and doesn't return a new one. Instead, we first call .slice(0, 2), which solves two things for us: it takes a portion of the original array (the items at index 0 and 1) and returns a new array containing them. Great! Now we can easily use unshift() to add country to the beginning of the new array, then simply update the state.
With this solution you always get a new array with a maximum of 3 countries, where the first item is the newest.
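As a side note (not part of the answer above), the same "newest first, max three" behavior can be written in one step with spread, filter, and slice; this variant also moves an already-viewed country back to the front instead of ignoring the click:
onCountryClick(country) {
  this.setState(({ recentlyViewed }) => ({
    // Put the clicked country first, drop any earlier occurrence of it,
    // and cap the list at three entries.
    recentlyViewed: [
      country,
      ...recentlyViewed.filter((c) => c !== country),
    ].slice(0, 3),
  }));
}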

Rendering optimization using nested state in React

I have a React app where I believe (though I'm not certain) that "nested state" causes me delays.
Here's the thing:
I keep my state in a variable called dataset.
dataset is an array of objects like this:
(5) [{…}, {…}, {…}, {…}, {…}]
and each object has the following structure:
{id: '5', name: 'Bob', url: 'http://example.com', paid: 'yes'}
Finally my render method displays the data in a table:
...
{dataset.map((entry) => (
...
<tr id={entry.id}><td>{entry.name} has paid: {entry.paid}</td></tr>
...
))}
It all seems to be working fine, but whenever I need to update the state, because presumably user number 5 didn't pay this month, I do this:
// copy current state
let new_dataset = this.state.dataset.slice();
// modify the copy
new_dataset[5].paid = !this.state.dataset[5].paid;
// replace old state with the modified copy
this.setState({
  dataset: new_dataset
});
However here's the problem:
Will React update all 5 rows or just the row holding the object I modified?
Wouldn't it be a waste to have react update 10000 rows just for a small change in one row? What's the best practice for such scenarios?
Thanks in advance.
P.S. I have already looked at how to Avoid Reconciliation but I'm not sure how to apply in this case.
Will React update all 5 rows or just the row holding the object I modified?
It will only update the DOM with what's been changed; however, it will still iterate through all the items in your dataset array. The iteration, as well as the reconciliation process itself, takes time if there are many items in your array.
So the problem here isn't really the implementation per se - it's a very common approach for small to medium-sized arrays. However, if you indeed have a large number of entries in your array, you may notice performance hits, in which case you might want to find a different implementation.
That said, I did find a minor error in your code:
You correctly create a copy of your array with this.state.dataset.slice(); however, you are not creating a copy of the object that you are mutating, so essentially you are mutating the state directly. Instead, do:
// copy current state
let new_dataset = this.state.dataset.slice();
// copy the object
let obj = Object.assign({}, new_dataset[5]);
// modify the copy
obj.paid = !obj.paid;
// replace the array item with the new object
new_dataset[5] = obj;
// then replace old state with the modified copy as before
this.setState({
  dataset: new_dataset
});
Will React update all 5 rows or just the row holding the object I modified?
It will update only what has been changed.
That is the purpose of reconciliation; I am not sure why you were looking for ways to avoid it. However, in your case you should use keys.
Will React update all 5 rows or just the row holding the object I modified?
No, React doesn't render all 5 rows again. React applies a number of optimizations to avoid re-rendering unchanged content; one such optimization is the key attribute. React also runs a diffing algorithm so that only the changes are rendered to the DOM.
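To make the keys advice concrete, here is a sketch (not from the answers above; React.memo requires React 16.6+): give each row a stable key and make the row a pure component, so rows whose entry reference is unchanged skip re-rendering entirely:
// Row re-renders only when its entry prop reference changes.
const Row = React.memo(({ entry }) => (
  <tr id={entry.id}>
    <td>{entry.name} has paid: {entry.paid}</td>
  </tr>
));

// In the table body: a stable key lets React match rows across renders.
{dataset.map((entry) => (
  <Row key={entry.id} entry={entry} />
))}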

Deeply updating React state (array) without Immutable, any disadvantages?

I know using Immutable is a great way to deeply update React state, but I was wondering if there are any drawbacks I'm not seeing with this approach:
Assuming this.state.members has the shape Array<MemberType> where MemberType is { userId: String, role: String }.
If the user changes a user's role, the following method is executed:
changeMemberRole = (userId, event, key, value) => {
  const memberIndex = _findIndex(this.state.members,
    (member) => member.userId === userId);
  if (memberIndex >= 0) {
    const newMembers = [...this.state.members];
    newMembers[memberIndex].role = value;
    this.setState({ members: newMembers });
  }
};
Would there be any advantage to replacing this with Immutable's setIn, other than potentially more terse syntax?
The difference between using Immutable.js or not is, of course, immutability¹ :)
When you declare const newMembers = [...this.state.members], you're copying an array of references: the array itself is indeed new (replacing a direct child by index like 0, 1, 2 is not reflected in the original), but it is filled with the same object references, so inner/deep changes are shared. This is called a shallow copy.
newMembers are not so new
Therefore any change to a newMembers element is also made to the corresponding this.state.members element. This is fine for your example; no real advantage so far.
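A quick illustration of the shared references (illustrative values):
const members = [{ userId: 'a', role: 'admin' }];
const copy = [...members];    // a brand-new array...
copy[0].role = 'viewer';      // ...holding the same inner object
console.log(members[0].role); // 'viewer': the original object was mutated too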
So, why immutability?
Its true benefits are not easily observed in small snippets because it's more about the mindset. Taken from the Immutable.js homepage:
Much of what makes application development difficult is tracking mutation and maintaining state. Developing with immutable data encourages you to think differently about how data flows through your application.
Immutability brings many of the functional paradigm benefits such as avoiding side effects or race conditions since you think of variables as values instead of objects, making it easier to understand their scope and lifecycle and thus minimizing bugs.
One specific advantage for React is being able to safely check for state changes in shouldComponentUpdate. Consider what happens when every update produces a fresh object:
// assume this.props.value is { foo: 'bar' }
// assume nextProps.value is { foo: 'bar' },
// but this reference is different to this.props.value
this.props.value !== nextProps.value; // true
When working with objects instead of values, nextProps.value and this.props.value can be distinct references even though their contents are equal, so a reference check (short of a deep comparison) will report a change and trigger a re-render, which at scale could be really expensive.
¹Unless you're simulating your own immutability, for which I trust Immutable.js more.
You're not copying the object that holds role, thus any of your components taking the role as a prop (if any) cannot benefit from pure render optimization (overriding shouldComponentUpdate and detecting whether props have actually changed).
But since you can make that copy without Immutable.js, there is no effective difference, except that you have to type more (and thus have more opportunities to make a mistake), which is itself a huge drawback that reduces your productivity.
From the setIn docs:
Returns a new Map having set value at this keyPath. If any keys in keyPath do not exist, a new immutable Map will be created at that key.
This is probably not what you are looking for, since you may not want to insert a new member with the given role if it does not already exist. This comes down to whether you are able to control the userId argument passed into the function and verify that it exists beforehand.
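For comparison, a sketch of what setIn looks like when the members are stored as an Immutable.js List (illustrative values):
const { fromJS } = require('immutable');

const members = fromJS([{ userId: 'a', role: 'admin' }]);
// setIn returns a new List; the original is left untouched.
const updated = members.setIn([0, 'role'], 'viewer');
members.getIn([0, 'role']); // 'admin'
updated.getIn([0, 'role']); // 'viewer'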
This solution is fine. You can replace it with update instead, if you want to.
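For reference, a plain-JavaScript version of changeMemberRole that also copies the changed member (a sketch of the fix the first answer points at):
changeMemberRole = (userId, event, key, value) => {
  this.setState(({ members }) => ({
    members: members.map((member) =>
      member.userId === userId
        ? { ...member, role: value } // new object for the changed member
        : member // unchanged members keep their reference
    ),
  }));
};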

How best to store a number in google realtime model, and get atomic change events?

Sounds pretty simple, however...
This number holds an enumerated type, and should be a field within a custom realtime object. Here's its declaration in the custom object registration routine:
MyRTObjectType.prototype.myEnumeratedType =
  gapi.drive.realtime.custom.collaborativeField('myEnumeratedType');
I can store it in the model as a simple javascript number, and initialize it like this:
function initializeMyRTObjectType() {
  // other fields here
  this.myEnumeratedType = 0;
}
...but the following doesn't work, of course, since it's just a number:
myRTObject.myEnumeratedType.addEventListener(
  gapi.drive.realtime.EventType.OBJECT_CHANGED, self.onTypeChanged);
I can add the event listener to the whole object:
myRTObject.addEventListener(
  gapi.drive.realtime.EventType.OBJECT_CHANGED, self.onTypeChanged);
But I'm only interested in changes to that number (and if I were interested in other changes, I wouldn't want to examine every field to see what's changed).
So let's say I store it as a realtime string, initializing it like this:
function initializeMyRTObjectType() {
  var model = gapi.drive.realtime.custom.getModel(this);
  // other fields here
  this.myEnumeratedType = model.createString();
}
Now I'll get my change events, but they won't necessarily be atomic, and I can't know whether a change, say from "100" to "1001", is merely a change en route to "101", and so whether I should react to it (this exact example may not be valid, but the idea is there...)
So the question is, is there either a way to know that all (compounded?) changes, insertions/deletions are complete on a string field, or (better) a different recommended way to store a number, and get atomic notification when it has been changed?
You also get a VALUE_CHANGED event on the containing object like you would for a map:
myRTObject.addEventListener(gapi.drive.realtime.EventType.VALUE_CHANGED,
  function(event) {
    if (event.property === 'myEnumeratedType') {
      // business logic
    }
  });

In Meteor Collections, use an array-field or another collection?

The question
In short, my question is: when an array in a document is changed, will the users receive the new array, or just the changes?
If that question is unclear, I've described my problem below.
The problem
I have a collection whose documents contain an array field two users will push values to. A document in this collection kind of looks like this:
var document = {
  userId1: "...user id...", // The id of the first of the two users.
  userId2: "...user id...", // The id of the second of the two users.
  data: [] // The field the two users will push values to.
}
data will be empty from the beginning, and the users will then take turns pushing values to it.
When one of the users pushes some value to data, the server will send the changes to the second user. Will the second user receive the entire data array, or just the change (the pushed value)? I'm a little worried that the second user will receive the entire data array even though just a single value has been pushed, and if data contains many values, I fear this will become a bottleneck.
Is this the case? If it is, using another collection for storing the values will solve it, right? Something like this:
var document = {
  id: "...unique id...",
  userId1: "...user id...", // The id of the first of the two users.
  userId2: "...user id..." // The id of the second of the two users.
}
var documentData = {
  idReference: "...the unique id in the document above...",
  value: "...a value..."
}
Instead of pushing the values into an array in document, insert them into a collection of documentData documents. This (I know) won't have the downside I fear the first solution has (but I'd rather use the first solution if it doesn't have that downside).
As per https://github.com/meteor/meteor/blob/master/packages/livedata/DDP.md
changed (server -> client):
  collection: string (collection name)
  id: string (document ID)
  fields: optional object with EJSON values
  cleared: optional array of strings (field names to delete)
Users will receive the new array. To only send "diffs," use a collection of {userId: userId, value: value} documents.
I inspected what was sent, as user728291 suggested in the comments, and it seems the entire array field is sent, not just the pushed value. I don't know if this is always the case (I only tested with an array containing a few small values; if it contains many or big values, Meteor may attempt some optimization I couldn't see in my tiny test), but I'll go with the solution using another collection instead of the array field.
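A minimal sketch of the diff-friendly approach (the DocumentData collection name and the createdAt field are illustrative):
// Each pushed value becomes its own document, so DDP sends the other
// user a single `added` message instead of re-sending the whole array.
var DocumentData = new Mongo.Collection('documentData');

function pushValue(documentId, value) {
  DocumentData.insert({
    idReference: documentId, // points back at the parent document
    value: value,
    createdAt: new Date() // for preserving the turn order
  });
}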
