First off, I'm still a relative newbie to the world of React & Redux. I'm reading about Normalizing State Shape, and their examples are about storing data by ID. But what if I have data that is keyed on multiple dimensions?
For example, my app is displaying cost data for a given Service ID, which is retrieved from an API. However, the user is able to select a time range. The start and end timestamps are passed to the API, and the API returns the aggregated data over that time range. I want to be able to store all the different time period data in Redux so if the user goes back to a previous time period, that data is already there (our API is slow, so having already loaded data available is critical to user experience).
So not only do I have data keyed by Service ID, but also by Start Time / End Time. Since Redux recommends flat data structures, I wonder how flat I should make it, because normally I would store the data like this:
{
  costData: {
    [service_id]: {
      [start_time]: {
        [end_time]: {
          /* data */
        }
      }
    }
  }
}
But that seems to go against the idea of flattening the data. One of my ideas was to generate an ID from the Service ID, Start Time & End Time, of the form:
<ServiceID>::<StartTime>::<EndTime>
eg.
00123::1505423419::1505785502
So the data is fairly flat:
{
  costData: {
    '00123::1505423419::1505785502': {
      /* data */
    }
  }
}
The component can generate this ID and pass it to the fetchCostData() action, which can dispatch and fetch the data and store that data on that generated ID. But I don't know if that's the best approach. Are there any guidelines or recommendations on how to approach this?
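A minimal sketch of that composite-key approach (the action type and reducer shape here are illustrative assumptions, not an established pattern):

```javascript
// Build a flat cache key from the three dimensions.
const costDataKey = (serviceId, startTime, endTime) =>
  `${serviceId}::${startTime}::${endTime}`;

// Reducer: store each API response under its composite key.
function costDataReducer(state = {}, action) {
  switch (action.type) {
    case 'COST_DATA_RECEIVED':
      return {
        ...state,
        [costDataKey(action.serviceId, action.startTime, action.endTime)]:
          action.data,
      };
    default:
      return state;
  }
}
```

Before dispatching a fetch, the action creator can look up `state.costData[key]` and skip the slow API call entirely if that slice is already populated.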
I recommend using selectors (Reselect) for this nested data if you're not comfortable with modifying your API.
-> Selectors are the best approach for computing derived data, allowing Redux to store the minimal possible state.
-> Selectors are efficient. A selector is not recomputed unless one of its arguments changes.
-> Selectors are composable. They can be used as input to other selectors.
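For example, a memoized selector over the nested shape could look like the sketch below. To keep it self-contained it includes a tiny stand-in for Reselect's createSelector (in a real app you would import it from the reselect package); the selector and props names are illustrative.

```javascript
// Tiny stand-in for Reselect's createSelector:
// recompute only when an input selector's result changes.
function createSelector(inputs, compute) {
  let lastArgs = null;
  let lastResult;
  return (...selectorArgs) => {
    const args = inputs.map((sel) => sel(...selectorArgs));
    if (!lastArgs || args.some((a, i) => a !== lastArgs[i])) {
      lastArgs = args;
      lastResult = compute(...args);
    }
    return lastResult;
  };
}

// Input selectors: pick the raw slice and the lookup parameters.
const selectCostData = (state) => state.costData;
const selectParams = (_, props) => props; // { serviceId, startTime, endTime }

// Derived: walk the nested shape, returning null when a level is missing.
const selectCostForRange = createSelector(
  [selectCostData, selectParams],
  (costData, { serviceId, startTime, endTime }) =>
    (((costData[serviceId] || {})[startTime] || {})[endTime]) || null
);
```

This keeps the nested shape in the store while components only ever see the one slice they asked for.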
In addition to the other answer, you may want to read the article Advanced Redux Entity Normalization, which describes ways to track additional lookup descriptions of normalized data. I also have some other articles on normalization in the Redux Techniques#Selectors and Normalization section of my React/Redux links list.
I'm pretty new to GraphQL and Apollo but I've been using Redux along with React for the last 3 years. Based on Apollo documentation, they encourage developers to use it as the single source of truth:
We want to be able to access boolean flags and device API results from
multiple components in our app, but don't want to maintain a separate
Redux or MobX store. Ideally, we would like the Apollo cache to be the
single source of truth for all data in our client application
I'm trying to figure out the way to replicate with Apollo what Redux would allow me. In my application, I have "Tags", an array of objects with close to 15 different fields each. They are used on 3 different sections of my app and each section shows specific "Tags" as well as specific fields from the "Tags". Based on that, the way I approach this with Redux is to fetch the "Tags" from my API, and, in the reducer, I create different arrays containing the IDs of the specific "Tags" I need for each section and I also create a Map (id, value) with the original data. It would be something like:
const tags = new Map(); //(tagId, tag) containing all the tags
const sectionATags = []; // array of ids for section A tags
const sectionBTags = []; // array of ids for section B tags
const sectionCTags = []; // array of ids for section C tags
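The reducer doing that normalization could be sketched like this (the action type and the rule assigning tags to sections are hypothetical; note that a plain object is more idiomatic than a Map in Redux state, but this mirrors the shape above):

```javascript
const initialState = {
  tags: new Map(), // (tagId, tag) containing all the tags
  sectionATags: [],
  sectionBTags: [],
  sectionCTags: [],
};

function tagsReducer(state = initialState, action) {
  if (action.type !== 'TAGS_LOADED') return state;
  const byId = (section) =>
    action.payload.filter((t) => t.section === section).map((t) => t.id);
  return {
    tags: new Map(action.payload.map((tag) => [tag.id, tag])),
    // Hypothetical rule: each tag declares which section it belongs to.
    sectionATags: byId('A'),
    sectionBTags: byId('B'),
    sectionCTags: byId('C'),
  };
}
```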
My goal is to replicate the same behavior but, even though they encourage you to manage your local state using Apollo, I'm not sure if what I want to achieve is possible in a simple way, or if it is actually good practice to do so with Apollo. I've been following this example from the docs, and what they mainly do is add or remove extra fields on the data received from the server by extending the query or mutating the cached data with the @client directive.
At the same time, I understand GraphQL was meant to query the specific data you need, instead of the typical REST request where you would get a big JSON with all the data regardless of whether you need it or not, but it feels like it wouldn't be efficient to do 3 different queries in this case with the specific data I need for each section.
I'm not sure if I'm missing something or maybe Apollo is thought for "simpler local state management". Am I following the right path or shall I keep using Redux with another GraphQL library that just allows me to fetch data without a management layer?
Apollo has a normalized cache: you can query all the data [every field required anywhere] once, and subsequent/additional queries can be "cache-only". No additional requests will be made - it's safe as long as all the required fields already exist in the cache.
Converting the data structure is usually not needed - we work on the data [and shape] we need ("not our problem but the backend's"). You can:
change the data shape in a resolver [in a graphql API wrapping REST],
reshape data in a REST link (direct REST API access from Apollo),
read data once at startup and save the converted result locally (all following queries read the local copy)
Common app state can be left in Redux - data fetching should be in Apollo.
Redux guidelines suggest to
think of an app's state as a database
and to prefer key-based objects over arrays when it comes to storing resources. This makes total sense, since it simplifies 99% of the most common use cases when dealing with collections: search, find, add more, remove, read...
Unfortunately the downsides show up when it comes to keeping a filterable and sortable collection of resources synched with API responses. For example a typical request:
GET /users?status=active&orderBy=name&orderDir=asc&lastID=1234&limit=10
will return a filtered, sorted and paged list (array) of users. Typically the reducer will then take this array and do something like:
users: {...state.users, ...keyBy(action.payload, 'id')}
This will merge the new data with the previously fetched data, breaking the ordering and filtering computed by the API.
The app must then perform a second, client-side, computation on the collection to reconstruct the expected list. This results in:
redundant computation (redoing something that has already been done by the server)
duplication of logic (same filtering and sorting code deployed both client-side and server-side)
maintenance cost (client app developers must take on the extra burden of keeping filter and sort logic synched every time it changes on the backend, to guarantee consistency)
Another downside, if you are implementing some sort of infinite loading, is that you must keep track of the lastID, since there is no way to deduce what the last loaded ID is after results have been merged.
So the question:
What's the best approach to design stores and reducers that must deal with sorted/filtered/paged data fetched via APIs?
One of the common approaches is to keep an object-index map and a sorted object-list in separate structures.
Example reducer (using ramda):
const R = require('ramda');

function reducer(state, action) {
  if (action.type === 'USERS_LOADED') {
    return R.merge(state, {
      // id -> user lookup map
      userIndex: R.reduce(
        (acc, user) => R.assoc(user.id, user, acc),
        {},
        action.payload
      ),
      // ids in the order returned by the server
      userList: R.map(R.prop('id'), action.payload)
    });
  }
  return state;
}
Example connect selector:
connect(
  state => ({
    // this reconstructs users in the original server order
    users: state.userList.map(id => state.userIndex[id])
  })
)(...)
You can also use open-source project DataScript.
An immutable in-memory database and Datalog query engine.
DataScript is meant to run inside the browser. It is cheap to create,
quick to query and ephemeral. You create a database on page load, put
some data in it, track changes, do queries and forget about it when
the user closes the page.
DataScript databases are immutable and based on persistent data
structures. In fact, they’re more like data structures than databases
(think Hashmap). Unlike querying a real SQL DB, when you query
DataScript, it all comes down to a Hashmap lookup. Or series of
lookups. Or array iteration. There’s no particular overhead to it. You
put a little data in it, it’s fast. You put in a lot of data, well, at
least it has indexes. That should do better than you filtering an
array by hand anyway. The thing is really lightweight.
It has a nice JavaScript API. Usage example (a bit outdated). Discussion by the Redux core team.
What's the best way to handle a case where multiple components use one store (which is populated by API calls in an action creator), but each component may need to access different sets of data without affecting the others? I.e.: two table components display data from the WidgetStore; one table wants all widgets, the other only displays widgets whose name contains "foo" (based on user input). The table being queried via the API has tens of thousands of widgets, so loading them all into the store and filtering from the store isn't practical. Is there a Flux architecture (like Redux) that already has a way of handling this type of thing?
The simplest way is to just create a parent component and selectively hand off data, using a pluck or selector function, to each of the children.
For a more general answer going forward... if you follow something along the lines of Redux, there are already proven patterns which will help you understand passing complex data down.
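A sketch of the pluck/selector idea: the parent derives each child's slice from the one store before rendering (the store shape and helper names here are made up for illustration).

```javascript
// Selector helpers the parent applies to the shared store state
// before handing data to each table component.
const selectAllWidgets = (storeState) => storeState.widgets;

const selectFooWidgets = (storeState) =>
  storeState.widgets.filter((w) => w.name.includes('foo'));

// In the parent's render, each child gets only its own slice, e.g.:
//   <AllWidgetsTable widgets={selectAllWidgets(state)} />
//   <FooWidgetsTable widgets={selectFooWidgets(state)} />
```

Because each child receives an already-plucked slice, filtering in one table never affects the other.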
I'm in a project where we're currently using Redux for state management in our React-based single page application, and we've run into an issue regarding when/how to clean out unused data from our stores (technically sub-state on our global Redux store).
For example we have a calendar "store" which looks like:
calendar = {
  "2015-11-06": {
    // Loads of data
  },
  ... // More dates
}
Mostly we only care about a single date at a time, but there are cases where different components need different calendar dates at the same time.
The question is: Is there some kind of strategy to "garbage collect" stores?
My initial thought is that components that need a specific calendar date will have to "reserve" that date, and when a component unmounts, it'll remove its reservation. That way, when we reach some kind of size limit, we can just remove all dates that aren't reserved by any component.
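One way to sketch that reservation idea as a reducer (the action types and the eviction threshold are made up):

```javascript
const MAX_DATES = 30; // arbitrary size limit before eviction kicks in

function calendarReducer(state = { byDate: {}, reservations: {} }, action) {
  switch (action.type) {
    case 'DATE_LOADED': {
      // Record the data and bump the reservation count for that date.
      const reservations = { ...state.reservations };
      reservations[action.date] = (reservations[action.date] || 0) + 1;
      let byDate = { ...state.byDate, [action.date]: action.data };
      // Past the limit, evict every date no component has reserved.
      if (Object.keys(byDate).length > MAX_DATES) {
        byDate = Object.fromEntries(
          Object.entries(byDate).filter(([d]) => reservations[d] > 0)
        );
      }
      return { byDate, reservations };
    }
    case 'DATE_RELEASED': {
      // A component unmounted; drop its reservation but keep the data
      // around until the size limit forces an eviction.
      const count = (state.reservations[action.date] || 1) - 1;
      return {
        ...state,
        reservations: { ...state.reservations, [action.date]: count },
      };
    }
    default:
      return state;
  }
}
```

Components would dispatch DATE_LOADED when they fetch a date and DATE_RELEASED on unmount.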
It's a bit of a hassle though since it adds the need for components to handle "reservations" when fetching a date and when the component unmounts.
Is this a feasible strategy or is there a better alternative?
Sounds like a good use-case for a WeakSet or a WeakMap.
WeakMap holds references to its keys weakly, meaning that if there are
no other references to one of its keys, the object is subject to
garbage collection.
The key to all of this lies in a combination of this statement:
Mostly we only care about a single date at a time, but there are cases where different components need different calendar dates at the same time.
...and how we think about state in an architecture like Flux/Redux.
There's nothing stopping you from restructuring your existing data store like so:
calendar = {
  mainDate: {
    date: "2015-11-06",
    // Loads of data
  }
}
Then, whenever you hit one of those special cases where you need more than one date, you issue an action that replaces the calendar state with something that looks like this:
calendar = {
  mainDate: {
    date: "2015-11-06",
    // Loads of data
  },
  otherDate: {
    date: "2016-02-29",
    // Other data. Perhaps even less than the loads you'd have in mainDate
  }
}
Somewhere along the line, your components will decide for themselves whether they need to look at mainDate or otherDate. (They may very well extract the appropriate one and then pass the contents down to their child components; you may want to introduce an abstraction layer here.)
And when the other component is done using the other date, it'll issue another action that generates:
calendar = {
  mainDate: {
    date: "2015-11-06",
    // Loads of data
  }
}
...thus automatically taking care of your garbage collection concern.
Obviously, there's a lot of implementation detail to be worked out here, but that's specific to your situation. The key concept is to contain all the state (and only the state) that you need to run the app at any given time, and use actions to make the transition from one state to another.
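Those state transitions could be expressed as two actions and a small reducer (the action types are illustrative):

```javascript
function calendarReducer(state = { mainDate: null }, action) {
  switch (action.type) {
    case 'SET_OTHER_DATE':
      // Special case: a second component needs its own date.
      return { ...state, otherDate: { date: action.date, ...action.data } };
    case 'CLEAR_OTHER_DATE': {
      // The special case is over; dropping the key is the
      // "garbage collection" step.
      const { otherDate, ...rest } = state;
      return rest;
    }
    default:
      return state;
  }
}
```

The component that needed the second date dispatches SET_OTHER_DATE on mount and CLEAR_OTHER_DATE on unmount.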
I'm learning React and flux.
Say there's a route like
// Route
/continent/:continentId/countries
// Example
/continent/europe/countries
There are two stores, a ContinentsStore and a CountriesStore
What's the best design pattern in Flux so that, when the route is loaded, the ContinentsStore asynchronously fetches a list of all the continents and sets the current one to Europe, and then the CountriesStore looks up the current continent in ContinentsStore and proceeds to download a list of all the countries in that continent?
Specifically, where are the action creators, and what might the actions types be?
Thanks.
There is an aggregation-store concept in Reflux.js: one store can listen to another and provide additional logic that depends on data changes in the observed store. This approach is useful for data aggregation and chaining operations.
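A plain-JavaScript sketch of that chaining idea (this is the concept only, not Reflux's actual API): the countries store subscribes to the continents store and reacts when the current continent changes.

```javascript
// Minimal observable store.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    setState(next) {
      state = next;
      listeners.forEach((fn) => fn(state));
    },
    listen: (fn) => listeners.push(fn),
  };
}

const continentsStore = createStore({ current: null, continents: [] });
const countriesStore = createStore({ countries: [] });

// Stand-in for the async API call (hypothetical data).
const fetchCountriesFor = (continentId) =>
  continentId === 'europe' ? ['France', 'Germany', 'Spain'] : [];

// The countries store listens to the continents store and chains a fetch
// whenever the current continent changes.
continentsStore.listen(({ current }) => {
  if (current) {
    countriesStore.setState({ countries: fetchCountriesFor(current) });
  }
});
```

When the route handler sets the current continent on the ContinentsStore, the CountriesStore updates itself without the route code ever touching it directly.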