I'm using Apollo Client and React, and I'm looking for a strategy to keep a component and its data requirements colocated in such a way that they are accessible to parent/sibling/child components that might need them for queries and mutations. I want to be able to easily update the data requirements, which in turn updates the fields queried by some parent component, or returned by a mutation in a parent/sibling/child, so that my Apollo cache is updated accurately.
I have tried creating a global, high-level graphql directory where all my queries/mutations .graphql files live, importing all the related fragment files located throughout my app, and then importing those directly, but this gets tedious and doesn't follow the parent/child theme where parent queries include child fragments. Also, in large projects you end up traversing long file paths when importing.
I have also tried creating fragment files in the global graphql directory that correspond to component files, but this doesn't give me the component/data-requirement colocation I'm looking for.
This works:
class CommentListItem extends Component {
  static fragments = {
    comment: gql`
      # ...
    `,
  };
}

class CommentList extends Component {
  static fragments = {
    comment: gql`
      # ...
      ${CommentListItem.fragments.comment}
    `,
  };
}

class CommentsPage extends Component {
  static fragments = {
    comment: gql`
      # ...
      ${CommentList.fragments.comment}
    `,
  };
}
graphql(gql`
  query Comments {
    comments {
      ...CommentsListItemComment
    }
  }
  ${CommentsPage.fragments.comment}
`)
However, if I want a mutation in a descendant of CommentsPage, I can't reference the fragment composition from CommentsPage.fragments.comment.
Is there a preferred method or best practice for this type of thing?
Structuring Queries
How to structure your code is always a matter of personal taste, but I think the colocation of queries and components is a big strength of GraphQL.
For queries I took a lot of inspiration from Relay Modern, and the solution looks very close to what you described in your code. Now that the project is getting bigger and we want to generate Flow type definitions for our queries, putting them into separate files next to the component files is also an option, very much like CSS Modules.
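For illustration, a minimal sketch of that file-per-component layout; the file name and the graphql-tag/loader setup are my assumptions, not part of the original answer:
// CommentListItem.js: the fragment lives in a sibling CommentListItem.graphql
// file (assumes a loader such as graphql-tag/loader resolves the import).
import React, { Component } from 'react';
import commentFragment from './CommentListItem.graphql';

class CommentListItem extends Component {
  static fragments = {
    comment: commentFragment,
  };
  // ...
}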
Structuring Mutations
When it comes to mutations, it is often much harder to find a good place for them. Mutations need to be called on events far down the component tree and often change the application state in multiple places. In this case you want the caller to be unaware of the data consumers. Using fragments might seem like an easy answer: the mutation would just include all fragments that are defined for a specific type. But while the mutation then does not need to know which fields are required, it still needs to know who requires fields on the type. I want to point out two slightly different approaches you can base your design on.
Global Mutations: The Relay Approach
In Relay Modern, mutations are basically global operations that can be triggered by any component. This approach is not too bad, since most mutations are written only once and, thanks to variables, are very reusable. They operate on one global state and don't care which UI part consumes the update. When defining a mutation result, you should usually query the properties that might have been changed by the mutation rather than all the properties that other components require (through fragments). E.g. the mutation likeComment(id: ID!) should probably query for the likeCount and likes fields on the comment, without caring much whether any component uses those fields at all or which other fields components require on Comment. This approach gets a bit more difficult when you have to update other queries or fields: the mutation createComment(comment: CreateCommentInput) might want to write to the root query object's comments field. This is where Relay's structure of nodes and edges comes in handy. You can learn more about Relay updates here.
# A reusable likeComment mutation
mutation likeComment($id: ID!) {
  likeComment(id: $id) {
    comment {
      id
      likeCount
      likes {
        id
        liker {
          id
          name
        }
      }
    }
  }
}
Unfortunately, one question remains open: how far should we go? Do I need the names of the people liking the comment, or does the component simply display a number of likes?
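For the createComment case above, a minimal sketch of such an update in Apollo; COMMENTS_QUERY and the shape of the mutation payload are assumptions:
mutate({
  variables: { comment },
  // Write the newly created comment into the root query's comments field.
  update: (cache, { data: { createComment } }) => {
    const previous = cache.readQuery({ query: COMMENTS_QUERY });
    cache.writeQuery({
      query: COMMENTS_QUERY,
      data: { comments: [...previous.comments, createComment.comment] },
    });
  },
});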
Mutations in Query Container
Not all GraphQL APIs are structured the Relay way. Furthermore, Apollo binds mutations to the store similarly to Redux action creators. My current approach is to define mutations on the same level as queries and then pass them down. This way you can access the children's fragments and use them in the mutations if needed. In your example, the CommentListItem component might display a like button. It would define a fragment for its data dependencies, prop types matching the fragment, and a function prop type likeComment: (id: string) => Promise<any>. This function would be passed down from the query container that wraps CommentsPage in a query and a mutation.
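A condensed sketch of such a query container; the query and mutation documents are assumed names:
import { graphql, compose } from 'react-apollo';

export default compose(
  graphql(COMMENTS_QUERY),
  graphql(LIKE_COMMENT_MUTATION, {
    // Expose the mutation as a plain function prop so descendants such as
    // CommentListItem can call it without knowing about Apollo.
    props: ({ mutate }) => ({
      likeComment: id => mutate({ variables: { id } }),
    }),
  }),
)(CommentsPage);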
Summary
You can use both approaches with Apollo. A global mutations folder can contain mutations that can be used anywhere, and you can bind them directly to the components that need them. One benefit is that, e.g. in the likeComment example, the variable id can be derived directly from the component's props and does not need to be bound within the component itself. Alternatively, you can pass mutations down from your query components. This gives you a broader overview of the consumers of the data: in CommentsPage it can be easier to decide what needs to be updated when a mutation completes.
Let me know what you think in the comments!
Related
I'm pretty new to GraphQL and Apollo, but I've been using Redux along with React for the last 3 years. The Apollo documentation encourages developers to use it as the single source of truth:
We want to be able to access boolean flags and device API results from multiple components in our app, but don't want to maintain a separate Redux or MobX store. Ideally, we would like the Apollo cache to be the single source of truth for all data in our client application.
I'm trying to figure out how to replicate with Apollo what Redux allows me to do. In my application I have "Tags", an array of objects with close to 15 different fields each. They are used in 3 different sections of my app, and each section shows specific "Tags" as well as specific fields from the "Tags". With Redux, I fetch the "Tags" from my API and, in the reducer, create different arrays containing the IDs of the specific "Tags" I need for each section, as well as a Map (id, value) with the original data. It would be something like:
const tags = new Map(); // (tagId, tag) containing all the tags
const sectionATags = []; // array of ids for section A tags
const sectionBTags = []; // array of ids for section B tags
const sectionCTags = []; // array of ids for section C tags
My goal is to replicate the same behavior, but, even though they encourage you to manage your local state using Apollo, I'm not sure if what I want to achieve is possible in a simple way, or if it is actually a good practice to do so with Apollo. I've been following this example from the docs, and what they mainly do is add or remove extra fields on the data received from the server by extending the query, or mutate the cached data with the @client directive.
At the same time, I understand GraphQL was meant to query the specific data you need, instead of the typical REST request where you would get a big JSON with all the data regardless of whether you need it or not, but it feels like it wouldn't be efficient to do 3 different queries in this case with the specific data I need for each section.
I'm not sure if I'm missing something or maybe Apollo is thought for "simpler local state management". Am I following the right path or shall I keep using Redux with another GraphQL library that just allows me to fetch data without a management layer?
Apollo has a normalized cache: you can query all the fields required anywhere once, and subsequent/additional queries can be "cache-only". No additional requests will be made; it's safe as long as all the required fields already exist in the cache.
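A minimal sketch of that idea; the tag fields and the TagList component are assumptions:
import React from 'react';
import gql from 'graphql-tag';
import { Query } from 'react-apollo';

// One query up front that includes every field any section needs.
const ALL_TAGS_QUERY = gql`
  query AllTags {
    tags {
      id
      name
      color
      # ...the remaining fields used by sections A, B and C
    }
  }
`;

// Each section asks only for its slice; with "cache-only" it never
// triggers a network request.
const SECTION_A_QUERY = gql`
  query SectionATags {
    tags {
      id
      name
    }
  }
`;

const SectionA = () => (
  <Query query={SECTION_A_QUERY} fetchPolicy="cache-only">
    {({ data }) => <TagList tags={data.tags} />}
  </Query>
);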
Converting the data structure is usually not needed; we work with the data (and shape) we need ("not our problem but the backend's"). You can:
change the data shape in a resolver (in a GraphQL API wrapping REST),
reshape the data in a REST link (direct REST API access from Apollo),
read the data once at startup and save the converted result locally (all subsequent queries read the local copy).
Common app state can be left in Redux; data fetching should be in Apollo.
I'm learning how to use Apollo Client for React and how to manage local state using the cache. From the docs, it's as simple as writing to the cache using cache.writeData and reading from it using a simple query. Their example is
const GET_VISIBILITY_FILTER = gql`
  {
    visibilityFilter @client
  }
`;
I then wrap my JSX in a Query and can read the value fine (in my case loggedIn)
return <Query query={GET_LOGGED_IN}>
  {({ loading, error, data }) => {
    const { loggedIn } = data;
    // ...render based on loggedIn
  }}
</Query>;
I'm curious, though, why I don't need to write a resolver for this to work. Is it because with scalar values if a value exists at the root of an object, that is, here at the top level of the cache, Apollo/GraphQL automatically just grabs that value and sends it to you without a resolver?
What are the limits of this, that is, could you grab arrays at the root level without writing a resolver? Objects? I'm assuming not, as these don't seem to be scalars. But if the data is hard-coded, that is, doesn't require any DB lookup, the answer is still no?
From the docs:
[The #client directive] tells Apollo Client to fetch the field data locally (either from the cache or using a local resolver), instead of sending it to our GraphQL server.
If the directive is present on a field, Apollo will attempt to resolve the field using the provided resolver, or fall back to fetching directly from the cache if one doesn't exist. You can initialize your cache with pretty much any sort of data at the root level (taking care to include __typename fields for objects) and you should be able to fetch it without having to also provide a resolver for it. On the other hand, a resolver gives you more granular control over what's actually fetched from the cache, i.e. you could initialize the cache with an array of items but use a resolver to provide a way to filter or sort them.
There's an important nuance here: fetching without a resolver only works when there's data in the cache to fetch. That's why it's important to provide initial state for these fields when building your client. If you have a more deeply nested @client field (for example, maybe you're including additional information alongside data fetched from the server), you also technically don't have to write a resolver. But we typically do write them, because there is no existing data in the cache for those nested fields.
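For example, a sketch of seeding the cache at startup so both a scalar and an array resolve without resolvers; the field names are made up:
import gql from 'graphql-tag';
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache();

// Initial state at the root of the cache; objects need __typename.
cache.writeData({
  data: {
    visibilityFilter: 'SHOW_ALL',
    draftTodos: [
      { __typename: 'Todo', id: 1, text: 'No resolver needed for me' },
    ],
  },
});

// Both fields can now be queried with @client and no resolvers.
const GET_DRAFTS = gql`
  {
    visibilityFilter @client
    draftTodos @client {
      id
      text
    }
  }
`;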
In addition to a great (as usual) answer from Daniel, I would like to add a few words.
You can read/write objects to the cache and manipulate their properties directly.
Using resolvers and mutations for local data can help with readability, unified data access/change, overall manageability, and future changes (e.g. moving a feature or setting to the server).
You can find a more practical/advanced example of local state management in the apollo-universal-starter-kit project.
I'm working on a part of a React app in which a high-level component creates and passes certain props down through a few layers of components that don't use them to a final component that does.
When validating props with propTypes, is there a good reason to list these props to be checked at every level, going down through the layers? Or is it acceptable to check them only in the final component that uses them?
It seems to me that the former method is redundant; the latter seems to make more sense to me, but I'm curious if there is a reason why I ought to do the former. I haven't seen any discussion on it, which could mean it's an unimportant question, but I'd be interested to know.
I agree with you: if you use props only to drill down to children in the tree, validation can be done once, at the leaf components where you really use the data. I recently found out that one more place is important for prop validation: components that fetch data from outside the app's scope, such as a backend, because sometimes the structure or the types of the data change, and without prop validation it will be difficult to find which part is broken.
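A small sketch of validating only at the leaf; the component names are hypothetical:
import React from 'react';
import PropTypes from 'prop-types';

// The leaf that actually uses the data validates it.
const UserBadge = ({ user }) => <span>{user.name}</span>;
UserBadge.propTypes = {
  user: PropTypes.shape({ name: PropTypes.string.isRequired }).isRequired,
};

// Intermediate layers just forward props and don't re-declare them.
const Toolbar = props => <UserBadge {...props} />;
const Header = props => <Toolbar {...props} />;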
I followed the Apollo documentation to provide two mutations (createUser, then signInUser) on a single React component, but one mutation (the "outer" one, signInUser) is not accessible to my code (this.props.signInUser is not a function). Maybe my server-side-enabled setup is masking one mutation, but I don't see where. Help appreciated :)
See full code here.
EDIT: same issue when using compose, see code.
You just need to name the mutations when passing them into the component (by using the props function); otherwise they are ALL called mutate and override one another.
Here's a full example with named mutations:
https://gist.github.com/Siyfion/a2e9626ed431f8ff91af2c9b8cba1d67
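In condensed form, the fix looks roughly like this; the mutation documents and component name are placeholders:
import { graphql, compose } from 'react-apollo';

export default compose(
  // Without the props function both HOCs inject a prop named "mutate"
  // and one overrides the other.
  graphql(CREATE_USER_MUTATION, {
    props: ({ mutate }) => ({ createUser: mutate }),
  }),
  graphql(SIGN_IN_USER_MUTATION, {
    props: ({ mutate }) => ({ signInUser: mutate }),
  }),
)(RegisterPage);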
It was caused by my Apollo higher-order component, which did more complex things than just calling Apollo's graphql (related to server-side rendering) and must somehow have been masking properties. I bypassed this SSR behaviour for these mutations (it isn't needed there), see code.
I'm using the ultimate combination of React + Redux + Reselect + Immutable.js in my application. I like the idea of reselect because it lets me keep my state (maintained by the reducers) as simple as possible. I use a selector to calculate the actual state I need which is then fed to the React components.
The problem here is that a small change in one of the reducers causes the selectors to recalculate the whole derived output, and as a result the whole React UI is updated. My pure components don't help anymore. It's slow.
Typical example: The first part of my data comes from the server and is basically immutable. The second part is maintained by the client and is mutated using the Redux actions. They are maintained by separate reducers.
I use a selector to merge both parts into a single list of Records which is then passed to the React components. But obviously, when I change a single thing in one of the objects, the whole list is regenerated, new instances of the Records are created, and the UI is completely re-rendered.
Obviously running the selector every time is not exactly efficient but is still reasonably fast and I'd be willing to make that trade off (because it does make the code way simpler and cleaner). The problem is the actual rendering which is slow.
What I'd need to do would be to deep merge the new selector output with the old one because Immutable.js library is smart enough not to create new instances when nothing was changed. But as selectors are simple functions that do not have access to previous outputs, I guess it's not possible.
I assume that my current approach is wrong and I'd like to hear other ideas.
Probably the way to go would be to get rid of reselect in this case and move the logic into a hierarchy of reducers that would use incremental updates to maintain the desired state.
I solved my problem but I guess there is no right answer as it really depends on a specific situation. In my case, I decided to go with this approach:
One of the challenges that the original selector handled nicely was that the final information was compiled from many pieces that were delivered in an arbitrary order. If I decided to build up the final information in my reducers incrementally, I'd have to make sure to account for all possible scenarios (all possible orders in which the information pieces could arrive) and define transformations between all possible states. Whereas with reselect, I can simply take what I currently have and make something out of it.
To keep this functionality, I decided to move the selector logic into a wrapping parent reducer.
Okay, let's say that I have three reducers, A, B and C, and corresponding selectors. Each handles one piece of information. The piece could be loaded from server or it could originate from the user on the client side. This would be my original selector:
const makeFinalState = (a, b, c) => new List(a).map(item =>
  new MyRecord({ ...item, ...(b[item.id] || {}), ...(c[item.id] || {}) }));

export const finalSelector = createSelector(
  [selectorA, selectorB, selectorC],
  (a, b, c) => makeFinalState(a, b, c));
(This is not the actual code but I hope it makes sense. Note that regardless of the order in which the contents of individual reducers become available, the selector will eventually generate the correct output.)
I hope my problem is clear now. In case the content of any of those reducers changes, the selector is recalculated from scratch, generating completely new instances of all records which eventually results in complete re-renders of React components.
My current solution looks like this:
export default function finalReducer(state = new Map(), action) {
  // Let each child reducer handle its own slice first.
  state = state
    .update('a', a => aReducer(a, action))
    .update('b', b => bReducer(b, action))
    .update('c', c => cReducer(c, action));

  switch (action.type) {
    case HEAVY_ACTION_AFFECTING_A:
    case HEAVY_ACTION_AFFECTING_B:
    case HEAVY_ACTION_AFFECTING_C:
      // Rebuild the derived state from scratch, but deep-merge it into the
      // old one so unchanged Records keep their identity.
      return state.update('final', final => (final || new List()).mergeDeep(
        makeFinalState(state.get('a'), state.get('b'), state.get('c'))));
    case LIGHT_ACTION_AFFECTING_C: {
      const update = makeSmallIncrementalUpdate(state, action.payload);
      return state.update('final', final => (final || new List()).mergeDeep(update));
    }
    default:
      return state;
  }
}

export const finalSelector = state => state.get('final');
The core idea is this:
If something big happens (i.e. I get a huge chunk of data from the server), I rebuild the whole derived state.
If something small happens (i.e. users selects an item), I just make a quick incremental change, both in the original reducer and in the wrapping parent reducer (there is a certain duplicity, but it's necessary to achieve both consistency and good performance).
The main difference from the selector version is that I always merge the new state with the old one. The Immutable.js library is smart enough not to replace the old Record instances with the new Record instances if their content is completely the same. Therefore the original instances are kept and as a result corresponding pure components are not re-rendered.
Obviously, the deep merge is a costly operation, so this won't work for really large data sets. But the truth is that this kind of operation is still fast compared to React re-renders and DOM operations. So this approach can be a nice compromise between performance and code readability/conciseness.
Final note: If it weren't for those light actions handled separately, this approach would be essentially equivalent to replacing shallowEqual with deepEqual inside the shouldComponentUpdate method of pure components.
This kind of scenario can often be solved by refactoring how the UI is connected to the state. Let's say you have a component displaying a list of items: instead of connecting it to the already built list of items, you could connect it to a simple list of ids, and connect each individual item to its record by id. This way, when a record changes, the list of ids itself doesn't change and only the corresponding connected component is re-rendered.
If, in your case, the record is assembled from different parts of the state, the selector yielding an individual record could itself be connected to the relevant parts of the state for this particular record.
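A rough sketch of that connection layout, with hypothetical selectors:
import React from 'react';
import { connect } from 'react-redux';

// Each item connects to its own record, so a change to one record
// re-renders only that item.
const ItemView = ({ item }) => <li>{item.name}</li>;
const ConnectedItem = connect(
  (state, ownProps) => ({ item: selectItemById(state, ownProps.id) })
)(ItemView);

// The list connects only to a stable list of ids.
const ListView = ({ ids }) => (
  <ul>{ids.map(id => <ConnectedItem key={id} id={id} />)}</ul>
);
export const ItemList = connect(
  state => ({ ids: selectItemIds(state) })
)(ListView);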
Now, about the use of immutable.js with reselect: this combination works best if the raw parts of your state are already immutable.js objects. This way you can take advantage of the fact that they use persistent data structures, and the default memoization function from reselect works best. You can always override this memoization function, but feeling that a selector should access its previous return value is often a sign that it is in charge of data that should be held in the state, or that it is gathering too much data at once and more granular selectors could help.
It looks like you are describing a scenario very close to the one that led me to write re-reselect.
re-reselect is a small reselect wrapper, which initializes selectors on the fly using a memoized factory.
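A minimal usage sketch, assuming records are kept in an Immutable Map keyed by id:
import createCachedSelector from 're-reselect';

// One memoized selector instance per id, so changing one record only
// invalidates the selector cached under that id.
const selectItemById = createCachedSelector(
  state => state.items,
  (state, id) => id,
  (items, id) => items.get(id),
)(
  (state, id) => id // the cache key resolver
);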
(Disclaimer: I'm the author of re-reselect).