React-Query update cache after mutation performed - reactjs

I have become slightly lost with react-query. Essentially, I have a useQuery to fetch a user from my database. Their details are added to a form and they can update and submit.
The problem I have is that the update is done to a different database. The main database will be batch updated at a later point. As such, instead of refetching the initial data, I need to use setQueryData to update the cache version.
const queryClient = useQueryClient()
const { mutate } = useMutation(postUser, {
  onSuccess: async (response) => {
    console.log(response)
    queryClient.cancelQueries('user');
    const previousUser = queryClient.getQueryData('user');
    console.log(previousUser)
    queryClient.setQueryData('user', {
      ...previousUser,
      data: [
        previousUser.data,
        { '#status': 'true' },
      ],
    })
    return () => queryClient.setQueryData('user', previousUser)
  }
})
At the moment I have something like the above. So it calls postUser and gets a response. The response looks something like this:
data:
  data:
    user_uid: "12345"
    status: "true"
  message: "User added."
  status: 1
I then call getQueryData in order to get the cached version of the data, which currently looks like this:
data:
  #userUuid: "12345"
  #status: ""
message: "User found."
status: 1
So I need to update #status in the cached version to now be "true". With what I have above, it seems to add a new entry into the cache:
data: Array(2)
  0: {#userUuid: "12345", #status: ""}
  1: {#status: "true"}
message: "User found."
status: 1
So how do I overwrite the existing one without adding a new row?
Thanks

This is not really react-query specific. In your setQueryData code, you set data to an array with two entries:
data: [
  previousUser.data,
  { '#status': 'true' },
],
first entry = previousUser.data
second entry = { '#status': 'true' }
overriding would be something like this:
queryClient.setQueryData('user', {
  ...previousUser,
  data: {
    ...previousUser.data,
    '#status': 'true',
  },
})
On another note, it seems like you're mixing up the onMutate callback with the onSuccess callback. If you want to do optimistic updates, you'd implement the onMutate function in a similar way to what you've done above:
cancel outgoing queries
set data
return a "rollback" function that can be called from onError
This is basically the workflow found here in the docs.
If you implement onSuccess, you're updating the cache after the mutation was successful, which is also a legitimate, albeit different, use-case. Here, you don't need to return anything; it would be more similar to the updates from mutation responses example.
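As a rough sketch of that onMutate workflow, here is the snapshot-and-rollback flow modeled with a plain Map standing in for the query cache (the real react-query API passes the onMutate return value as context to onError; the queryClient below is a hypothetical stand-in, not the library):

```javascript
// Minimal stand-in for the query cache (the real library's queryClient is
// more involved; this only illustrates the snapshot-and-rollback flow).
const cache = new Map();
const queryClient = {
  getQueryData: (key) => cache.get(key),
  setQueryData: (key, value) => cache.set(key, value),
};

// onMutate: snapshot the current value, apply the optimistic update,
// and return the snapshot as context.
function onMutate(optimisticUser) {
  const previous = queryClient.getQueryData('user');
  queryClient.setQueryData('user', optimisticUser);
  return { previous };
}

// onError: restore the snapshot from the context.
function onError(context) {
  queryClient.setQueryData('user', context.previous);
}

queryClient.setQueryData('user', { '#status': '' });
const context = onMutate({ '#status': 'true' });
onError(context); // the cache is rolled back to { '#status': '' }
```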

Related

How to use RTK Query's 'upsertQueryData' to add a new entry to the cache?

I have a GraphQL query set up using the Redux Toolkit's "RTK Query" data fetching functionality. After a mutation related to this query, I want the data returned from the mutation to be added to the cache without calling the query against the server again. I used the thunk action creator upsertQueryData from the API slice utilities for this. (Reference Documentation).
So far I was only able to overwrite the complete cache collection related to the query, but did not find a way to just add one entry. Perhaps someone knows what I'm doing wrong?
The GraphQL Query, that is working fine. It returns a collection of 'sites'.
endpoints: (builder) => ({
  getSites: builder.query({
    query: () => ({
      document: gql`
        query MyQuery {
          sites {
            id
            name
            description
          }
        }
      `,
    }),
  }),
  ...
The mutation, with usage of upsertQueryData. This overwrites the whole 'sites' collection in the cache instead of adding one site. To be clear: when sending the mutation I don't have an id yet; it is returned by the server through the mutation callback.
createSite: builder.mutation({
  query: ({ name }) => ({
    document: gql`
      mutation createSite {
        createSite(
          name: "${name}"
          description: "The workspace where Peter works from home in Dordrecht",
        ) {
          site {
            id
            name
            description
          }
        }
      }
    `,
  }),
  async onQueryStarted({}, { dispatch, queryFulfilled }) {
    const { data } = await queryFulfilled;
    const newSiteEntry = data.createSite.site;
    dispatch(sites.util.upsertQueryData('getSites', newSiteEntry.id, newSiteEntry));
  },
}),
I expect it to add one data object to the sites cache entry instead of overwriting it, so you get something like this in the cache:
sites: [
  { id: '1', name: 'Existing site 1', description: 'description 1' },
  { id: '2', name: 'Existing site 2', description: 'description 2' },
  { id: '3', name: 'New site', description: 'new description' },
]
There is a misconception about the cache here. Since your getSites endpoint takes no argument and you probably only ever call useGetSitesQuery(), there is only ever one cache entry for it (called getSites(undefined)), and you want to update that existing cache entry with additional entries.
upsertQueryData is for overwriting that whole cache entry with a new value; in your case, it creates completely unrelated cache entries that you will never read from. That is not what you want to do.
As a result, you want to use updateQueryData on that one existing cache entry instead:
dispatch(
  api.util.updateQueryData('getSites', undefined, (draft) => {
    draft.sites.push(newSiteEntry)
  })
)
Keep in mind, though, that we generally recommend using providesTags/invalidatesTags to automatically refetch other endpoints instead of doing manual optimistic updates on them.
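For illustration, the recipe passed to updateQueryData just mutates a draft of the cached value (RTK Query applies it through Immer; this sketch applies the same recipe to a plain object, so it only shows the shape of the change):

```javascript
// Cached value shaped like the getSites(undefined) entry from the question.
const cached = {
  sites: [
    { id: '1', name: 'Existing site 1', description: 'description 1' },
    { id: '2', name: 'Existing site 2', description: 'description 2' },
  ],
};
const newSiteEntry = { id: '3', name: 'New site', description: 'new description' };

// The recipe passed to updateQueryData: mutate the draft in place.
const recipe = (draft) => {
  draft.sites.push(newSiteEntry);
};

recipe(cached); // cached.sites now has three entries, the new one last
```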

ApolloClient fetchMore with custom merge updates cache, but useQuery returns old data

I'm trying to implement pagination according to the ApolloClient core pagination guide: https://www.apollographql.com/docs/react/pagination/core-api
This is my type policy:
typePolicies: {
  Query: {
    fields: {
      paginatedProductTracking: {
        // Include everything except 'skip' and 'take' to be able to use `fetchMore`
        // and repaginate when reading the cache
        // (essential for switching between desktop pagination and mobile lazy loading
        // without having to refetch)
        keyArgs: (args) => JSON.stringify(omit(args, ['query.skip', 'query.take'])),
        merge: (existing = [], incomingResponse, { args }) => {
          const responseData = incomingResponse?.paginatedData || [];
          return [
            // conservative merge that handles pages not being requested in order
            ...existing.slice(0, args?.query.skip || 0),
            ...responseData,
            ...existing.slice((args?.query.skip || 0) + responseData.length),
          ];
        },
      },
    },
  },
},
As you see in the comment, one complication is that skip and take are in a nested arg called query, but it looks fine in the cache.
This is my component's render function (leaving out things that should be irrelevant for this issue, but let me know if something is missing):
...
const initialQuery = {
  skip: 0,
  take: 3,
  ...
}
const { data, loading, fetchMore, networkStatus } = useProductTrackingAggregatedDataQuery({
  notifyOnNetworkStatusChange: true,
  variables: {
    query: initialQuery,
  },
});
...
return <InfinityScrollList
  // Apollo merges values for `variables`, but only in a shallow way,
  // hence merging the query params manually
  onLoad={async () => {
    await fetchMore({
      variables: {
        query: {
          ...initialQuery,
          skip: items.length,
        },
      },
    });
  }}
/>
I feel like I'm doing the right thing, because the Apollo Cache looks as expected and it does update when I fetch more entries:
(screenshots: initial cache / after fetchMore)
I can also see the expected network request.
The problem is that my component doesn't rerender :/
I forced rerendering by adding networkStatus to my query result, but I still didn't get the merged result from the cache (only the initial list). Doing this, I also noticed that I never received network status 3 (fetchMore); I only see 1 (loading) and then 7 (standby).
Using the lazy hook could be a workaround, but I'd really like to avoid that, because I'm trying to set a good example in the code base and it would bypass cache invalidation.
It might be relevant that my data doesn't have an id.
I'm on the latest ApolloClient version (3.7.1).
Providing a minimal working example for this would be tough, unfortunately.
OK, I found a solution, and it's an interesting thing that I don't really see covered in the official documentation.
I was trying to write just the array content (paginatedData) to the cache and thought that as long as I did that consistently, I'd be good:
merge: (existing = [], incomingResponse, { args }) => {
  const responseData = incomingResponse?.paginatedData || [];
  return [
    // conservative merge that handles pages not being requested in order
    ...existing.slice(0, args?.query.skip || 0),
    ...responseData,
    ...existing.slice((args?.query.skip || 0) + responseData.length),
  ];
},
And I did see the result of my custom merge function in the dev tools, which sounds like a bug, because the query hook still returned the unmerged data, which is nested in an object (I didn't include this in the question):
const items = data?.paginatedProductTracking.paginatedData || [];
So it makes sense that this doesn't work, because the data I wrote in the cache did not conform with the data returned from the API.
But I was tripped up by the dev tools suggesting it was working.
I solved it by writing the data to the cache with the same type and structure as the API response:
typePolicies: {
  Query: {
    fields: {
      paginatedProductTracking: {
        // Include everything except 'skip' and 'take' to be able to use `fetchMore`
        // and repaginate when reading the cache
        // (essential for switching between desktop pagination and mobile lazy loading
        // without having to refetch)
        keyArgs: (args) => JSON.stringify(omit(args, ['query.skip', 'query.take'])),
        merge: (existing, incoming, { args }) => {
          if (!existing) {
            return incoming;
          }
          if (!incoming) {
            return existing;
          }
          const data = existing.paginatedData;
          const newData = incoming.paginatedData;
          return {
            ...existing,
            // conservative merge that is robust against pages being requested out of order
            paginatedData: [
              ...data.slice(0, args?.query.skip || 0),
              ...newData,
              ...data.slice((args?.query.skip || 0) + newData.length),
            ],
          };
        },
      },
    },
  },
},
And that did the trick :)
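The slice-based merge above can be pulled out into a pure function to check the out-of-order page handling in isolation (a sketch with string placeholders instead of real list items):

```javascript
// Pure version of the slice-based merge from the type policy:
// write `incoming` into `existing` starting at offset `skip`.
function mergePage(existing, incoming, skip) {
  return [
    ...existing.slice(0, skip),
    ...incoming,
    ...existing.slice(skip + incoming.length),
  ];
}

// Pages arriving in order:
let list = mergePage([], ['a', 'b'], 0);  // ['a', 'b']
list = mergePage(list, ['c', 'd'], 2);    // ['a', 'b', 'c', 'd']

// A page re-fetched out of order overwrites its slot instead of appending:
list = mergePage(list, ['B2', 'C2'], 1);  // ['a', 'B2', 'C2', 'd']
```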
fetchMore calls do not update the initial query's variables; all they do is fetch more data with a no-cache policy and write it to the cache using your merge function for the given type policy. This is explained here.
You will need to update the initial query's variables after the successful call to fetchMore to render the new data. Also, the data will be read with the read function in your typePolicies for the given type, or the entire cached result will be returned if none is provided. You should re-implement your pagination within the read function if returning the entire cached set is not the desired result (returning it all is only desirable if your UI implements something like infinite scrolling).

Apollo Client 3 - how to properly implement optimistic response?

I have some queries and mutations, and one mutation updates quite a big entity, so I want to add an optimistic response after the mutate function is fired. The thing is, even though I pass optimisticResponse the full data that will also be returned when the mutation completes, it does not add it to the cache. It seems the data is only refreshed when the mutation response is ready, since with or without the optimistic response the UI updates at the same time, so I assume the optimistic response does not work.
Some code examples I have:
Mutation:
mutation UpdateList($id: ID, $data: ListData) {
  updateList(id: $id, data: $data) {
    list_id
    name
  }
}
action
const [action] = useMutation(mutation_from_above)
// inside an async function body so await can be used
await action({
  variables: { id: 1, data: { name: 'secret name' } },
  optimisticResponse: {
    updateList: {
      __typename: 'List',
      list_id: 1,
      name: 'some updated name until data is back',
    },
  },
})
and of course I have configured the id field for the typename in the cache config like this:
cache: new InMemoryCache({
  typePolicies: {
    List: {
      keyFields: ['list_id'],
    },
  },
}),
and it looks pretty simple, but for me it does not work. I also checked on the API side, and the response from the mutation is the same one I pass to the optimisticResponse object. Is there some important point I'm missing as to why it is not working? Can someone explain what to do in order to get this to work?
Thanks, cheers!

Correct way to remove item from client side cache in apollo-client

I am using GraphQL with Apollo-Client in my React(Typescript) application with an in memory cache. The cache is updated on new items being added which works fine with no errors.
When items are removed, a string is returned from the GraphQL Apollo-Server backend stating the successful delete operation, which triggers the update function. That function reads the cache and then modifies it by filtering out the id of the item. This is performed using the mutation hook from Apollo-Client.
const [deleteBook] = useMutation<{ deleteBook: string }, DeleteBookProps>(DELETE_BOOK_MUTATION, {
  variables: { id },
  onError(error) {
    console.log(error);
  },
  update(proxy) {
    const bookCache = proxy.readQuery<{ getBooks: IBook[] }>({ query: GET_BOOKS_QUERY });
    if (bookCache) {
      proxy.writeQuery<IGetBooks>({
        query: GET_BOOKS_QUERY,
        data: { getBooks: bookCache.getBooks.filter((b) => b._id !== id) },
      });
    }
  },
});
The function works and the frontend is updated with the correct items in cache, however the following error is displayed in the console:
Cache data may be lost when replacing the getBooks field of a Query object.
To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.getBooks field, so InMemoryCache can safely merge these objects:
existing: [{"__ref":"Book:5f21280332de1d304485ae80"},{"__ref":"Book:5f212a1332de1d304485ae81"},{"__ref":"Book:5f212a6732de1d304485ae82"},{"__ref":"Book:5f212a9232de1d304485ae83"},{"__ref":"Book:5f21364832de1d304485ae84"},{"__ref":"Book:5f214e1932de1d304485ae85"},{"__ref":"Book:5f21595a32de1d304485ae88"},{"__ref":"Book:5f2166601f6a633ae482bae4"}]
incoming: [{"__ref":"Book:5f212a1332de1d304485ae81"},{"__ref":"Book:5f212a6732de1d304485ae82"},{"__ref":"Book:5f212a9232de1d304485ae83"},{"__ref":"Book:5f21364832de1d304485ae84"},{"__ref":"Book:5f214e1932de1d304485ae85"},{"__ref":"Book:5f21595a32de1d304485ae88"},{"__ref":"Book:5f2166601f6a633ae482bae4"}]
For more information about these options, please refer to the documentation:
* Ensuring entity objects have IDs: https://go.apollo.dev/c/generating-unique-identifiers
* Defining custom merge functions: https://go.apollo.dev/c/merging-non-normalized-objects
Is there a better way to update the cache so this error won't be received?
I too faced the exact same warning, and unfortunately didn't come up with a solution other than the one suggested here: https://go.apollo.dev/c/merging-non-normalized-objects
const client = new ApolloClient({
  ...
  cache: new InMemoryCache({
    typePolicies: {
      Query: {
        fields: {
          getBooks: {
            merge(existing, incoming) {
              return incoming;
            },
          },
        },
      },
    },
  }),
});
(I am not sure whether I wrote your fields and types correctly, though, so you might need to change this code a bit.)
Basically, the code above tells Apollo Client how to deal with merging data. In this case, I simply replace the old data with the new.
I wonder, though, if there's a better solution.
I've also faced the same problem. I've come across a GitHub thread that offers two alternative solutions here.
The first is evicting what's in your cache before calling cache.writeQuery:
cache.evict({
  // Often cache.evict will take an options.id property, but that's not necessary
  // when evicting from the ROOT_QUERY object, as we're doing here.
  fieldName: "notifications",
  // No need to trigger a broadcast here, since writeQuery will take care of that.
  broadcast: false,
});
In short this flushes your cache so your new data will be the new source of truth. There is no concern about losing your old data.
An alternative suggestion for the apollo-client v3 is posted further below in the same thread:
cache.modify({
  fields: {
    notifications(list, { readField }) {
      return list.filter((n) => readField('id', n) !== id)
    },
  },
})
This approach removes a lot of boilerplate: you don't need readQuery, evict, and writeQuery. The problem is that if you're running TypeScript you'll run into some implementation issues. Under the hood, the format used is the InMemoryCache format instead of the usual GraphQL data, so you'll be seeing Reference objects, types that aren't inferred, and other weird things.
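The filter inside cache.modify can be sketched on plain data to show what readField is doing (here the refs are plain objects and readField is a hypothetical lookup; the real Reference objects are opaque handles into the normalized cache):

```javascript
// Pure sketch of the cache.modify filter: drop the entry whose
// 'id' field, resolved via readField, matches the deleted id.
function removeById(list, id, readField) {
  return list.filter((ref) => readField('id', ref) !== id);
}

// Plain objects standing in for cache References.
const refs = [{ id: 'a' }, { id: 'b' }, { id: 'c' }];
// Mock readField with the same (fieldName, objectOrRef) call shape.
const readField = (field, obj) => obj[field];

const result = removeById(refs, 'b', readField);
// result: [{ id: 'a' }, { id: 'c' }]
```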

Mutation with OptimisticResponse doesn't include server response data in second Update

Similar to the question here, I have found that when using optimisticResponse and update for a mutation, the id set from the response of the server is wrong. Furthermore, the id actually gets set by running the optimistic function again.
In the mutation below, refetchQueries is commented out on purpose. I don't want to use that; I want to manage everything through the update only.
Also notice the optimisticResponse id has a "-" prepended to it to prove the optimistic function is run twice:
id: "-" + uuid(),
Mutation
graphql(MutationCreateChild, {
  options: {
    // refetchQueries: [{QueryAllChildren, variables: {limit: 1000}}],
    update: (proxy, { data: { createChild } }) => {
      const query = QueryAllChildren;
      const data = proxy.readQuery({ query });
      data.listChildren.items.push(createChild);
      proxy.writeQuery({ query, data });
      console.log("id: ", createChild.id);
    }
  },
  props: props => ({
    createChild: child => {
      return props.mutate({
        variables: child,
        optimisticResponse: () => ({
          createChild: {
            ...child,
            id: "-" + uuid(),
            __typename: "Child"
          }
        })
      });
    }
  })
})
The output from the console.log statement is:
id: -6c5c2a28-8bc1-49fe-92e1-2abade0d06ca
id: -9e0a1c9f-d9ca-4e72-88c2-064f7cc8684e
While the actual request in the chrome developer console looks like this:
{"data":{"createChild":{"id":"f5bd1c27-2a21-40c6-9da2-9ddc5f05fd40","__typename":"Child"}}}
Is this a bug or am I not accessing the id correctly in the update function?
It's a known issue, which has now been fixed. I imagine it'll get released to the npm registry soon.
https://github.com/awslabs/aws-mobile-appsync-sdk-js/pull/43
https://github.com/awslabs/aws-mobile-appsync-sdk-js/commit/d26ea1ca1a8253df11dea8f11c1749e7bad8ef05
Using your setup, I believe it is normal for the update function to be called twice, and you are correct that the real id from the server will only be there the second time. Apollo takes the object you return from optimisticResponse and passes it to the update function so your UI can immediately show changes without waiting for the server. When the server response comes back, the update function is called again from the same base state (i.e. the state without the optimistic result), where you can reapply the change with the correct value from the server.
As for why the second id you list with the '-' is not the same as the id you see in the chrome dev console, I am not sure. Are you sure that it was actually that request that matched up with that call to console.log?
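To illustrate the two calls, here is a toy model of that sequence: the update is first applied with the optimistic id, then Apollo rolls the cache back and applies it again with the server's id (plain arrays standing in for the cache; the ids are taken from the question's console output):

```javascript
// Plain-array model of the list the update function writes to.
let items = [];
const update = (createChild) => {
  items = [...items, createChild];
};

// First call: optimistic result with the temporary "-" id.
const snapshot = items;
update({ id: '-6c5c2a28-8bc1-49fe-92e1-2abade0d06ca', __typename: 'Child' });

// Server responds: Apollo discards the optimistic state (roll back to the
// snapshot) and calls update again with the real server-assigned id.
items = snapshot;
update({ id: 'f5bd1c27-2a21-40c6-9da2-9ddc5f05fd40', __typename: 'Child' });
// items now holds a single child carrying the server-assigned id.
```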