I have some queries and mutations, and one mutation updates a fairly large entity, so I want to add an optimistic response when the mutate function is fired. The problem is that even though I pass optimisticResponse the full data that will also be returned when the mutation completes, it does not get added to the cache. The data only seems to refresh when the real mutation response arrives: with or without an optimistic response, the UI updates at the same time, so I assume the optimistic response is not working.
Some code examples I have:
Mutation:
mutation UpdateList($id: ID, $data: ListData) {
  updateList(id: $id, data: $data) {
    list_id # 1
    name
  }
}
The action:
const [action] = useMutation(mutation_from_above)
// async body function so await can be used
await action({
  variables: { id: 1, data: { name: 'secret name' } },
  optimisticResponse: {
    updateList: {
      __typename: 'List',
      list_id: 1,
      name: 'some updated name until data is back'
    }
  }
})
and of course I have set the key field for the typename in my cache config, like this:
cache: new InMemoryCache({
  typePolicies: {
    List: {
      keyFields: ['list_id'],
    },
  },
}),
It looks pretty simple, but for me it does not work. I also checked on the API side, and the response from the mutation is the same as what I pass to the optimisticResponse object. Is there something important I'm missing as to why it is not working? Can someone explain what I need to do to get this working?
Thanks, cheers!
I have a GraphQL query set up using Redux Toolkit's "RTK Query" data fetching functionality. After a mutation related to this query, I want the data returned by the mutation to be added to the cache without calling the query against the server again. I used the thunk action creator upsertQueryData from the API slice utilities for this (reference documentation).
So far I was only able to overwrite the complete cached collection related to the query, but I did not find a way to just add one entry. Perhaps someone knows what I'm doing wrong?
The GraphQL query, which is working fine. It returns a collection of 'sites'.
endpoints: (builder) => ({
  getSites: builder.query({
    query: () => ({
      document: gql`
        query MyQuery {
          sites {
            id
            name
            description
          }
        }
      `,
    }),
  }),
...
The mutation, using upsertQueryData. This overwrites the whole collection of 'sites' in the cache instead of just adding one site. To be clear: when sending the mutation I don't have an id yet; it is returned by the server through the mutation callback.
createSite: builder.mutation({
  query: ({ name }) => ({
    document: gql`
      mutation createSite {
        createSite(
          name: "${name}"
          description: "The workspace where Peter works from home in Dordrecht"
        ) {
          site {
            id
            name
            description
          }
        }
      }
    `,
  }),
  async onQueryStarted({}, { dispatch, queryFulfilled }) {
    const { data } = await queryFulfilled;
    const newSiteEntry = data.createSite.site;
    dispatch(sites.util.upsertQueryData('getSites', newSiteEntry.id, newSiteEntry));
  },
}),
I expect it to add one data object to the sites cache entry instead of overwriting it, so you would get something like this in the cache:
sites: [
{id: '1', name: 'Existing site 1', description: 'description 1'},
{id: '2', name: 'Existing site 2', description: 'description 2'},
{id: '3', name: 'New site', description: 'new description'},
]
You generally have the wrong idea of the cache here. Since your getSites endpoint takes no argument and you probably only ever call useGetSitesQuery(), there is only ever one cache entry for it (called getSites(undefined)), and you want to update that existing cache entry with additional items.
upsertQueryData is for overwriting that whole cache entry with a new value - or, in your case, it creates completely unrelated cache entries that you will never read from - so it is not what you want to do here.
As a result, you want to use updateQueryData on that one existing cache entry instead:
dispatch(
api.util.updateQueryData('getSites', undefined, (draft) => {
draft.sites.push(newSiteEntry)
})
)
Keep in mind, though, that we generally recommend using providesTags/invalidatesTags to automatically refetch other endpoints instead of manually doing optimistic updates on them.
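For completeness, a minimal sketch of that tag-based approach (the 'Site' tag name is made up here for illustration; the query/mutation documents are the ones from the question):
// in createApi(...): tagTypes: ['Site'],
getSites: builder.query({
  query: () => ({ document: gql`...` }), // same getSites document as above
  providesTags: ['Site'],
}),
createSite: builder.mutation({
  query: ({ name }) => ({ document: gql`...` }), // same createSite document as above
  // Invalidating the 'Site' tag makes RTK Query refetch getSites automatically
  // after the mutation succeeds, so no manual cache update is needed.
  invalidatesTags: ['Site'],
}),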
I have become slightly lost with react-query. Essentially, I have a useQuery to fetch a user from my database. Their details are added to a form and they can update and submit.
The problem I have is that the update is done to a different database. The main database will be batch updated at a later point. As such, instead of refetching the initial data, I need to use setQueryData to update the cache version.
const queryClient = useQueryClient()
const { mutate } = useMutation(postUser, {
  onSuccess: async (response) => {
    console.log(response)
    queryClient.cancelQueries('user');
    const previousUser = queryClient.getQueryData('user');
    console.log(previousUser)
    queryClient.setQueryData('user', {
      ...previousUser,
      data: [
        previousUser.data,
        { '#status': 'true' },
      ],
    })
    return () => queryClient.setQueryData('user', previousUser)
  }
})
At the moment I have something like the above. So it calls postUser and gets a response. The response looks something like this:
data:
  data:
    user_uid: "12345"
    status: "true"
  message: "User added."
  status: 1
I then getQueryData in order to get the cache version of the data, which currently looks like this
data:
#userUuid: "12345"
#status: ""
message: "User found."
status: 1
So I need to update the cached version's #status to be "true". With what I have above, it seems to add a new entry into the cache:
data: Array(2)
  0: {#userUuid: "12345", #status: ""}
  1: {#status: "true"}
message: "User found."
status: 1
So how do I overwrite the existing one without adding a new row?
Thanks
This is not really react-query specific. In your setQueryData code, you set data to an array with two entries:
data: [
previousUser.data,
{ '#status': 'true' },
],
first entry = previousUser.data
second entry = { '#status': 'true' }
Overwriting would be something like this:
queryClient.setQueryData('user', {
...previousUser,
data: {
...previousUser.data,
'#status': 'true',
},
})
On another note, it seems like you're mixing the optimistic-update onMutate callback with the onSuccess callback. If you want to do optimistic updates, you'd implement the onMutate function in a similar way to what you've done above:
cancel outgoing queries
set data
return "rollback" function that can be called onError
This is basically the workflow found here in the docs.
If you implement onSuccess, you're updating the cache after the mutation was successful, which is also a legitimate, albeit different, use-case. Here, you don't need to return anything; it would be more similar to the "updates from mutation responses" example.
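As a rough sketch of the onMutate workflow described above (assuming the same 'user' query key, the queryClient from your code, and the react-query v3 API), it could look something like this:
const { mutate } = useMutation(postUser, {
  onMutate: async () => {
    // Cancel outgoing refetches so they don't overwrite the optimistic update
    await queryClient.cancelQueries('user');
    const previousUser = queryClient.getQueryData('user');
    // Optimistically flip the cached #status on the existing data object
    queryClient.setQueryData('user', {
      ...previousUser,
      data: { ...previousUser.data, '#status': 'true' },
    });
    // Return a context object so onError can roll back
    return { previousUser };
  },
  onError: (err, variables, context) => {
    queryClient.setQueryData('user', context.previousUser);
  },
  // The docs usually add an onSettled -> invalidateQueries step here, but since
  // your main database is only batch-updated later, the refetch can be skipped.
});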
I am using GraphQL with Apollo Client in my React (TypeScript) application with an in-memory cache. The cache is updated when new items are added, which works fine with no errors.
When an item is removed, a string is returned from the GraphQL Apollo Server backend confirming the successful delete operation. This triggers the update function, which reads the cache and then modifies it by filtering out the id of the deleted item. This is performed using the mutation hook from Apollo Client.
const [deleteBook] = useMutation<{ deleteBook: string }, DeleteBookProps>(DELETE_BOOK_MUTATION, {
  variables: { id },
  onError(error) {
    console.log(error);
  },
  update(proxy) {
    const bookCache = proxy.readQuery<{ getBooks: IBook[] }>({ query: GET_BOOKS_QUERY });
    if (bookCache) {
      proxy.writeQuery<IGetBooks>({
        query: GET_BOOKS_QUERY,
        data: { getBooks: bookCache.getBooks.filter((b) => b._id !== id) },
      });
    }
  },
});
The function works and the frontend is updated with the correct items in the cache; however, the following error is displayed in the console:
Cache data may be lost when replacing the getBooks field of a Query object.
To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.getBooks field, so InMemoryCache can safely merge these objects:
existing: [{"__ref":"Book:5f21280332de1d304485ae80"},{"__ref":"Book:5f212a1332de1d304485ae81"},{"__ref":"Book:5f212a6732de1d304485ae82"},{"__ref":"Book:5f212a9232de1d304485ae83"},{"__ref":"Book:5f21364832de1d304485ae84"},{"__ref":"Book:5f214e1932de1d304485ae85"},{"__ref":"Book:5f21595a32de1d304485ae88"},{"__ref":"Book:5f2166601f6a633ae482bae4"}]
incoming: [{"__ref":"Book:5f212a1332de1d304485ae81"},{"__ref":"Book:5f212a6732de1d304485ae82"},{"__ref":"Book:5f212a9232de1d304485ae83"},{"__ref":"Book:5f21364832de1d304485ae84"},{"__ref":"Book:5f214e1932de1d304485ae85"},{"__ref":"Book:5f21595a32de1d304485ae88"},{"__ref":"Book:5f2166601f6a633ae482bae4"}]
For more information about these options, please refer to the documentation:
* Ensuring entity objects have IDs: https://go.apollo.dev/c/generating-unique-identifiers
* Defining custom merge functions: https://go.apollo.dev/c/merging-non-normalized-objects
Is there a better way to update the cache so this error won't be received?
I too faced the exact same warning, and unfortunately didn't come up with a solution other than the one suggested here: https://go.apollo.dev/c/merging-non-normalized-objects
const client = new ApolloClient({
  // ...
  cache: new InMemoryCache({
    typePolicies: {
      Query: {
        fields: {
          getBooks: {
            merge(existing, incoming) {
              return incoming;
            },
          },
        },
      },
    },
  }),
});
(I am not sure whether I wrote your fields and types correctly though, so you might need to change this code a bit.)
Basically, the code above tells Apollo Client how to deal with mergeable data. In this case, I simply replace the old data with the new data.
I wonder, though, if there's a better solution.
I've also faced the same problem. I've come across a GitHub thread that offers two alternative solutions here.
The first is evicting what's in your cache before calling cache.writeQuery:
cache.evict({
  // Often cache.evict will take an options.id property, but that's not necessary
  // when evicting from the ROOT_QUERY object, as we're doing here.
  fieldName: "notifications",
  // No need to trigger a broadcast here, since writeQuery will take care of that.
  broadcast: false,
});
In short this flushes your cache so your new data will be the new source of truth. There is no concern about losing your old data.
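Adapted to the getBooks field from the original question, a rough (untested) sketch of that update function might look like this:
update(proxy) {
  const bookCache = proxy.readQuery<{ getBooks: IBook[] }>({ query: GET_BOOKS_QUERY });
  if (bookCache) {
    // Evict the stale getBooks field first so the following writeQuery does not
    // have to merge the new list with the existing references (the cause of the warning).
    proxy.evict({ fieldName: 'getBooks', broadcast: false });
    proxy.writeQuery<IGetBooks>({
      query: GET_BOOKS_QUERY,
      data: { getBooks: bookCache.getBooks.filter((b) => b._id !== id) },
    });
  }
}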
An alternative suggestion for the apollo-client v3 is posted further below in the same thread:
cache.modify({
  fields: {
    notifications(list, { readField }) {
      return list.filter((n) => readField('id', n) !== id)
    },
  },
})
This approach removes a lot of boilerplate, so you don't need to use readQuery, evict, and writeQuery. The problem is that if you're using TypeScript, you'll run into some implementation issues. Under the hood, the format used is the InMemoryCache format instead of the usual GraphQL data: you'll be seeing Reference objects, types that aren't inferred, and other weird things.
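For example, in TypeScript the field function receives Reference objects rather than the original entities. A sketch, assuming the same 'notifications' field and id variable as above:
import { Reference } from '@apollo/client';

cache.modify({
  fields: {
    // The cache passes refs like { __ref: 'Notification:1' }, so the id has to be
    // read through readField instead of being accessed directly on the object.
    notifications(existingRefs: readonly Reference[] = [], { readField }) {
      return existingRefs.filter((ref) => readField('id', ref) !== id);
    },
  },
});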
I am trying to update my cache after successfully executing a mutation. Here are my query and mutation:
export const Dojo_QUERY = gql`
  query Dojo($id: Int!){
    dojo(id: $id){
      id,
      name,
      logoUrl,
      location {
        id,
        city,
        country
      },
      members{
        id
      },
      disziplines{
        id,
        name
      }
    }
  }`;
export const addDiszipline_MUTATION = gql`
  mutation createDisziplin($input:DisziplineInput!,$dojoId:Int!){
    createDisziplin(input:$input,dojoId:$dojoId){
      disziplin{
        name,
        id
      }
    }
  }`;
and my mutation call:
const [createDisziplin] = useMutation(Constants.addDiszipline_MUTATION,
  {
    update(cache, { data: { createDisziplin } }) {
      console.log(cache)
      const { disziplines } = cache.readQuery({ query: Constants.Dojo_QUERY, variables: { id } });
      console.log(disziplines)
      cache.writeQuery({
        // ...some update logic (it crashes on the readQuery line above)
      });
    }
  }
);
When I execute this mutation I get the error:
Invariant Violation: "Can't find field dojo({"id":1}) on object {
"dojo({\"id\":\"1\"})": {
"type": "id",
"generated": false,
"id": "DojoType:1",
"typename": "DojoType"
}
}."
In my client cache I can see
data{data{DojoType {...WITH ALL DATA INSIDE APART FROM THE NEW DISZIPLINE}}
and
data{data{DisziplineType {THE NEW OBJECT}}
There seems to be a lot of confusion around the client cache all over the web. Somehow none of the proposed solutions helped, or made any sense to me. Any help would be greatly appreciated.
EDIT 1:
Maybe this can help?
ROOT_QUERY: {…}
  "dojo({\"id\":\"1\"})": {…}
    generated: false
    id: "DojoType:1"
    type: "id"
    typename: "DojoType"
Edit 2
I have taken Herku's advice and started using a fragment; however, it still does not quite work.
My updated code:
const [createDisziplin] = useMutation(Constants.addDiszipline_MUTATION,
  {
    update(cache, { data: { createDisziplin } }) {
      console.log(cache)
      const { dojo } = cache.readFragment({
        fragment: Constants.Diszilines_FRAGMENT,
        id: "DojoType:" + id.toString()
      });
      console.log(dojo)
    }
  }
);
with
export const Diszilines_FRAGMENT = gql`
  fragment currentDojo on Dojo {
    id,
    name,
    disziplines{
      id,
      name
    }
  }
`;
However, the result of console.log(dojo) is still undefined. Any advice?
So I think your actual error is that you have to supply the ID as a string: variables: { id: id.toString() }. You can see that these two lines are different:
dojo({\"id\":1})
dojo({\"id\":\"1\"})
But I would highly suggest using readFragment instead of readQuery and updating the dojo with the ID supplied. This should update the query as well, and all other occurrences of the dojo in all your queries. You can find documentation on readFragment here.
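A rough sketch of what that could look like in the update function (this is an assumption based on the fragment and IDs from the question; note that readFragment returns the fragment data itself rather than an object with a dojo key, and the fragment's type condition may need to match the cached typename, DojoType rather than Dojo):
update(cache, { data: { createDisziplin } }) {
  const fragmentId = "DojoType:" + id.toString();
  // readFragment returns the dojo data directly, or null if it is not cached yet
  const dojo = cache.readFragment({
    fragment: Constants.Diszilines_FRAGMENT,
    id: fragmentId,
  });
  if (dojo) {
    cache.writeFragment({
      fragment: Constants.Diszilines_FRAGMENT,
      id: fragmentId,
      data: {
        ...dojo,
        disziplines: [...dojo.disziplines, createDisziplin.disziplin],
      },
    });
  }
}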
Another trick is to simply return the whole dojo in the response of the mutation. I would say people should be less afraid of that and not do too many manual cache updates, because cache updates are implicit behaviour of your API that is nowhere in your type system. The fact that the new disziplin can be found in the disziplines field is now encoded in your frontend. Imagine you want to add another step where new disziplins have to be approved before they end up in there: if the mutation returns the whole dojo, a simple backend change would do the job and your clients don't have to be aware of that behaviour.
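For illustration, assuming the backend adds (or already exposes) a dojo field on the createDisziplin payload, the mutation document could then look like this, and Apollo would update the normalized dojo automatically:
export const addDiszipline_MUTATION = gql`
  mutation createDisziplin($input: DisziplineInput!, $dojoId: Int!) {
    createDisziplin(input: $input, dojoId: $dojoId) {
      dojo {
        id
        disziplines {
          id
          name
        }
      }
    }
  }`;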
Similar to the question here, I have found that when using optimisticResponse and update for a mutation, the id set from the response of the server is wrong. Furthermore, the id actually gets set by running the optimistic function again.
In the mutation below, refetchQueries is commented out on purpose. I don't want to use that; I want to manage everything through update only.
Also notice the optimisticResponse id has a "-" prepended to it to prove the optimistic function is run twice:
id: "-" _ uuid(),
Mutation
graphql(MutationCreateChild, {
  options: {
    // refetchQueries: [{QueryAllChildren, variables: {limit: 1000}}],
    update: (proxy, {data: {createChild}}) => {
      const query = QueryAllChildren;
      const data = proxy.readQuery({query});
      data.listChildren.items.push(createChild);
      proxy.writeQuery({query, data});
      console.log("id: ", createChild.id);
    }
  },
  props: props => ({
    createChild: child => {
      return props.mutate({
        variables: child,
        optimisticResponse: () => ({
          createChild: {
            ...child,
            id: "-" + uuid(),
            __typename: "Child"
          }
        })
      });
    }
  })
})
The output from the console.log statement is:
id: -6c5c2a28-8bc1-49fe-92e1-2abade0d06ca
id: -9e0a1c9f-d9ca-4e72-88c2-064f7cc8684e
While the actual request in the chrome developer console looks like this:
{"data":{"createChild":{"id":"f5bd1c27-2a21-40c6-9da2-9ddc5f05fd40",__typename":"Child"}}}
Is this a bug or am I not accessing the id correctly in the update function?
It's a known issue, which has now been fixed. I imagine it'll get released to the npm registry soon.
https://github.com/awslabs/aws-mobile-appsync-sdk-js/pull/43
https://github.com/awslabs/aws-mobile-appsync-sdk-js/commit/d26ea1ca1a8253df11dea8f11c1749e7bad8ef05
Using your setup, I believe it is normal for the update function to be called twice and you are correct that the real id from the server will only be there the second time. Apollo takes the object you return from optimisticResponse and passes it to the update function so your UI can immediately show changes without waiting for the server. When the server response comes back, the update function is called again with the same state (i.e. the state without the optimistic result) where you can reapply the change with the correct value from the server.
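To make that concrete, here is a rough sketch of the same update function written without mutating the result of readQuery (based on the QueryAllChildren shape from the question):
update: (proxy, { data: { createChild } }) => {
  const query = QueryAllChildren;
  const previous = proxy.readQuery({ query });
  // First call: createChild.id is the optimistic "-"-prefixed uuid.
  // Second call: the optimistic layer has been rolled back and createChild.id
  // should be the real id returned by the server.
  proxy.writeQuery({
    query,
    data: {
      ...previous,
      listChildren: {
        ...previous.listChildren,
        items: [...previous.listChildren.items, createChild],
      },
    },
  });
}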
As for why the second id you list with the '-' is not the same as the id you see in the chrome dev console, I am not sure. Are you sure that it was actually that request that matched up with that call to console.log?