I'm executing this mutation in my NewBook component:
const [addBook] = useMutation(ADD_BOOK, {
  update: (cache, response) => {
    cache.updateQuery({ query: ALL_BOOKS }, ({ allBooks }) => {
      return { allBooks: allBooks.concat(response.data.addBook) };
    });
  },
  refetchQueries: [{ query: ALL_AUTHORS }, { query: ALL_GENRES }],
  awaitRefetchQueries: true,
});
Instead of having to refetch those two queries, I'd like to update them in the cache the same way I do for ALL_BOOKS, but I couldn't find any example in the docs. Does anyone know a way to accomplish that?
Thank you.
What you need to do is make multiple cache updates based on response data.
Once you add your new book to the ALL_BOOKS query, the next step is to read the existing authors from the cache.
cache.updateQuery({ query: ALL_BOOKS }, ({ allBooks }) => {
  return { allBooks: allBooks.concat(response.data.addBook) };
});
// Get all authors already in the cache
const existingAuthors = cache.readQuery({
  query: ALL_AUTHORS,
  // variables: {}
});
// If we never queried authors, do nothing, as the next fetch will get the
// updated authors anyway. Depending on how you fetch data, this might be a
// problem in some cases; if it is, rework this to write newAuthor as the
// whole array, like allAuthors: [newAuthor]
if (!existingAuthors?.allAuthors?.length) {
  return null;
}
The next thing is that we need to compare the new book's author with existing authors to see if a new author was added.
// continued
const hasAuthor = existingAuthors.allAuthors.find(
  author => author.id === response.data.addBook.author.id
);
// Double check response.data.addBook.author.id - I don't know exactly what
// your mutation returns
// If the author already exists, do nothing
if (hasAuthor) {
  return null;
}
// Get the new author. Make sure its shape matches the one you fetch with ALL_AUTHORS.
const newAuthor = {
  ...response.data.addBook.author // Check this
};
cache.writeQuery({
  query: ALL_AUTHORS,
  // variables: {}
  data: {
    allAuthors: [newAuthor, ...existingAuthors.allAuthors]
  },
});
Then continue the same way with ALL_GENRES; a sketch follows below.
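Here is a minimal sketch of the same pattern for ALL_GENRES, assuming that query returns a plain string array under allGenres and that the returned book carries a genres field (both names are guesses - adjust them to your schema):

// Read whatever genres are already cached (null if we never queried them)
const existingGenres = cache.readQuery({ query: ALL_GENRES });
if (existingGenres?.allGenres?.length) {
  // Keep only the new book's genres that aren't cached yet
  const newGenres = response.data.addBook.genres.filter(
    genre => !existingGenres.allGenres.includes(genre)
  );
  if (newGenres.length) {
    cache.writeQuery({
      query: ALL_GENRES,
      data: { allGenres: [...existingGenres.allGenres, ...newGenres] },
    });
  }
}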
Note:
If you called ALL_GENRES or ALL_BOOKS with variables, you MUST put the SAME variables in the write query and read query. Otherwise Apollo won't know what to update.
Double check whether you are comparing numbers or strings for author and genre ids.
Double check all of the variables I added; they might be named differently at your end.
Use console.log to check incoming variables.
You can probably do this in fewer lines; there are multiple ways to update the cache.
If it doesn't work, console.log the cache after the update and see what exactly Apollo did with it (it could be missing data, or wrong variables).
Add more checks to cover cases like: response.data returned null, authors already fetched but there are none, etc.
I'm implementing a pretty advanced table (using React-Table) for a large, complex set of data. I started by following Apollo's guide to implementing offset-based pagination, and I got sorting to work as well. What I'm stuck on is combining that with server-side filtering.
My definition of InMemoryCache looks like this - I'm querying a field Targets:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        Targets: {
          ...offsetLimitPagination(),
          read(existing, { args }): any {
            if (args && args.limit !== undefined && args.offset !== undefined) {
              return existing && existing.slice(args.offset, args.offset + args.limit);
            }
          },
        },
      },
    },
  },
});
which is pretty much what the guide told me to do for pagination. My component queries the backend:
const { networkStatus, error, data, fetchMore, refetch } = useQuery<GetTargetsQuery, GetTargetsQueryVariables>(GET_TARGETS, {
  ...
  notifyOnNetworkStatusChange: true,
  variables: {
    limit: tableState.pageSize,
    offset: tableState.pageNumber * tableState.pageSize,
    orderBy: tableState.orderBy,
    where: {
      _and: [initialFilters, queryFilters],
    },
  },
});
The issue is, when I modify queryFilters and the data gets refetched, I see the correct data in the Network tab of the browser, but my component still reads the old data from the cache. It seems like the offsetLimitPagination helper is not exactly crafted for incorporating filtering(?).
I can't use React-Table's built-in filtering, as it only operates on the data that has already been queried (which in my case is only part of the entire set). How do I modify my InMemoryCache to overwrite the data in the cache when new filters are set? Or is there a better way to tackle this, or a better question to ask to get this done?
To clarify keyArgs: you specify the keys that distinguish one cached list from another.
In your case, you have the variables limit, offset, orderBy & where.
When limit & offset change, you do not want Apollo to create a separate cache entry, so you leave those out of the keyArgs.
The keyArgs you want to watch are orderBy & where. From my understanding, the keyArgs are what you base your cache entries on, so if anything in orderBy & where changes, you want Apollo to treat it as a separate dataset.
Also, offsetLimitPagination accepts keyArgs as an argument.
For nested variables like where, you can configure keyArgs with a nested value so Apollo has an idea of what is inside those values.
The nested array syntax applies to the previous argument value (where).
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        Targets: {
          ...offsetLimitPagination(["orderBy", "where", ["_and"]]),
          read(existing, { args }): any {
            if (args && args.limit !== undefined && args.offset !== undefined) {
              return existing && existing.slice(args.offset, args.offset + args.limit);
            }
          },
        },
      },
    },
  },
});
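If you want to confirm what Apollo is doing, you can dump the cache while debugging; with the keyArgs above, each distinct orderBy/where combination should show up as its own Targets field under ROOT_QUERY. A small debugging sketch, using the cache instance defined above:

// Inspect the normalized cache contents (extract() is public API)
console.log(cache.extract().ROOT_QUERY);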
I have new data that I want to insert into my blog array (my collection looks like the one shown below):
{
  _id: 0,
  // other vars here...,
  blog: [
    {
      _id: 0,
      title: 'Article 1'
    },
    // add new article here
  ]
}
Right now I'm able to add a new article to my blog array with the code shown below:
const query = {}
const update = { $push: { blog: inputData } }
const options = { upsert: true }
All.updateOne(query, update, options)
  .then(result => {
    // the MongoDB documentation says I can get the newly inserted data's ID via result.upsertedId
    res.redirect(`/article/${result.upsertedId}`)
  })
  .catch(err => res.redirect(`/`))
But I can't seem to get the ID of the newly inserted data: result.upsertedId returns undefined. According to the code above, I need the ID in order to redirect the user to the new article page for that ID.
Here's the documentation saying that it will return an upsertedId: updateOne.
This is the result I get when I console.log it (there's no upsertedId anywhere in it).
Print your result to confirm an ID is being received; your result is likely something similar to this:
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }
You could try using collection.findOneAndUpdate and retrieving the updated document (and its _id) from that result.
You need to use findOneAndUpdate and pass new: true in the options to get the updated document back, i.e.
const query = {}
const update = { $push: { blog: inputData } }
const options = { upsert: true, new: true }
All.findOneAndUpdate(query, update, options)
  .then(result => {
    // result is your upserted/updated document
    console.log("Upserted document : ", result); // try getting your upserted _id from this result
    res.redirect(`/article/${result._id}`) // change result.upsertedId to result._id or result.id, whatever you get in your result
  })
  .catch(err => res.redirect(`/`))
It is not applicable for nested documents.
What you are doing is an update operation, not an upsert.
The docs say:
The _id value of the document inserted by an upsert operation. This value is only present when the upsert option is enabled and the update query does not match any documents.
Your empty query matches the existing document, so it results in an update operation. That's why you only get the modified counts (nModified / modifiedCount) and no upsertedId.
One option is to use findOneAndUpdate with the returnNewDocument option set to true.
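A rough sketch of that suggestion, assuming All is a Mongoose model (where the equivalent of returnNewDocument is new: true) and that your blog subdocuments get their own _id (Mongoose adds one by default). Note that the returned document's own _id is the parent's, so the new article's id has to be read off the array:

All.findOneAndUpdate(
  {},                              // query: the single parent document
  { $push: { blog: inputData } },  // append the new article
  { upsert: true, new: true }      // new: true resolves to the modified doc
)
  .then(doc => {
    // The article that was just pushed is the last element of the array
    const newArticle = doc.blog[doc.blog.length - 1];
    res.redirect(`/article/${newArticle._id}`);
  })
  .catch(() => res.redirect('/'));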
I'm trying to use react-query's useInfiniteQuery for infinite scrolling with a basic API, such as the cocktaildb or pokeapi.
useInfiniteQuery takes two parameters: a unique key for the cache and a function to run.
It returns a data object and a fetchMore function. If fetchMore is called - through an intersection observer, for example - useInfiniteQuery calls its parameter function again, but with an updated payload thanks to the getFetchMore callback.
In the official documentation, getFetchMore automatically receives two arguments: the last group of values returned, and all the groups returned so far.
Based on this, their demo takes the previous page number supplied by getFetchMore and performs a new call with an updated page number.
But how can I do the same kind of thing with a basic API that only returns JSON?
Here is the official demo code:
function Projects() {
  const fetchProjects = (key, cursor = 0) =>
    fetch('/api/projects?cursor=' + cursor).then(res => res.json())

  const {
    status,
    data,
    isFetching,
    isFetchingMore,
    fetchMore,
    canFetchMore,
  } = useInfiniteQuery('projects', fetchProjects, {
    getFetchMore: (lastGroup, allGroups) => lastGroup.nextCursor,
  })
Infinite scrolling relies on pagination, so to use this you need to track what page you are on and whether there are more pages. If you're working with a list of elements, you can check whether fewer elements were returned by the last query. For example, if each fetch returns 5 new items and the last fetch returned only 4, you've probably reached the end of the list.
So in that case you'd check if lastGroup.length < 5, and if that's true, return false (stop fetching more pages).
If there are more pages to fetch, you need to return the number of the next page from getFetchMore, so the query uses it as a parameter. One way of knowing what page you are on is to count how many arrays exist inside the data object, since useInfiniteQuery places each new page into a separate array inside data. So if the data array has length 1, you have fetched only page 1, in which case you'd want to return the number 2.
final result:
getFetchMore: (lastGroup, allGroups) => {
  const morePagesExist = lastGroup?.length === 5
  if (!morePagesExist) return false;
  return allGroups.length + 1
}
Now you just need to call fetchMore to fetch more pages, for example from an intersection observer as sketched below.
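A minimal sketch of the intersection-observer wiring, assuming loadMoreRef is attached to a sentinel element rendered at the bottom of the list (the names here are illustrative):

const loadMoreRef = React.useRef(null);

React.useEffect(() => {
  // Fire fetchMore whenever the sentinel scrolls into view
  const observer = new IntersectionObserver(([entry]) => {
    if (entry.isIntersecting && canFetchMore && !isFetchingMore) fetchMore();
  });
  if (loadMoreRef.current) observer.observe(loadMoreRef.current);
  return () => observer.disconnect();
}, [canFetchMore, isFetchingMore, fetchMore]);

// ...and at the end of the rendered list:
// <div ref={loadMoreRef} />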
The steps are:
Waiting for useInfiniteQuery to request the first group of data by default.
Returning the information for the next query in getNextPageParam.
Calling the fetchNextPage function.
Reference https://react-query.tanstack.com/guides/infinite-queries
Example 1 with rest api
const fetchProjects = ({ pageParam = 0 }) =>
  fetch('/api/projects?cursor=' + pageParam).then(res => res.json())

const {
  data,
  isLoading,
  fetchNextPage,
  hasNextPage,
} = useInfiniteQuery('projects', fetchProjects, {
  getNextPageParam: (lastPage) => {
    // lastPage's shape depends on your api response; below is pseudocode
    if (lastPage.hasNextPage) {
      return lastPage.nextCursor;
    }
    return undefined;
  },
})
Example 2 with graphql query (pseudocode)
const {
  data,
  fetchNextPage,
  isLoading,
} = useInfiniteQuery(
  ['GetProjectsKeyQuery'],
  async ({ pageParam }) => {
    return graphqlClient.request(GetProjectsQuery, {
      isPublic: true, // some condition/variables if you have them
      first: NBR_OF_ELEMENTS_TO_FETCH, // 10 to start with
      cursor: pageParam,
    });
  },
  {
    getNextPageParam: (lastPage) => {
      // pseudocode, lastPage depends on your api response
      if (lastPage.projects.pageInfo.hasNextPage) {
        return lastPage.projects.pageInfo.endCursor;
      }
      return undefined;
    },
  },
);
react-query will create data, which contains an array called pages. Every time you call the api with a new cursor/page/offset, it will add a new page to pages. You can flatMap data, e.g.:
const projects = data.pages.flatMap((p) => p.projects.nodes)
Call fetchNextPage somewhere in your code when you want to call the api again for the next batch, e.g.:
const handleEndReached = () => {
  fetchNextPage();
};
GraphQL example query (note the after: $cursor argument added to the query):
query GetProjectsQuery($isPublic: Boolean, $first: Int, $cursor: Cursor) {
  projects(
    condition: { isPublic: $isPublic }
    first: $first
    after: $cursor
  ) ...
So, I have a users state which gets filled with an array of objects from the backend. Say I want to update the details of a specific user object in that array and then update the current state with the updated details. How do I go about doing that?
What I currently do is force a refetch from the server once the update request succeeds, but I was wondering if there's a better way to do it without refetching from the server again.
For example, in the code below, I want to update PersonTwo's age.
state = {
  users: [
    {
      name: 'PersonOne',
      age: 1
    },
    {
      name: 'PersonTwo',
      age: 1
    }
  ]
}
Let's say you have an id field in your objects. You can use the map function to return a new array and save it to your state.
updateUser(userId) {
  const updatedUsers = this.state.users.map((user) => {
    if (userId !== user.id) {
      return user;
    }
    return {
      ...user,
      // write some updated info here, for example:
      age: 40
    };
  });
  this.setState({
    users: updatedUsers
  });
}
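For example, you could call it from a click handler (assuming each user object carries an id, as noted above):

<button onClick={() => this.updateUser(2)}>Update PersonTwo</button>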
The best way is to do what you are doing right now: first send the values to the server, then fetch the latest data from the database. If you update the view before a successful update on the server, you might end up showing data that does not actually exist; there is a possibility the server rejects the change even though you already updated the view on the user's request. It is not a good thing, but if you still want to do it, follow the steps below (a consolidated sketch follows step 2):
Step 1: find the array index i at which the user's data is being updated, then copy the array and replace that element:
const newArray = Array.from(this.state.oldArray);
newArray[i] = 'test';
Step 2: assign this new array to the old one:
this.setState({oldArray: newArray})
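Put together, a sketch of the whole update for the original question's shape (assuming the user objects gain an id field to find them by):

// Find the element, copy the array, replace the element with an updated copy
const i = this.state.users.findIndex(u => u.id === userId);
const newUsers = Array.from(this.state.users);
newUsers[i] = { ...newUsers[i], age: 40 };
this.setState({ users: newUsers });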
You can use functional setState as a success callback of the POST/PUT call.
this.setState(prevState => ({
  users: ... // operation using prevState.users
}))
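For instance, a concrete version of that callback for this case (userId and updatedAge are illustrative names):

this.setState(prevState => ({
  users: prevState.users.map(user =>
    user.id === userId ? { ...user, age: updatedAge } : user
  )
}));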
I have the following object, from which I want to remove one comment.
msgComments = {
  comments: [
    {
      comment: "2",
      id: "0b363677-a291-4e5c-8269-b7d760394939",
      postId: "e93863eb-aa62-452d-bf38-5514d72aff39"
    },
    {
      comment: "1",
      id: "e88f009e-713d-4748-b8e8-69d79698f072",
      postId: "e93863eb-aa62-452d-bf38-5514d72aff39"
    }
  ],
  email: "test#email.com",
  id: "e93863eb-aa62-452d-bf38-5514d72aff39",
  post: "test",
  title: "test"
}
The action creator hits the api delete function with the commentId:
// DELETE COMMENT FROM POST
export function deleteComment(commentId) {
  return function(dispatch) {
    axios.post(`${API_URL}/datacommentdelete`, {
      commentId
    }, {
      headers: { authorization: localStorage.getItem('token') }
    })
    .then(result => {
      dispatch({
        type: DELETE_COMMENT,
        payload: commentId
      });
    });
  }
}
My api deletes the comment and I send the comment id to my reducer; everything works fine up to this point - the api works and the comment is deleted. The problem is updating the state in the reducer. After much trial and error, at the moment I am trying this:
case DELETE_COMMENT:
  console.log('State In', state.msgComments);
  const msgCommentsOne = state.msgComments;
  const msgCommentsTwo = state.msgComments;
  const deleteIndexComment = state.msgComments.data.comments
    .findIndex(elem => elem.id === action.payload);
  const newComments = [
    ...msgCommentsTwo.data.comments.slice(0, deleteIndexComment),
    ...msgCommentsTwo.data.comments.slice(deleteIndexComment + 1)
  ];
  msgCommentsOne.data.comments = newComments;
  console.log('State Out', msgCommentsOne);
  return { ...state, msgComments: msgCommentsOne };
Both state in AND state out log the same object, which has the appropriate comment deleted - which I find puzzling.
Also, the component is not updating (when I refresh, the comment is gone, as a new api call returns the updated post).
Everything else seems to work fine; the problem seems to be in the reducer.
I have read the other relevant posts on immutability and I am still unable to work out a solution. I have also found the Immutable.js library, but before I learn how to use that I wanted to find a solution (perhaps the hard way, but I want to understand how this works!).
First working solution
case DELETE_COMMENT:
  const deleteIndexComment = state.msgComments.data.comments
    .findIndex(elem => elem.id === action.payload);
  return {
    ...state,
    msgComments: {
      data: {
        email: state.msgComments.data.email,
        post: state.msgComments.data.post,
        title: state.msgComments.data.title,
        id: state.msgComments.data.id,
        comments: [
          ...state.msgComments.data.comments.slice(0, deleteIndexComment),
          ...state.msgComments.data.comments.slice(deleteIndexComment + 1)
        ]
      }
    }
  };
Edit:
Second working solution
I have found a second, far more terse solution; comments welcome:
case DELETE_COMMENT:
  const deleteIndexComment = state.msgComments.data.comments
    .findIndex(elem => elem.id === action.payload);
  return {
    ...state,
    msgComments: {
      data: {
        ...state.msgComments.data,
        comments: [
          ...state.msgComments.data.comments.slice(0, deleteIndexComment),
          ...state.msgComments.data.comments.slice(deleteIndexComment + 1)
        ]
      }
    }
  };
That code appears to be directly mutating the state object. You've created a new array that has the deleted item filtered out, but you're then directly assigning the new array to msgCommentsOne.data.comments. The data field is the same one that was already in the state, so you've directly modified it. To correctly update data immutably, you need to create a new comments array, a new data object containing the comments, a new msgComments object containing the data, and a new state object containing msgComments. All the way up the chain :)
The Redux FAQ does give a bit more information on this topic, at http://redux.js.org/docs/FAQ.html#react-not-rerendering.
I have a number of links to articles talking about managing plain Javascript data immutably, over at https://github.com/markerikson/react-redux-links/blob/master/immutable-data.md . Also, there's a variety of utility libraries that can help abstract the process of doing these nested updates immutably, which I have listed at https://github.com/markerikson/redux-ecosystem-links/blob/master/immutable-data.md.
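As a footnote to the explanation above, Array.prototype.filter gives an even terser equivalent of the two working solutions, since it removes the item and returns a new array in one step, while the spreads recreate each object up the chain:

case DELETE_COMMENT:
  return {
    ...state,
    msgComments: {
      ...state.msgComments,
      data: {
        ...state.msgComments.data,
        comments: state.msgComments.data.comments.filter(
          comment => comment.id !== action.payload
        )
      }
    }
  };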