I'm implementing a pretty advanced table (using React-Table) for a large, complex set of data. I started by following Apollo's guide to implementing offset-based pagination, and I got sorting to work as well. What I'm stuck on is combining that with server-side filtering.
My definition of InMemoryCache looks like this - I'm querying a field Targets:
import { InMemoryCache } from '@apollo/client';
import { offsetLimitPagination } from '@apollo/client/utilities';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        Targets: {
          ...offsetLimitPagination(),
          read(existing, { args }): any {
            if (args && args.limit !== undefined && args.offset !== undefined) {
              return existing && existing.slice(args.offset, args.offset + args.limit);
            }
          },
        },
      },
    },
  },
});
which is pretty much what the guide told me to do for pagination. My component queries the backend:
const { networkStatus, error, data, fetchMore, refetch } = useQuery<GetTargetsQuery, GetTargetsQueryVariables>(GET_TARGETS, {
  ...
  notifyOnNetworkStatusChange: true,
  variables: {
    limit: tableState.pageSize,
    offset: tableState.pageNumber * tableState.pageSize,
    orderBy: tableState.orderBy,
    where: {
      _and: [initialFilters, queryFilters],
    },
  },
});
The issue is, when I modify the queryFilters and the data gets refetched, I see the correct data in the Network tab of the browser, but my component still reads the old data from the cache. It seems like the offsetLimitPagination helper is not exactly crafted for incorporating filtering(?).
I can't use React-Table's built-in filtering, as it only operates on the data that has already been queried (which in my case is only part of the entire set). How do I modify my InMemoryCache so that the cached data is overwritten when new filters are set? Or is there a better way to tackle this, or a better question to ask to get this done?
To clarify keyArgs: you specify the arguments that separate one cache entry from another.
In your case, you have the variables limit, offset, orderBy & where.
When limit & offset change, you do not want Apollo to create a separate cache entry, so you leave those out of the keyArgs.
The keyArgs you want to watch are orderBy & where. From my understanding, the keyArgs are what you base your cache entries on, so if anything in orderBy & where changes, you want Apollo to treat the result as a separate dataset.
Also, offsetLimitPagination accepts keyArgs as an argument.
For nested variables like where, you can configure keyArgs with a nested value so Apollo knows which of the inner fields matter.
The nested array syntax applies to the argument that precedes it (where).
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        Targets: {
          ...offsetLimitPagination(["orderBy", "where", ["_and"]]),
          read(existing, { args }): any {
            if (args && args.limit !== undefined && args.offset !== undefined) {
              return existing && existing.slice(args.offset, args.offset + args.limit);
            }
          },
        },
      },
    },
  },
});
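With those keyArgs, each distinct orderBy/where combination is stored as its own list in the cache, so a refetch with new filters no longer merges into (or reads from) the list cached for the old filters. Roughly, the normalized cache keeps one field entry per combination; the key format below is only illustrative:

// Illustrative ROOT_QUERY field keys after two differently-filtered fetches:
// Targets:{"orderBy":...,"where":{"_and":[...]}}       -> [ ...first filtered list ]
// Targets:{"orderBy":...,"where":{"_and":[...other]}}  -> [ ...second filtered list ]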
I'm executing this mutation in my NewBook component:
const [addBook] = useMutation(ADD_BOOK, {
  update: (cache, response) => {
    cache.updateQuery({ query: ALL_BOOKS }, ({ allBooks }) => {
      return { allBooks: allBooks.concat(response.data.addBook) };
    });
  },
  refetchQueries: [{ query: ALL_AUTHORS }, { query: ALL_GENRES }],
  awaitRefetchQueries: true,
});
Instead of having to refetch those two queries, I'd like to update them the way I update ALL_BOOKS - but I could not find any example in the docs. Does anyone know a way to accomplish that?
Thank you.
What you need to do is make multiple cache updates based on the response data.
Once you have added the new book to the ALL_BOOKS query, the next step is to read all authors from the cache.
cache.updateQuery({ query: ALL_BOOKS }, ({ allBooks }) => {
  return { allBooks: allBooks.concat(response.data.addBook) };
});

//Get all authors from the cache
const existingAuthors = cache.readQuery({
  query: ALL_AUTHORS,
  //variables: {}
});

//If we never fetched authors, do nothing, as the next fetch will return updated authors.
//This might be a problem in some cases, depending on how you fetch data. If it is,
//rework this to write a list containing just the new author, like allAuthors: [newAuthor]
if (!existingAuthors?.allAuthors?.length) {
  return null;
}
Next, we need to compare the new book's author with the existing authors to see whether a new author was added.
//continued
const hasAuthor = existingAuthors.allAuthors.find(
  (author) => author.id === response.data.addBook.author.id
);
//Double check response.data.addBook.author.id - I don't know exactly what your mutation returns

//If the author already exists, do nothing
if (hasAuthor) {
  return null;
}

//Build the new author. Make sure its shape matches what ALL_AUTHORS returns.
const newAuthor = {
  ...response.data.addBook.author, //Check this
};

cache.writeQuery({
  query: ALL_AUTHORS,
  //variables: {}
  data: {
    allAuthors: [newAuthor, ...existingAuthors.allAuthors],
  },
});
Then continue the same way with ALL_GENRES, as sketched below.
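For completeness, the ALL_GENRES step could look like the following sketch. It assumes the new book carries a genres array of strings and that ALL_GENRES returns { allGenres: [...] } - both are guesses, so adjust them to your schema:

const existingGenres = cache.readQuery({ query: ALL_GENRES });
if (existingGenres?.allGenres?.length) {
  //Keep only genres the cache doesn't already know about
  const newGenres = response.data.addBook.genres.filter(
    (genre) => !existingGenres.allGenres.includes(genre)
  );
  if (newGenres.length) {
    cache.writeQuery({
      query: ALL_GENRES,
      data: { allGenres: [...existingGenres.allGenres, ...newGenres] },
    });
  }
}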
Note:
If you called ALL_GENRES or ALL_BOOKS with variables, you MUST put the SAME variables in the write query and the read query, otherwise Apollo won't know what to update (see the sketch after these notes).
Double check whether you are comparing numbers or strings for author and genre ids.
Double check all of the variables I added; they might be named differently on your end.
Use console.log to check incoming variables.
You can probably do this in fewer lines; there are multiple ways to update the cache.
If it doesn't work, console.log the cache after the update and see what exactly apollo did with it (it could be missing data, or wrong variables).
Add more checks to handle cases like: response.data returned null, authors already fetched but there are none, etc...
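For instance, if ALL_BOOKS was originally fetched with variables, the update must echo the exact same ones (a sketch; the genre variable is hypothetical):

cache.updateQuery(
  { query: ALL_BOOKS, variables: { genre: 'refactoring' } }, //must match the variables of the original fetch
  ({ allBooks }) => ({ allBooks: allBooks.concat(response.data.addBook) })
);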
I'm moving from plain redux thunks to RTK Query and I've run into a problem. After the first query, the response contains a field with a value (kind of a token or cursor) and I need to use it as an argument in the next fetch of the same query, and so on. With plain thunks I just read the token value from the store with a selector and use it as an argument.
Something like this, but of course it causes an error:
const { data } = useUsersQuery({
  someToken: data.someToken, // error: `data` can't be used in the call that produces it
});
How can I achieve it?
UPDATE
I solved it with the generated lazy query hook:
const [trigger, { data }] = useLazyUsersQuery();

trigger({
  someToken: data?.someToken,
});
It looks ugly, though.
You can just use skipToken or the skip option (firstResult below stands for the result of whichever query produced the token):

const { data } = useUsersQuery({
  someToken: firstResult.data?.someToken,
}, {
  skip: !firstResult.isSuccess
});

or

import { skipToken } from '@reduxjs/toolkit/query/react';

const { data } = useUsersQuery(
  firstResult.isSuccess
    ? { someToken: firstResult.data.someToken }
    : skipToken
);
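Putting it together, the chain might look like this (a sketch; useFirstQuery is a placeholder for whichever endpoint produces the token):

// First query returns the token/cursor
const firstResult = useFirstQuery();

// Second query stays idle until the first one has succeeded
const { data } = useUsersQuery(
  firstResult.isSuccess
    ? { someToken: firstResult.data.someToken }
    : skipToken
);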
I'm trying to use react-query's useInfiniteQuery for infinite scrolling with a basic API, such as the cocktaildb or pokeapi.
useInfiniteQuery takes two parameters: a unique key for the cache and a function it has to run.
It returns a data object and also a fetchMore function. If fetchMore is called - through an intersection observer, for example - useInfiniteQuery calls its parameter function again, but with an updated payload thanks to a native callback, getFetchMore().
In the official documentation, getFetchMore automatically receives two arguments: the last group of values returned, and all the groups returned so far.
Based on this, their demo takes the value of the previous page number sent by getFetchMore and performs a new call with an updated page number.
But how can I do the same kind of thing with a basic API that only returns JSON?
Here is the official demo code:
function Projects() {
  const fetchProjects = (key, cursor = 0) =>
    fetch('/api/projects?cursor=' + cursor).then((res) => res.json()) // parse the JSON body

  const {
    status,
    data,
    isFetching,
    isFetchingMore,
    fetchMore,
    canFetchMore,
  } = useInfiniteQuery('projects', fetchProjects, {
    getFetchMore: (lastGroup, allGroups) => lastGroup.nextCursor,
  })

  // ...rendering omitted
}
Infinite scrolling relies on pagination, so to use this hook you need to somehow track what page you are on and whether there are more pages. If you're working with a list of elements, you can check whether fewer elements were returned by your last query. For example, if you get 5 new items on each fetch and the last fetch returned only 4, you've probably reached the end of the list.
So in that case you'd check if lastGroup.length < 5, and if that returns true, return false (stop fetching more pages).
If there are more pages to fetch, you need to return the number of the next page from getFetchMore, so that the query uses it as a parameter. One way of working out which page you're on is to count how many arrays exist inside the data object, since useInfiniteQuery places each new page into a separate array inside data. So if the length of the data array is 1, you have fetched only page 1, in which case you'd want to return the number 2.
final result:

getFetchMore: (lastGroup, allGroups) => {
  const morePagesExist = lastGroup?.length === 5;
  if (!morePagesExist) return false;
  return allGroups.length + 1;
}
Now you just need to call fetchMore to fetch more pages.
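For example, fetchMore can be wired to a button (a minimal sketch; an intersection observer works the same way - call fetchMore when a sentinel element becomes visible):

<button
  onClick={() => fetchMore()}
  disabled={!canFetchMore || isFetchingMore}
>
  {isFetchingMore ? 'Loading more...' : canFetchMore ? 'Load more' : 'Nothing more to load'}
</button>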
The steps are:
Wait for useInfiniteQuery to request the first group of data by default.
Return the information for the next query from getNextPageParam.
Call the fetchNextPage function.
Reference: https://react-query.tanstack.com/guides/infinite-queries
Example 1 with a REST API:

const fetchProjects = ({ pageParam = 0 }) =>
  fetch('/api/projects?cursor=' + pageParam).then((res) => res.json()) // parse the JSON body

const {
  data,
  isLoading,
  fetchNextPage,
  hasNextPage,
} = useInfiniteQuery('projects', fetchProjects, {
  getNextPageParam: (lastPage) => {
    // lastPage's shape depends on your api response; below is pseudocode
    if (lastPage.hasNextPage) {
      return lastPage.nextCursor;
    }
    return undefined;
  },
})
Example 2 with a GraphQL query (pseudocode):

const {
  data,
  fetchNextPage,
  isLoading,
} = useInfiniteQuery(
  ['GetProjectsKeyQuery'],
  async ({ pageParam }) => {
    return graphqlClient.request(GetProjectsQuery, {
      isPublic: true, // some condition/variables if you have any
      first: NBR_OF_ELEMENTS_TO_FETCH, // 10 to start with
      cursor: pageParam,
    });
  },
  {
    getNextPageParam: (lastPage) => {
      // pseudocode, lastPage's shape depends on your api response
      if (lastPage.projects.pageInfo.hasNextPage) {
        return lastPage.projects.pageInfo.endCursor;
      }
      return undefined;
    },
  },
);
react-query will create a data object which contains an array called pages. Every time you call the api with a new cursor/page/offset, it will add a new page to pages. You can flatMap the data, e.g.:

const projects = data.pages.flatMap((p) => p.projects.nodes)
Call fetchNextPage somewhere in your code when you want to call the api again for the next batch, e.g.:
const handleEndReached = () => {
fetchNextPage();
};
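If you want the next batch to load automatically on scroll, one common approach is an IntersectionObserver watching a sentinel element (a sketch; adjust it to your component structure):

const sentinelRef = React.useRef(null);

React.useEffect(() => {
  // Fetch the next page whenever the sentinel scrolls into view
  const observer = new IntersectionObserver(([entry]) => {
    if (entry.isIntersecting && hasNextPage) fetchNextPage();
  });
  if (sentinelRef.current) observer.observe(sentinelRef.current);
  return () => observer.disconnect();
}, [hasNextPage, fetchNextPage]);

// ...and render <div ref={sentinelRef} /> below the list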
GraphQL example query - note the after: $cursor argument added to the query:

query GetProjectsQuery($isPublic: Boolean, $first: Int, $cursor: Cursor) {
  projects(
    condition: { isPublic: $isPublic }
    first: $first
    after: $cursor
  ) ...
I have a problem in my meteor/react/apollo (with boost) project. When I query data from the server, it adds __typename to every object and subobject in my query, but in my case this creates a major issue, as I normally reuse these data to send them to other mutations. Now the other mutation tells me there is an error because the __typename field is not defined in my graphql schema.
I tried to fix it by adding the addTypename: false field to my apollo client, but it didn't change anything (note I am using apollo boost; that may be why it is not working):
const client = new ApolloClient({
  uri: Meteor.absoluteUrl('graphql'),
  addTypename: false,
  request: operation =>
    operation.setContext(() => ({
      headers: {
        authorization: Accounts._storedLoginToken()
      }
    }))
})
Also, it seems that even if it worked it would not be ideal. It seems very problematic to me that a field is added to the query results, and I am surprised not to find any clear solution online. Some proposed solutions were:
filter manually on the client side
add middleware to apollo
add the __typename field to all my schemas...
but none of them seem to fit the 'simplicity' apollo is supposed to bring to queries. I hope there is a simpler, more logical solution, but so far I could not find any.
Even if using apollo-client and not apollo-boost, you shouldn't set addTypename to false unless you have a compelling reason to do so. The __typename field is used by the InMemoryCache to normalize your query results, so omitting it will likely lead to unexpected behavior around caching.
Unfortunately, there is no "silver bullet" to this problem. Requesting a query and then using that query's data as the variable to some other query could be construed as misusing the API. The Type returned by a query and the Input Type used as an argument are completely different things, even if as Javascript objects they share one or more fields. Just like you can't use types and input types interchangeably within a schema, there shouldn't be an expectation that they can be used interchangeably client-side.
That also means that if you're finding yourself in this situation, you may want to take a second look at your schema design. After all, if the data exists on the server already, it should be sufficient to pass in an id for it and retrieve it server-side, and not have to pass in the entire object.
If you're using some query to populate one or more inputs and then using the value of those inputs inside a mutation, then you're presumably already turning the initial query data into component state and then using that in your mutation. In that scenario, __typename or any other non-editable fields probably shouldn't be included as part of component state in the first place.
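For instance, when seeding editable form state from query data, one might copy over only the fields the user can actually change (a sketch; the field names are hypothetical):

const [form, setForm] = useState({
  name: data.user.name,
  bio: data.user.bio,
  // __typename, id and other non-editable fields deliberately left out of component state
});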
At the end of the day, these sorts of manipulations should hopefully be the exception, and not the rule. I would create some kind of helper function to "sanitize" your input and move on.
function stripTypenames (value) {
  if (Array.isArray(value)) {
    return value.map(stripTypenames)
  } else if (value !== null && typeof value === "object") {
    const newObject = {}
    for (const property in value) {
      if (property !== '__typename') {
        newObject[property] = stripTypenames(value[property])
      }
    }
    return newObject
  } else {
    return value
  }
}
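Usage might look like this (a sketch; updateBook and the input shape are assumptions):

// Strip __typename everywhere before reusing query data as mutation input
const input = stripTypenames(data.book);
updateBook({ variables: { input } });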
Alternatively, '__typename' can be removed with the following helper function:

const omitDeep = (obj, key) => {
  const keys = Object.keys(obj);
  const newObj = {};
  keys.forEach((i) => {
    if (i !== key) {
      const val = obj[i];
      if (val instanceof Date) newObj[i] = val;
      else if (Array.isArray(val)) newObj[i] = omitDeepArrayWalk(val, key);
      else if (typeof val === 'object' && val !== null) newObj[i] = omitDeep(val, key);
      else newObj[i] = val;
    }
  });
  return newObj;
};

const omitDeepArrayWalk = (arr, key) => {
  return arr.map((val) => {
    if (Array.isArray(val)) return omitDeepArrayWalk(val, key);
    else if (typeof val === 'object' && val !== null) return omitDeep(val, key);
    return val;
  });
};

const cleanedObject = omitDeep(myObject, '__typename');
You shouldn't remove the __typename. It is used by the cache and is also needed for union types. It's probably best to wait for an update from apollo.
Unfortunately I am also having this issue now and I couldn't find a proper solution. At first I simply deleted all the type names, but now I have issues with union types.
I have the following object, from which I want to remove one comment.
msgComments = {
  comments: [
    {
      comment: "2",
      id: "0b363677-a291-4e5c-8269-b7d760394939",
      postId: "e93863eb-aa62-452d-bf38-5514d72aff39"
    },
    {
      comment: "1",
      id: "e88f009e-713d-4748-b8e8-69d79698f072",
      postId: "e93863eb-aa62-452d-bf38-5514d72aff39"
    }
  ],
  email: "test#email.com",
  id: "e93863eb-aa62-452d-bf38-5514d72aff39",
  post: "test",
  title: "test"
}
The action creator hits the api delete function with the commentId:
// DELETE COMMENT FROM POST
export function deleteComment(commentId) {
  return function(dispatch) {
    axios.post(`${API_URL}/datacommentdelete`, {
      commentId
    }, {
      headers: { authorization: localStorage.getItem('token') }
    })
    .then(result => {
      dispatch({
        type: DELETE_COMMENT,
        payload: commentId
      });
    });
  };
}
My api deletes the comment and I send the comment id to my reducer; everything works fine up to this point - the api call succeeds and the comment is deleted. The problem is updating the state in the reducer. After much trial and error, at the moment I am trying this:
case DELETE_COMMENT:
  console.log('State In', state.msgComments);
  const msgCommentsOne = state.msgComments;
  const msgCommentsTwo = state.msgComments;
  const deleteIndexComment = state.msgComments.data.comments
    .findIndex(elem => elem.id === action.payload);
  const newComments = [
    ...msgCommentsTwo.data.comments.slice(0, deleteIndexComment),
    ...msgCommentsTwo.data.comments.slice(deleteIndexComment + 1)
  ];
  msgCommentsOne.data.comments = newComments;
  console.log('State Out', msgCommentsOne);
  return { ...state, msgComments: msgCommentsOne };
Both state in AND state out log the same object, which has the appropriate comment deleted - which I find puzzling.
Also, the component is not updating (when I refresh, the comment is gone, as a new api call is made that returns the updated post).
Everything else seems to work fine; the problem seems to be in the reducer.
I have read the other relevant posts on immutability and I am still unable to work out a solution. I have also researched and found the Immutable.js library, but before I learn how to use that I wanted to find a solution myself (perhaps the hard way, but I want to understand how this works!).
First working solution:

case DELETE_COMMENT:
  const deleteIndexComment = state.msgComments.data.comments
    .findIndex(elem => elem.id === action.payload);
  return {
    ...state,
    msgComments: {
      data: {
        email: state.msgComments.data.email,
        post: state.msgComments.data.post,
        title: state.msgComments.data.title,
        id: state.msgComments.data.id,
        comments: [
          ...state.msgComments.data.comments.slice(0, deleteIndexComment),
          ...state.msgComments.data.comments.slice(deleteIndexComment + 1)
        ]
      }
    }
  };
Edit:
Second working solution
I have found a second, far more terse solution; comments welcome:
case DELETE_COMMENT:
  const deleteIndexComment = state.msgComments.data.comments
    .findIndex(elem => elem.id === action.payload);
  return {
    ...state,
    msgComments: {
      data: {
        ...state.msgComments.data,
        comments: [
          ...state.msgComments.data.comments.slice(0, deleteIndexComment),
          ...state.msgComments.data.comments.slice(deleteIndexComment + 1)
        ]
      }
    }
  };
That code appears to be directly mutating the state object. You've created a new array that has the deleted item filtered out, but you're then directly assigning the new array to msgCommentsOne.data.comments. The data field is the same one that was already in the state, so you've directly modified it. To correctly update data immutably, you need to create a new comments array, a new data object containing the comments, a new msgComments object containing the data, and a new state object containing msgComments. All the way up the chain :)
The Redux FAQ does give a bit more information on this topic, at http://redux.js.org/docs/FAQ.html#react-not-rerendering.
I have a number of links to articles talking about managing plain Javascript data immutably, over at https://github.com/markerikson/react-redux-links/blob/master/immutable-data.md . Also, there's a variety of utility libraries that can help abstract the process of doing these nested updates immutably, which I have listed at https://github.com/markerikson/redux-ecosystem-links/blob/master/immutable-data.md.
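As an aside on the second working solution above: it replaces msgComments with an object that only has a data key, so if msgComments ever carries other fields they would be dropped. Spreading msgComments as well, and using filter instead of the two slices, gives a terser sketch of the same immutable update:

case DELETE_COMMENT:
  return {
    ...state,
    msgComments: {
      ...state.msgComments,
      data: {
        ...state.msgComments.data,
        comments: state.msgComments.data.comments.filter(
          comment => comment.id !== action.payload
        )
      }
    }
  };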