I want to update a record in a database. So far EditTodoList is updating the corresponding record in the db perfectly. My problem is with the optimistic update part: the changes only show after refetching the "TODO_LIST"+projectId query, even though the useMutation setup I have is supposed to reflect the changes instantly in the UI.
I didn't face this problem when creating the todoList, because in onMutate I just needed to do queryClient.setQueryData("TODO_LIST"+projectId,(old)=>[...old,newTodoList]), but in this case I need to modify the existing array.
return useMutation(
  EditTodoList,
  {
    onSuccess: (newProject) =>
      queryClient.setQueryData("TODO_LIST" + projectId, (old) => [...old, newProject]),
    onMutate: (values) => {
      const { title, projectId, todos, todoListToBeEdited } = values
      queryClient.cancelQueries("TODO_LIST" + projectId)
      const previousData = queryClient.getQueryData("TODO_LIST" + projectId)
      // update query data
      let temp = [...previousData]
      const targetTodoList = temp.filter((tds) => tds.id === todoListToBeEdited.id)[0]
      const targetTodoListIndex = temp.indexOf(targetTodoList)
      const newTodoList = TodoListModel(projectId, title, todoListToBeEdited.orderInProject)
      temp[targetTodoListIndex] = newTodoList
      // this is where I need help, the setQueryData seems to ignore temp even though temp has the latest up-to-date data
      queryClient.setQueryData("TODO_LIST" + projectId, (old) => [...temp])
      return queryClient.setQueryData("TODO_LIST" + projectId, previousData)
    },
    onError: (err, values, rollBack) => rollBack()
  }
)
the suggested way is:
optimistically update in onMutate with setQueryData. If you have a list, yes, it means iterating over that list and finding the right element to update.
return something to rollback
rollback on error
invalidate onSettled to refetch the query in any case to be in-sync with the server state. This is optional, if your optimistic update is "perfect" (like just toggling a boolean), there's sometimes no need to do that.
There's a whole section about optimistic updates in the docs, and there are also codesandbox examples
/edit: sorry, I missed the comment about the setQueryData. If you have computed the next data already, you don't need to use the functional updater. This should work:
queryClient.setQueryData("TODO_LIST"+projectId, temp)
I have implemented infinite scroll on a project that is using the React Query library.
So far so good. Everything works as expected using the useInfiniteScroll hook
One issue that I am encountering is the caching mechanism of this library.
If I query for a resource, e.g. GET /posts/, it triggers a GET /posts/?page=0; scrolling a bit down fetches the next results with GET /posts/?page=1, which is totally fine. If I add a search parameter, the page does a GET /posts/?filter=someValue&page=0. All fine... but when I remove the filter from the search, it automatically does GET /posts/?page=0 & GET /posts/?page=1
A solution is to remove the query from the cache using the remove() method from the query itself. But this means that I have to manually do it for each query.
I would like to get a better solution that will cover all cases. I have a queryWrapper where I want to handle this.
I tried using the queryClient instances invalidateQueries and resetQueries but none seems to be able to remove it from the cache...
In the examples below I keep a ref for the params; if they change, I can trigger the query to reset via the useLayoutEffect hook. This part works as expected.
invalidateQueries attempt
queryClient.invalidateQueries(
  [queryKey, JSON.stringify(paramsRef.current)],
  {
    exact: true,
    refetchActive: false,
    refetchInactive: false
  },
  { cancelRefetch: true }
);
resetQueries attempt
queryClient
  .resetQueries([queryKey, JSON.stringify(paramsRef.current)], {
    refetchPage: (page, index) => index === 0
  })
I even tried the queryClient.clear() method which should remove all existing queries from the cache but still the page number somehow remains cached... I access the queryClient using useQueryClient hook. If I inspect it, I can see the queries inside.
Can someone please help me sort out this cache issue?
Thanks!
but when I remove the filter from the search it will automatically do GET /posts/?page=0 & GET /posts/?page=1
This is the default react-query refetching behaviour: An infinite query only has one cache entry, with multiple pages. When a refetch happens, all currently visible pages will be refetched. This is documented here.
This is necessary for consistency, because if for example one entry was removed from page 1, and we wouldn't refetch page 2, the first entry on page 2 (old) would be equal to the last entry of page 1 (new).
Now the real question is, do you want a refetch or not when the filter changes? If not, then setting a staleTime would be the best solution. If you do want a refetch, refetching all pages is the safest option. Otherwise, you can try to remove all pages but the first one with queryClient.setQueryData when your query unmounts. react-query won't do that automatically, because why would we delete data that the user has already scrolled down to see.
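For that last option, a minimal sketch could look like this (the ["posts", filters] key is only illustrative; the point is that an infinite query's data has the shape { pages, pageParams }):

queryClient.setQueryData(["posts", filters], (data) =>
  data
    ? {
        // keep only the first page and its page param
        pages: data.pages.slice(0, 1),
        pageParams: data.pageParams.slice(0, 1)
      }
    : data
)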
Also note that for imperative refetches, like invalidateQueries, you have the option to pass a refetchPage function, where you can return a boolean for each page to indicate if it should be refetched or not. This is helpful if you only update one item and only want to refetch the page where this item resides. This is documented here.
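For example, to only refetch the page that contains one updated post, something along these lines should work (the key and updatedPostId are illustrative):

queryClient.invalidateQueries(["posts"], {
  // return true only for the page that holds the updated item
  refetchPage: (page, index, allPages) =>
    page.some((post) => post.id === updatedPostId)
})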
I have a table that has the data of a different fetch per row. Each row will also be able to trigger a fetch independently of the other rows. If each row is a component that has a button to refetch, the implementation is easy: I just call useQuery per row (a component).
The problem is filtering and sorting, because the fetched data is only on the rows so there is no global list containing all the information.
I tried to implement it with useQueries or with plain components, but I only come up with gnarly solutions. One of those would be to have row components that call useQuery and also set a value (useState) on a parent. This looks like I'm setting the same data at 2 levels, and if I get a big table that virtualizes rows, the useQuery inside a row component is not triggered because the component is not created.
The problem is hard to describe, so if there is some part that needs clarification please let me know.
===
This is not the real code, just code to try to represent what I have:
function Row({ cell }) {
  const [fetchedAt, setFetchedAt] = useState(null);
  // the key includes the row's url and a timestamp so each row can refetch independently
  const query = useQuery(["somekey", cell.url, fetchedAt], fetchFn(cell.url));
  const refetch = () => setFetchedAt(Date.now());
  return <div onClick={refetch}>{query.data?.value}</div>;
}

function Table({ array }) {
  return (
    <div>
      {array.map((el) => (
        <Row key={el.url} cell={el} />
      ))}
    </div>
  );
}
This is generally not the easiest to implement with react-query, because it doesn't have a normalized cache, but here is how I would approach the problem:
have one list query: useQuery('myList')
each row has its own query: useQuery(['myList', id])
I would use initialData and staleTime to pre-populate the detail query with data from the list query and avoid unnecessary fetches when each row mounts.
when you refetch a single row, you:
refetch the one detail query, as you said, easy
onSuccess update the query data of the list with the new data from that detail query (with queryClient.setQueryData)
of course, all of this would be easier if you just had one list query and operated everything on that, but then you can't do individual refetches of one row - you'd always refetch the whole list. Usually, that's not too bad either. With the above approach, you get a bit into "syncing" state between the detail and the list query - also not so nice.
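A minimal sketch of that approach could look like this (fetchList, fetchItem, the key names and the staleTime value are illustrative, not from the question):

const useItemList = () => useQuery(["myList"], fetchList)

const useItem = (id) => {
  const queryClient = useQueryClient()
  return useQuery(["myList", id], () => fetchItem(id), {
    // seed the detail query from the list query so mounting a row doesn't fetch
    initialData: () =>
      queryClient.getQueryData(["myList"])?.find((item) => item.id === id),
    staleTime: 60 * 1000,
    // after an individual refetch, sync the row back into the list query
    onSuccess: (item) =>
      queryClient.setQueryData(["myList"], (old) =>
        old?.map((el) => (el.id === item.id ? item : el))
      )
  })
}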
I want to do some side effects like setState and update context after the data is fetched. However, onSuccess will not be executed when the data comes from the cache. useEffect doesn't work either, because if the data is cached it doesn't change from undefined to the real data, so the effect doesn't get triggered. What's the best way of doing this? Thanks
My usecase is to extract some values from the data returned from useQuery and set a new state on those.
Usually, there shouldn't be a need to copy state from react-query into local state. This will just lead to duplication of the source of truth. It is best to keep them separated, so that you can also profit from background updates.
If you want to transform the data or subscribe to parts of the data, use the select option of useQuery:
const { data } = useQuery(key, fn, { select: data => data.map(...) })
Alternatively, you can compute some new data depending on the returned data with useMemo, e.g.:
const { data } = useQuery(...)
const articles = useMemo(() => data?.map(...), [data])
// work with articles from here on
You can also put that nicely in a custom hook.
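For example, a custom hook wrapping the select option could look like this (fetchArticles and the key are illustrative):

const useArticleTitles = () =>
  useQuery(["articles"], fetchArticles, {
    // subscribe only to the derived titles
    select: (data) => data.map((article) => article.title)
  })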
I've just started using Recoil on a new project and I'm not sure if there is a better way to accomplish this.
My app is an interface to basically edit a JSON file containing an array of objects. It reads the file in, groups the objects based on a specific property into tabs, and then a user can navigate the tabs, see the few hundred values per tab, make changes and then save the changes.
I'm using recoil because it allows me to access the state of each input from anywhere in my app, which makes saving much easier - in theory...
In order to generate state for each object in the JSON file, I've created a component that returns null; I map over the initial array and create the component, which creates Recoil state using an atomFamily and also saves the ID to another piece of Recoil state so I can keep a list of everything.
Question 1 Is there a better way to do this? The null component doesn't feel right, but storing the whole array in a single piece of state causes a re-render of everything on every keypress.
To save the data, I have a button which calls a function. That function just needs to get the IDs, loop through them, get the state of each one, and push them into an array. I've done this with a selector too, but the issue is that I can't call getRecoilValue from a function because of the Rules of Hooks - but if I make the value available to the parent component, it again slows everything right down.
Question 2 I'm pretty sure I'm missing the right way to think about storing state and using hooks, but I haven't found any samples for this particular use case - needing to generate the state up front, and then accessing it all again on Save. Any guidance?
Question 1
Get accustomed to null-rendering components; you almost can't avoid them with Recoil and, more generally, in this hooks-first React world 😉
About the useRecoilValue inside a function: you're right, you should leverage useRecoilCallback for that kind of task. With useRecoilCallback you have a central point where you can get and set whatever you want at once. Take a look at this working CodeSandbox where I tried to replicate (in the most minimal way) your use case. The SaveData component (a dedicated component is not necessary, you could just expose the Recoil callback without creating an ad-hoc component) is the following
const SaveData = () => {
  const saveData = useRecoilCallback(({ snapshot }) => async () => {
    const ids = await snapshot.getPromise(carIds);
    for (const carId of ids) {
      const car = await snapshot.getPromise(cars(carId));
      const carIndex = db.findIndex(({ id }) => id === carId);
      db[carIndex] = car;
    }
    console.log("Data saved, new `db` is");
    console.log(JSON.stringify(db, null, 2));
  });
  return <button onClick={saveData}>Save data</button>;
};
as you can see:
it retrieves all the ids through const ids = await snapshot.getPromise(carIds);
it uses the ids to retrieve all the cars from the atom family const car = await snapshot.getPromise(cars(carId));
All of that in a central point, without hooks and without subscribing the component to atom updates.
Question 2
There are a few approaches for your use case:
creating empty atoms when the app starts, updating them, and saving them in the end. It's what my CodeSandbox does
doing the same but initializing the atoms through RecoilRoot's initializeState prop
being updated by Recoil about every atom change. This is possible with useRecoilTransactionObserver, but please note that it's currently marked as unstable. A new way to do the same will be available soon (I guess), but at the moment it's the only solution
The latter is the "smarter" approach, but it really depends on your use case; it's up to you to decide if you really want to update the JSON at every atom update 😉
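For the second approach, a minimal sketch with RecoilRoot's initializeState could look like this (carIds and cars are the atom and atom family used in the SaveData snippet above; initialCars is an illustrative prop holding the parsed JSON array):

const App = ({ initialCars }) => (
  <RecoilRoot
    initializeState={({ set }) => {
      // pre-populate the list of ids and the atom family from the JSON file
      set(carIds, initialCars.map(({ id }) => id));
      initialCars.forEach((car) => set(cars(car.id), car));
    }}
  >
    <SaveData />
  </RecoilRoot>
);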
I hope it helps, let me know if I missed something 😊
I have a mutation (UploadTransaction) returning a list of objects of type Transaction.
#import "TransactionFields.gql"
mutation UploadTransaction($files: [Upload!]!) {
uploadFile(files: $files){
transactions {
...TransactionFields
}
}
}
The Transaction returned from the backend (graphene) has id and __typename fields, hence it should automatically update the Transaction in the cache. In the Chrome dev tools for Apollo, I can see the new transactions.
I also have a query GetTransactions fetching all Transaction objects.
#import "TransactionFields.gql"
query GetTransactions {
transactions {
...TransactionFields
}
}
However I don't see the newly added Transaction being returned by the query. During the initial load, Apollo Client loaded 292 transactions, which it shows under ROOT_QUERY, and it keeps returning the same 292 transactions. The UploadTransaction mutation adds a new object of type "Transaction" to the cache in dev tools without affecting ROOT_QUERY in dev tools or my query in code.
TransactionFields.gql is
fragment TransactionFields on Transaction {
  id
  timestamp
  description
  amount
  category {
    id
    name
  }
  currency
}
Any idea what I am doing wrong? I am new to Apollo Client and GraphQL.
From the docs:
If a mutation updates a single existing entity, Apollo Client can automatically update that entity's value in its cache when the mutation returns. To do so, the mutation must return the id of the modified entity, along with the values of the fields that were modified. Conveniently, mutations do this by default in Apollo Client...
If a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
If you have a query that returns a list of entities (for example, users) and then create or delete a user, Apollo has no way of knowing that the list should be updated to reflect your mutation. The reason for this is twofold
There's no way for Apollo to know what a mutation is actually doing. All it knows is what fields you are requesting and what arguments you are passing those fields. We might assume that a mutation that includes words like "insert" or "create" is inserting something on the backend but that's not a given.
There's no way to know that inserting, deleting or updating a user should update a particular query. Your query might be for all users with the name "Bob" -- if you create a user with the name "Susan", the query shouldn't be updated to reflect that addition. Similarly, if a mutation updates a user, the query might need to be updated to reflect the change. Whether it should or not ultimately boils down to business rules that only your server knows about.
So, in order to update the cache, you have two options:
Trigger a refetch of the relevant queries. You can do this by either passing a refetchQueries option to your useMutation hook, or by manually calling refetch on those queries. Since this requires one or more additional requests to your server, it's the slower and more expensive option but can be the right option when A) you don't want to inject a bunch of business logic into your client or B) the updates to the cache are complicated and extensive.
Provide an update function to your useMutation hook that tells Apollo how to update the cache based on the results of the mutation. This saves you from making any additional requests, but does mean you have to duplicate some business logic between your server and your client.
The example of using update from the docs:
update (cache, { data: { addTodo } }) {
  const { todos } = cache.readQuery({ query: GET_TODOS });
  cache.writeQuery({
    query: GET_TODOS,
    data: { todos: todos.concat([addTodo]) },
  });
}
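Applied to the uploadFile mutation from the question, a sketch could look like this (UPLOAD_TRANSACTION and GET_TRANSACTIONS are assumed to be the parsed documents of the UploadTransaction mutation and GetTransactions query above):

const [uploadTransaction] = useMutation(UPLOAD_TRANSACTION, {
  update(cache, { data: { uploadFile } }) {
    const existing = cache.readQuery({ query: GET_TRANSACTIONS });
    cache.writeQuery({
      query: GET_TRANSACTIONS,
      // append the newly uploaded transactions to the cached list
      data: {
        transactions: [...existing.transactions, ...uploadFile.transactions],
      },
    });
  },
  // or, for option 1: refetchQueries: [{ query: GET_TRANSACTIONS }]
});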
Read the docs for additional details.