I have been facing a problem for some time: I'm unable to clear the cache using RTK Query.
I tried various approaches, but the cached data is not cleared.
I used invalidatesTags in my mutation and it called the API immediately. But in my case I want to refetch multiple APIs again, and not from an RTK query or mutation; I want to make the API calls after some user activity, like a click.
How can I solve this problem?
I made a separate function that returns api.util.invalidateTags(tag) or api.util.resetApiState().
This is my code snippet:
const api = createApi({.....})

export const resetRtkCache = (tag?: string[]) => {
  if (tag) {
    return api.util.invalidateTags(tag)
  } else {
    return api.util.resetApiState()
  }
}
And I called it using dispatch from other files:

const reloadData = () => {
  dispatch(resetRtkCache())
}

But the cached data is not removed here. I think the dispatch call is not working; I don't see any API call being sent to the server in the browser's network tab.
But in this case I want to refetch multiple APIs again, but not from any RTK query or mutation. I want to make the API call after some user activity, like a click. How can I solve this problem?
So if I understood correctly, what you want to achieve is to fetch some API that you have in RTK Query only after some kind of user interaction?
Can't you just define something like this?
const { data } = useGetYourQuery(undefined, { skip: skipUntilUserInteraction })
(Note that skip goes in the second, options argument of the generated hook.) Here skipUntilUserInteraction is a component state variable that you initialize to true and set to false based on the user interaction you need (e.g. a click of a button).
So essentially, on the initial render that specific endpoint will be skipped, but it will be fetched after the interaction you want happens.
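A minimal sketch of that idea, assuming a hypothetical useGetYourQuery hook generated by your API slice (the hook name, import path and undefined query argument are placeholders):

import { useState } from 'react';
// placeholder import for a hook generated by your createApi slice
import { useGetYourQuery } from 'features/api';

const MyComponent = () => {
  // skip the query until the user clicks the button
  const [skipUntilUserInteraction, setSkipUntilUserInteraction] = useState(true);

  // skip belongs in the second (options) argument of the generated hook
  const { data, isFetching } = useGetYourQuery(undefined, {
    skip: skipUntilUserInteraction,
  });

  return (
    <div>
      <button type="button" onClick={() => setSkipUntilUserInteraction(false)}>
        Load data
      </button>
      {isFetching ? <span>Loading…</span> : <pre>{JSON.stringify(data)}</pre>}
    </div>
  );
};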
Wow, you are actually asking quite a few questions at once, but I think you should definitely read the documentation, because it covers all the questions you have.
Trying to answer your questions one by one:
I used invalidatesTags in my mutation query and it called the API instantly.
Invalidating with tags is one of the ways to clear the cache.
You should first set the tagTypes for your API, then use those tags in your mutations to tell RTK Query which entities you want to clear.
I want to refetch multiple APIs again
You can customize the query inside of a mutation or query (see the sketch below), and by calling one endpoint you can send multiple requests at once. And if you want to fetch the API again after the cache is removed, you do not need to do anything, because RTK Query will refetch it for you.
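For illustration, a rough sketch of one way to do that with RTK Query's queryFn option (the endpoint name, URLs and response shape are made-up placeholders):

import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

const dashboardApi = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (builder) => ({
    // one endpoint that issues several requests and combines the results
    getDashboard: builder.query({
      queryFn: async (_arg, _queryApi, _extraOptions, baseQuery) => {
        const [users, orders] = await Promise.all([
          baseQuery('/users'),
          baseQuery('/orders'),
        ]);
        if (users.error) return { error: users.error };
        if (orders.error) return { error: orders.error };
        return { data: { users: users.data, orders: orders.data } };
      },
    }),
  }),
});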
I want to make the API call after some user activity like click
Every mutation gives you a trigger function that you can pass to onClick, like below:
import { use[Mymutation]Mutation } from 'features/api';

const MyComponent = () => {
  const [myMutationFunc, { isLoading }] = use[Mymutation]Mutation();
  return (
    <button type="button" onClick={() => myMutationFunc()}>
      Click to call the mutation
    </button>
  );
};
And remember: if you set providesTags (using tags you defined in tagTypes) on the query endpoints whose data should be refreshed, then clicking the button and firing myMutationFunc will clear the cache entries with those tags, assuming the mutation lists the same tags in invalidatesTags. A minimal sketch of how these pieces fit together is below.
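This sketch assumes a made-up 'Post' tag and endpoints; it is only meant to show how tagTypes, providesTags and invalidatesTags relate:

import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  tagTypes: ['Post'],                 // 1. declare the tag
  endpoints: (builder) => ({
    getPosts: builder.query({
      query: () => 'posts',
      providesTags: ['Post'],         // 2. the cached query data carries the tag
    }),
    addPost: builder.mutation({
      query: (body) => ({ url: 'posts', method: 'POST', body }),
      invalidatesTags: ['Post'],      // 3. the mutation invalidates it, so getPosts refetches
    }),
  }),
});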
And if you are looking for an optimistic update of the cache, you can find your answer here:
async onQueryStarted({ id, ...patch }, { dispatch, queryFulfilled }) {
const patchResult = dispatch(
api.util.updateQueryData('getPost', id, (draft) => {
Object.assign(draft, patch)
})
)
try {
await queryFulfilled
} catch {
patchResult.undo()
}
}
Related
I have been stumped on a workaround for this problem for a while now and was hoping someone could help.
I am currently working on a React UI that sends info to a Firebase backend for a budgeting app.
When the page first loads, I pull in the data using this:
const [incomeSources, setIncomeSources] = React.useState([]);
/////////////////////////////////
// PULL IN DATA FROM FIREBASE //
///////////////////////////////
async function getData() {
const doc = await getDoc(userCollectionRef);
const incomesData = doc.data().incomeSources;
// const expensesData = doc.data().expenses;
// const savingsData = doc.data().savingsAllocation;
// SET STATES //
if (incomesData.length > 0) {
setIncomeSources(incomesData);
}
}
Then, when I want to add a new object to the state array, I use an input and a button. The issue I currently have is that I have it set up like this:
async function updateFirebaseDocs(userID, stateName, state) {
const userRef = doc(db, "users", userID);
try {
await setDoc(userRef, { [stateName]: state }, { merge: true });
} catch (err) {
console.error("error adding document", err);
}
}
React.useEffect(() => {
updateFirebaseDocs(userID, 'incomeSources', incomeSources)
},[incomeSources])
This works as long as I don't refresh the page, because upon page refresh incomeSources defaults back to an empty array on render, causing the Firebase doc to be overwritten with an empty array, which deletes the Firestore data.
I can't for the life of me figure out the workaround even though I know it's probably right in front of me. Can someone point me in the right direction, please?
Brief summary: I am able to pull in data from the backend and display it, but I need a way to keep the backend database up to date with changes made in the frontend. And upon refreshing the page, I need the data to persist so that the backend doesn't get reset.
Please advise if more information is needed. First time posting.
I have tried the above method with useEffect's dependency array, and I have also tried using localStorage to work around this, but I can't think of a way of implementing it. I feel I am tiptoeing around the solution.
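For reference, one possible direction, purely a sketch based on the code above: only allow the write effect to run after the initial Firestore read has finished, so the initial empty array never overwrites the document. The hasLoaded flag is a made-up addition, not part of the original code.

const [incomeSources, setIncomeSources] = React.useState([]);
const [hasLoaded, setHasLoaded] = React.useState(false); // hypothetical guard flag

async function getData() {
  const doc = await getDoc(userCollectionRef);
  const incomesData = doc.data().incomeSources;
  if (incomesData.length > 0) {
    setIncomeSources(incomesData);
  }
  setHasLoaded(true); // writes are allowed only after the first read completes
}

React.useEffect(() => {
  if (!hasLoaded) return; // skip the write that would wipe Firestore on refresh
  updateFirebaseDocs(userID, 'incomeSources', incomeSources);
}, [incomeSources, hasLoaded]);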
I have a Chrome extension that stores data in Firestore and populates that data to the frontend. I always have to refresh the page to see newly added data, which isn't a user-friendly experience. How can I update the UI to show the newly added data without having to refresh the page?
So far, I've tried using useEffect to get the data. Inside of it, I'm using a function that gets data from Firestore cached inside of Chrome local storage.
Here is my code:
const getFolderData = () => {
getDataFromChrome("docId").then((res: any) => {
setDocId(res.docId);
});
getDataFromChrome("content").then((res: any) => {
//console.log("getting in mainfolder",res);
// for (const item of res.content) {
// if (item.type.toLowerCase() === "subfolder") {
// // console.log(item)
// getSubFolder(item.id);
// }
// }
for (const item of res.content) {
setTiersContent((pre: any) => [...pre, item]);
}
});
};
useEffect(() => {
getFolderData();
}, []);
I also get this error. I'm also using the Chrome extension API to communicate with a background script; it could be related to the problem:
Uncaught (in promise) Error: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
I've never used Firebase, so I'm not sure what your functions do; I can only guess. A few things look wrong from what I can see:
Your useEffect is set to run only on mount since the dependency array is empty; I assume you want to refetch on some condition.
If either of the two functions is supposed to be a subscription, your useEffect needs to return a cleanup function (see the sketch after this list).
Refetching data when needed is not a new problem; packages like React Query have tools that optimize your requests and refetch when needed. I suggest you give it a shot if your app has more than 2-3 fetch requests.
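As an illustration of the subscription/cleanup point, a sketch assuming the modular Firestore SDK's onSnapshot; the collection name, db and docId are placeholders, not the asker's actual code:

import { useEffect, useState } from "react";
import { doc, onSnapshot } from "firebase/firestore";

// db and docId are placeholders for your Firestore instance and document id
function useFolderContent(db, docId) {
  const [content, setContent] = useState([]);

  useEffect(() => {
    // onSnapshot pushes every update, so newly added data shows up without a refresh
    const unsubscribe = onSnapshot(doc(db, "folders", docId), (snapshot) => {
      setContent(snapshot.data()?.content ?? []);
    });
    // returning the unsubscribe function lets React clean up the listener
    return unsubscribe;
  }, [db, docId]);

  return content;
}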
I am using the Apollo Client core pagination API approach to accumulate paginated requests in a merge function and then repaginate them with a read function: https://www.apollographql.com/docs/react/pagination/core-api
This all works, but now there is a request for each page, even the ones that are already in the cache.
Which defeats the whole purpose when I'm repaginating!
I'm using the default fetchPolicy, cache-first.
If all requested data is present in the cache, that data is returned. Otherwise, Apollo Client executes the query against your GraphQL server and returns that data after caching it.
I wonder how ApolloClient checks that all requested data is in the cache with the pagination implementation.
Because right now (and the docs seem to rely on this) it always does the request, even when the keyArgs match and the data is in the cache.
Does someone know what causes this and how I can customize this cache-first strategy to check if all the items of the requested page are already in the cache?
Here is my code, in case that helps for context or if I'm just doing something wrong:
typePolicies: {
Query: {
fields: {
paginatedProductTracking: {
// Include everything except 'skip' and 'take' to be able to use `fetchMore`
// and repaginate when reading cache
// (essential for switching between desktop pagination and mobile lazy loading
// without having to refetch)
keyArgs: (args) => JSON.stringify(omit(args, ['query.skip', 'query.take'])),
merge: (existing, incoming, { args }) => {
if (!existing) {
return incoming;
}
if (!incoming) {
return existing;
}
const data = existing.paginatedData;
const newData = incoming.paginatedData;
return {
...existing,
// conservative merge that is robust against pages being requested out of order
paginatedData: [
...data.slice(0, args?.query.skip || 0),
...newData,
...data.slice((args?.query.skip || 0) + newData.length),
],
};
},
},
},
},
},
const [pageSize, setPageSize] = useState(100);
const [page, setPage] = useState(0);
const skip = page * pageSize;
const query = {
filter,
aggregationInterval,
order,
skip,
take: pageSize,
search: search ? values : null,
locations: currentLocations.length > 0 ? currentLocations.map((location) => location.id) : undefined,
};
const { data, loading, fetchMore } = useProductTrackingAggregatedDataQuery({
variables: {
query,
},
});
onPageChange={async (newPage) => {
await fetchMore({
variables: {
query: {
...query,
skip: newPage * pageSize,
},
},
});
setPage(newPage);
}}
I was recently faced with the exact same issue. I had everything implemented the way the official documentation illustrates, until I stumbled upon this issue, which is still open, so I'm guessing this is still how the fetchMore function behaves to date. There, #benjamn says that:
The fetchMore method sends a separate request that always has a fetch policy of no-cache, which is why it doesn't try to read from the cache first.
This being the case, fetchMore is only useful if you are implementing an endless scroll sort of pagination where you know beforehand that the new data is not in the cache.
In the pagination documentation it also states that:
If you are not using React and useQuery, the ObservableQuery object returned by client.watchQuery has a method called setVariables that you can call to update the original variables.
If you change the variables of your query, it will trigger your read function implementation. If the read function finds the requested data within existing, it can return it; otherwise it returns undefined, which in turn triggers a network request to your GraphQL server to fetch the missing data. That triggers your merge function to merge the data in the desired way, which again triggers the read function, which can now slice the data you requested according to { args } out of existing and return it. That finally causes your watched ObservableQuery to fire and your UI to be updated.
Now, this approach is counterintuitive and goes against the "recommended" way of implementing pagination, but contrary to the recommended way, this approach actually works.
I was unable to find anything that would prove my conclusions about fetchMore wrong, so if any Apollo Client guru happens to stumble upon this, please do shed some light on it. Until then, the only solution I can offer is working with setVariables instead of fetchMore.
Keep in mind that you will need to implement a read function along with your merge. It will be responsible for slicing your cached data and triggering a network request by returning undefined if it was unable to find a full slice. A sketch of such a read function is below.
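Roughly, such a read function could sit next to keyArgs and merge in the paginatedProductTracking field policy from the question. This is only a sketch built on the question's query.skip/query.take args and paginatedData shape, not a drop-in implementation:

read: (existing, { args }) => {
  if (!existing) {
    return undefined; // nothing cached yet: let Apollo hit the network
  }
  const skip = args?.query?.skip ?? 0;
  const take = args?.query?.take ?? existing.paginatedData.length;
  const page = existing.paginatedData.slice(skip, skip + take);
  // Serve from the cache only if the whole requested page is present;
  // returning undefined triggers a network request and then the merge function.
  if (page.length < take || page.some((item) => item === undefined)) {
    return undefined;
  }
  return { ...existing, paginatedData: page };
},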
I have a pretty weird case to handle.
We have a few boxes, and we can call some action on every box. When we click the button inside a box, we call some endpoint on the server (using axios). The response from the server returns new, updated information (about all the boxes, not only the one on which we called the action).
Issue:
If the user clicks the submit button on many boxes really fast, the requests call the endpoints one by one. This sometimes causes errors, because things are calculated on the server in the wrong order (the status of a group of boxes depends on a single box's status). I know it's maybe more of a backend issue, but I have to try to fix this on the frontend.
Proposed fix:
In my opinion, the easiest fix in this case is to disable every submit button while any request is in progress. Unfortunately this solution feels very slow, and the head of the project rejected it.
What we want to achieve:
In some way, we want to queue the requests without disabling every button. The perfect solution for me at this moment:
Click the first button: call the endpoint, request pending on the server.
Click the second button: the button shows a spinner/loading state without calling the endpoint.
The server returns the response for the first click; only then do we really send the second request.
I think something like this is a huge antipattern, but I don't set the rules. ;)
I was reading about e.g. redux-observable, but if I don't have to, I don't want to add other middleware for Redux (we currently use redux-thunk). redux-saga would be OK, but unfortunately I don't know this tool. I prepared a simple codesandbox example (I added timeouts in the Redux actions for easier testing).
I have only one (admittedly hacky) proposed solution: creating an array with the data needed to send the correct request, and inside useEffect checking if the array length is equal to 1. Something like this:
const App = ({ boxActions, inProgress, ended }) => {
const [queue, setQueue] = useState([]);
  const handleSubmit = async () => {
    // this code does not work correctly; it only shows what I was thinking about
    if (queue.length === 1) {
      const [data] = queue;
      await boxActions.submit(data.id, data.timeout);
      setQueue(queue.filter((item) => item.id !== data.id));
    }
  };
useEffect(() => {
handleSubmit();
}, [queue])
return (
<>
<div>
{config.map((item) => (
<Box
key={item.id}
id={item.id}
timeout={item.timeout}
handleSubmit={(id, timeout) => setQueue([...queue, {id, timeout}])}
inProgress={inProgress.includes(item.id)}
ended={ended.includes(item.id)}
/>
))}
</div>
</>
);
};
Any ideas?
I agree with your assessment that we ultimately need to make changes on the backend. Any user can mess with the frontend and submit requests in any order they want, regardless of how you organize it.
I get it though, you're looking to design the happy path on the frontend such that it works with the backend as it is currently.
It's hard to tell without knowing the use-case exactly, but there may generally be some improvements we can make from a UX perspective that will apply whether we make fixes on the backend or not.
Is there an endpoint to send multiple updates to? If so, we could debounce our network call to submit only when there is a delay in user activity.
Does the user need to be aware of order of selection and the impacts thereof? If so, it sounds like we'll need to update frontend to convey this information, which may then expose a natural solution to the situation.
It's fairly simple to create a request queue and execute requests serially, but it seems potentially fraught with new challenges.
E.g. If a user clicks 5 checkboxes, and order matters, a failed execution of the second update would mean we would need to stop any further execution of boxes 3 through 5 until update 2 could be completed. We'll also need to figure out how we'll handle timeouts, retries, and backoff. There is some complexity as to how we want to convey all this to the end user.
Let's say we're completely set on going that route, however. In that case, your use of Redux for state management isn't terribly important, nor is the library you use for sending your requests.
As you suggested, we'll just create an in-memory queue of updates to be made and dequeue serially. Each time a user makes an update to a box, we'll push to that queue and attempt to send updates. Our processEvents function will retain state as to whether a request is in motion or not, which it will use to decide whether to take action or not.
Each time a user clicks a box, the event is added to the queue, and we attempt processing. If processing is already ongoing or we have no events to process, we don't take any action. Each time a processing round finishes, we check for further events to process. You'll likely want to hook into this cycle with Redux and fire new actions to indicate event success and update the state and UI for each event processed and so on. It's possible one of the libraries you use offer some feature like this as well.
// Get a better Queue implementation if queue size may get high.
class Queue {
_store = [];
enqueue = (task) => this._store.push(task);
dequeue = () => this._store.shift();
length = () => this._store.length;
}
export const createSerialProcessor = (asyncProcessingCallback) => {
const updateQueue = new Queue();
const addEvent = (params, callback) => {
updateQueue.enqueue([params, callback]);
};
const processEvents = (() => {
let isReady = true;
return async () => {
if (isReady && updateQueue.length() > 0) {
const [params, callback] = updateQueue.dequeue();
isReady = false;
        await asyncProcessingCallback(params, callback); // retries and all that included
isReady = true;
processEvents();
}
};
})();
return {
process: (params, callback) => {
addEvent(params, callback);
processEvents();
}
};
};
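For illustration, a possible way to wire this up; the import path, endpoint URL and callbacks are made-up placeholders:

import axios from "axios";
// createSerialProcessor is the factory defined above
import { createSerialProcessor } from "./createSerialProcessor";

// Each queued update is sent only after the previous request has settled.
const boxProcessor = createSerialProcessor(async (params, callback) => {
  const response = await axios.post("/api/boxes/submit", params); // placeholder endpoint
  callback?.(response.data);
});

// In a click handler: enqueue the update and let the queue drain serially.
boxProcessor.process({ id: 1 }, (data) => console.log("box 1 updated", data));
boxProcessor.process({ id: 2 }, (data) => console.log("box 2 updated", data));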
Hope this helps.
Edit: I just noticed you included a codesandbox, which is very helpful. I've created a copy of your sandbox with updates made to achieve your end and integrate it with your Redux setup. There are some obvious shortcuts still being taken, like the Queue class, but it should be about what you're looking for: https://codesandbox.io/s/dank-feather-hqtf7?file=/src/lib/createSerialProcessor.js
In case you would like to use redux-saga, you can use the actionChannel effect in combination with the blocking call effect to achieve your goal:
Working fork:
https://codesandbox.io/s/hoh8n
Here is the code for boxSagas.js:
import {actionChannel, call, delay, put, take} from 'redux-saga/effects';
// import axios from 'axios';
import {submitSuccess, submitFailure} from '../actions/boxActions';
import {SUBMIT_REQUEST} from '../types/boxTypes';
function* requestSaga(action) {
try {
// const result = yield axios.get(`https://jsonplaceholder.typicode.com/todos`);
yield delay(action.payload.timeout);
yield put(submitSuccess(action.payload.id));
} catch (error) {
yield put(submitFailure());
}
}
export default function* boxSaga() {
const requestChannel = yield actionChannel(SUBMIT_REQUEST); // buffers incoming requests
while (true) {
const action = yield take(requestChannel); // takes a request from queue or waits for one to be added
yield call(requestSaga, action); // starts request saga and _waits_ until it is done
}
}
I am using the fact that the box reducer handles the SUBMIT_REQUEST actions immediately (and sets the given id as pending), while actionChannel + call handle them sequentially, so the actions trigger only one HTTP request at a time.
More on action channels here:
https://redux-saga.js.org/docs/advanced/Channels/#using-the-actionchannel-effect
Just store the promise from a previous request and wait for it to resolve before initiating the next request. The example below uses a global variable for simplicity, but you can use something else to preserve state across requests (e.g. the extraArgument from the thunk middleware).
// boxActions.ts
import axios from 'axios';

let submitCall = Promise.resolve();
export const submit = (id, timeout) => async (dispatch) => {
dispatch(submitRequest(id));
submitCall = submitCall.then(() => axios.get(`https://jsonplaceholder.typicode.com/todos`))
try {
await submitCall;
setTimeout(() => {
return dispatch(submitSuccess(id));
}, timeout);
} catch (error) {
return dispatch(submitFailure());
}
};
Is there a way to modify query response data before it is saved in the internal cache?
I'm using Apollo hooks, but this question is relevant to any front-end approach using Apollo Client (HOC & Components as well).
const { data, updateQuery } = useQuery(QUERY, {
onBeforeDataGoesToCache: originalResponseData => {
// modify data before it is cached? Can I have something like this?
return modifiedData;
}
});
Obviously onBeforeDataGoesToCache does not exist, but that's exactly the behavior I'm looking for. There's an updateQuery function in the result, which basically does what is needed, but at the wrong time. I'm looking for something that works as a hook or middleware inside the query/mutation execution.
It sounds like you want afterware, which, much like middleware (which allows operations before the request is made), allows you to manipulate data in the response, e.g.:
// ApolloLink is exported from '@apollo/client' (or 'apollo-link' in older setups)
import { ApolloLink } from '@apollo/client';

const modifyDataLink = new ApolloLink((operation, forward) => {
return forward(operation).map(response => {
// Modify response.data...
return response;
});
});
// use with apollo-client
const link = modifyDataLink.concat(httpLink);
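To round it out, a minimal sketch of plugging that link chain into the client, assuming Apollo Client 3:

import { ApolloClient, InMemoryCache } from '@apollo/client';

// `link` is the modifyDataLink.concat(httpLink) chain created above;
// the afterware runs on every response before the data reaches the cache
const client = new ApolloClient({
  link,
  cache: new InMemoryCache(),
});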