RTK Query get state from another slice using getState() - reactjs

I just started on redux yesterday and after reading up on the different libraries, I decided to use the slice route from RTK.
For my async logic, instead of using createAsyncThunk, I decided to use RTK Query, and I have a question about the right way to access state from another slice.
slice1 contains some user data for example:
export const initialState: IUserState = {
  name: 'example',
  id: null,
};
In my slice2, I have a function that wants to do something like getSomethingByUserId(id). My current implementation:
interface IApiResponse {
  success: true;
  result: IGotSomethingData[];
}
const getsomethingSlice: any = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({
    baseUrl: 'https://someapibase',
  }),
  endpoints(builder) {
    return {
      fetchAccountAssetsById: builder.query<IApiResponse, null>({
        query() {
          console.log('store can be called here', store.getState().user.id);
          return `/apipath?id=${store.getState().user.id}`;
        },
      }),
    };
  },
});
export default getsomethingSlice;
export const { useFetchAccountAssetsByIdQuery } = getsomethingSlice;
I read somewhere that markerikson mentioned it's not good practice to import the store directly, and that you should use getState in a thunk instead. Looking around the documentation, I see that queries have getState available in onStart, unlike thunks, where you access it from the second parameter.
Does anyone have an onStart implementation for this? Or is importing the store acceptable here?

Generally, we want to prevent people from doing that, which is why you don't have getState available there (you do at many other places).
You see, RTK Query uses the argument you pass into the query to determine the cache key.
Since you don't pass in an argument, the cache key the result is stored under would be fetchAccountAssetsById(undefined).
So, you make a first request, state.user.id is 5 and that request is made.
Now, your state.user.id changes to 6. But your component calls useFetchAccountAssetsByIdQuery() and there is already a cache entry for fetchAccountAssetsById(undefined), so that is still being used - and no request is made.
If your component instead would be calling useFetchAccountAssetsByIdQuery(5) and it changes to useFetchAccountAssetsByIdQuery(6), RTK-query can safely identify that it has a cache entry for fetchAccountAssetsById(5), but not for fetchAccountAssetsById(6) and will make a new request, retrieving up-to-date information.
So, you should select that value in your component using useSelector and pass it into your query hook as an argument, not pull it out of the store in your query function.
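To illustrate why the argument matters, here is a small standalone sketch of the cache-key behavior described above (plain TypeScript; cacheKeyFor is a simplified stand-in for RTK Query's default serializer, not its actual implementation):

```typescript
// Simplified stand-in for RTK Query's default cache-key serializer:
// the key combines the endpoint name with the serialized argument.
function cacheKeyFor(endpointName: string, arg: unknown): string {
  return `${endpointName}(${JSON.stringify(arg)})`;
}

// With no argument, every call produces the same key, so a changed
// user id in the store never results in a new cache entry or request.
const keyWithoutArg1 = cacheKeyFor("fetchAccountAssetsById", undefined);
const keyWithoutArg2 = cacheKeyFor("fetchAccountAssetsById", undefined);

// Passing the id as the query argument yields distinct keys, so a
// changed id is recognized as a cache miss and triggers a request.
const keyForUser5 = cacheKeyFor("fetchAccountAssetsById", 5);
const keyForUser6 = cacheKeyFor("fetchAccountAssetsById", 6);
```

In the component, that means selecting the id first and passing it in: const id = useSelector((state) => state.user.id), then useFetchAccountAssetsByIdQuery(id).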

Related

How to merge between RTK query and redux toolkit

I have a Redux slice called pendingPost where I add fields such as car_mileage using my reducer functions and save all of this inside the pendingPost slice. Then I submit using the data inside the pendingPost reducer:
const pendingPostReducer = createSlice({
  name: 'pendingPost',
  initialState,
  reducers: {
    ...
    addPropertyToPendingPost: (state, action) => {
      state.savedData = { ...state.savedData, ...action.payload };
    },
I also have a postsApi where I use RTK Query to get all posts, user posts, etc.:
export const postsApi = createApi({
  baseQuery: fetchBaseQuery({
    baseUrl: API_URL,
  }),
  tagTypes: ['Post'],
  endpoints: (build) => ({
    getPosts: build.query({
      query: (body) => ({
        url: `post/filter`,
        method: 'POST',
        body: body,
      }),
      providesTags: (result) =>
        result
          ? [
              ...result.data.map(({ id }) => ({ type: 'Post', id })),
              { type: 'Post', id: 'LIST' },
            ]
          : [{ type: 'Post', id: 'LIST' }],
What I want to do is combine both of these: when I create a post, I want to run a mutation and invalidate the cache. How can I achieve this?
I tried to search for a way to save some fields inside RTK Query but didn't find one; I guess RTK Query is used only for caching and queries.
Your question has two parts.
I think you have a post scenario and you want to add another post to the list and update the posts.
For the first part:
I assume you store the posts inside of pendingPost (if this slice is for posts, then pendingPost is not a good name for it; you should name it postSlice and do everything post-related inside it) and show a list of posts on a page based on it.
In this case, you should go for createAsyncThunk instead of RTK Query, because, as you guessed, RTK Query's purpose is caching queries.
I don't know whether this will help you or not, but you can dispatch RTK Query endpoints outside of the API slice like so:
dispatch(ApiSlice.endpoints.getPosts.initiate())
For the second part:
I created an example for you here.
Basically, you need to create an API slice using RTK Query which handles getting posts, so you have to follow these steps:
1- Create the API slice.
2- Create a GET query endpoint for fetching the list.
3- Use tagTypes to tell RTK Query which tags exist.
4- Use providesTags on each endpoint you create to tell RTK Query which variants of those tags that endpoint provides.
5- When you want to send a POST, PUT, PATCH, or DELETE request to the server, you are making a change to the available list, so you need a mutation instead of a query.
6- In mutations, use invalidatesTags to tell RTK Query to look through the tags already provided and remove the cache entries whose tags match the ones you invalidate.
7- Invalidating tags makes RTK Query re-fetch the affected queries and update the cache.
And you do not need to store posts anywhere else. As you can see, I use the query hook in 2 different components and only make a request once.
RTK Query knows how many subscriptions you have to the same hook, and if cache data is available for that query, it returns it to the hook and does not create another request. In other words, the RTK Query hook plays the role of your pendingPost slice, so you don't have to store the data in two places.
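The provides/invalidates matching in the steps above can be modeled with a short standalone sketch (plain TypeScript; tagMatches and shouldRefetch are simplified stand-ins for RTK Query's internal tag matching, not its actual implementation):

```typescript
interface Tag {
  type: string;
  id?: string | number;
}

// A cached endpoint result together with the tags it provided.
interface CacheEntry {
  endpoint: string;
  providedTags: Tag[];
}

// Simplified matching rule: types must be equal, and the invalidated
// tag's id must match (an invalidated tag without an id matches every
// provided tag of that type).
function tagMatches(invalidated: Tag, provided: Tag): boolean {
  if (invalidated.type !== provided.type) return false;
  return invalidated.id === undefined || invalidated.id === provided.id;
}

function shouldRefetch(entry: CacheEntry, invalidatedTags: Tag[]): boolean {
  return invalidatedTags.some((inv) =>
    entry.providedTags.some((prov) => tagMatches(inv, prov))
  );
}

// getPosts provided one tag per post plus a LIST tag, as in the snippet above.
const getPostsEntry: CacheEntry = {
  endpoint: "getPosts",
  providedTags: [
    { type: "Post", id: 1 },
    { type: "Post", id: 2 },
    { type: "Post", id: "LIST" },
  ],
};

// A createPost mutation invalidating the LIST tag forces a refetch,
// while an unrelated tag leaves the cache entry alone.
const refetchAfterCreate = shouldRefetch(getPostsEntry, [{ type: "Post", id: "LIST" }]);
const refetchAfterOther = shouldRefetch(getPostsEntry, [{ type: "User", id: 1 }]);
```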

RTKQuery polling with args from store/state

I am trying to collect data every 60 seconds; the data I want to collect is keyed by timestamp. Since I only want to collect the data since the last request, I am using timestamps as URL search params:
query: (args: { id: string; lastFetch: string }) => {
  return {
    url: `/dashboard/${args.id}/data?start=${args.lastFetch}&end=${(new Date()).toJSON()}`,
  };
},
To begin with, the 'lastFetch' value is "1899-12-31T00:00:00.000Z", so the first request fetches all existing data.
Once data has been received the last fetch field (Which is stored in the Redux Store) is updated to the end time:
dispatch(updateLastFetch({lastFetch: (new Date()).toJSON(), id: args.id}));
The time updates in the store as expected. When the polling interval has elapsed the same arg (originally "1899-12-31T00:00:00.000Z") is used and not the newly dispatched value.
The useQuery hook receives the arg as a value from state, selected with useAppSelector, where lastFetch is being updated. This state value updates and prints to the console.
But changing the arg also reinitializes the request, causing it to fire again immediately.
Is there a way I can access the store directly from the RTKQuery builder to get the lastFetch value to put directly into the params without having to pass it into the UseQuery?
If not what other way is there to do this type of fetching?
Thank you.
I tried adding a skip value when the start and end dates were the same (which would happen after the first request, since there was a re-render with the new 'lastFetch' value), but this then stopped the polling.
Is there a way I can access the store directly from the RTKQuery builder to get the lastFetch value to put directly into the params without having to pass it into the UseQuery?
Yes, if you use a queryFn you will have access to getState through the arguments of the query function. Specifically, the second argument api.
const myApi = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/' }),
  endpoints: (build) => ({
    getData: build.query<Data, string>({
      // The args now can be just the string id instead of an object.
      queryFn: (args, api, extraOptions, baseQuery) => {
        // Select the last fetch timestamp from your state.
        const lastFetch = selectLastFetch(api.getState());
        // Call the base query like you did before.
        return baseQuery({
          url: `/dashboard/${args}/data?start=${lastFetch}&end=${(new Date()).toJSON()}`,
        });
      },
    }),
  }),
});
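As a side note, the URL construction inside that queryFn can be factored into a small pure helper, which keeps the start/end window logic easy to test in isolation (buildDataUrl is a hypothetical name, not part of RTK Query):

```typescript
// Hypothetical helper building the windowed dashboard URL used above.
// URLSearchParams percent-encodes the ':' characters in the timestamps.
function buildDataUrl(id: string, lastFetch: string, now: Date = new Date()): string {
  const params = new URLSearchParams({
    start: lastFetch,
    end: now.toJSON(),
  });
  return `/dashboard/${id}/data?${params.toString()}`;
}

const url = buildDataUrl(
  "abc",
  "1899-12-31T00:00:00.000Z",
  new Date("2023-01-01T00:00:00.000Z")
);
```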

How to clear & invalidate cache data using RTK Query?

I have been facing a problem for some time: I'm unable to clear the cache using RTK Query.
I tried various ways, but the cache data is not cleared.
I used invalidatesTags in my mutation query, and it called the API instantly. But in this case I want to refetch multiple APIs again, not from any RTK query or mutation; I want to make the API call after some user activity, like a click.
How can I solve this problem?
I made a separate function where I return api.util.invalidateTags(tag) or api.util.resetApiState().
This is my code snippet:
const api = createApi({.....})

export const resetRtkCache = (tag?: string[]) => {
  if (tag) {
    return api.util.invalidateTags(tag)
  } else {
    return api.util.resetApiState()
  }
}
And I called it using dispatch from other files:
const reloadData = () => {
  dispatch(resetRtkCache())
}
But the cache data is not removed. I think the dispatch function is not working; I don't see any API call being sent to the server in the browser's network tab.
But in this case I want to refetch multiple APIs again, not from any RTK query or mutation. I want to make the API call after some user activity, like a click. How can I solve this problem?
So if I understood correctly, what you want to achieve is to fetch some API that you have in RTK Query only after some kind of user interaction?
Can't you just define something like this?
const { data } = useGetYourQuery(undefined, { skip: skipUntilUserInteraction })
Where skipUntilUserInteraction is a component state variable that you initialize to true and set to false on the user interaction you need (e.g. a click of a button). Note that skip goes in the hook's second (options) argument, after the query argument.
So essentially, on the initial render that specific endpoint will be skipped, but it will be fetched once the interaction you want has happened.
Wow, you are actually asking many questions at once, but I think you should definitely read the documentation, because it covers all of them.
So, trying to answer your questions one by one:
I used invalidatesTag in my mutation query and it called the api instantly.
Invalidating with tags is one of the ways to clear the cache.
You should first set the tagTypes for your API, then use those tags in mutation queries to tell RTK Query which entities you want to clear.
I want to refetch multiple APIs again
You can customize the query inside of a mutation or query (like this example), and by calling one endpoint you can send multiple requests at once. And if you want to fetch the API again after the cache is removed, you don't need to do anything: RTK Query will do it for you.
I want to make the API call after some user activity like click
Every mutation gives you a function that you can pass to onClick, like below:
import { use[MyMutation]Mutation } from 'features/api';

function MyComponent() {
  const [myMutationFunc, { isLoading, ... }] = use[MyMutation]Mutation();
  return <button type='button' onClick={myMutationFunc}>Click to call the mutation</button>;
}
And remember: if you set providesTags for your endpoint (using tags you defined in tagTypes), then clicking the button and firing myMutationFunc will clear the cache entries that carry the tags your mutation invalidates.
And if you are looking for an optimistic update of the cache, you can find your answer here:
async onQueryStarted({ id, ...patch }, { dispatch, queryFulfilled }) {
  const patchResult = dispatch(
    api.util.updateQueryData('getPost', id, (draft) => {
      Object.assign(draft, patch)
    })
  )
  try {
    await queryFulfilled
  } catch {
    patchResult.undo()
  }
}

Can you override the cache key for a single endpoint in RTK Query?

In our API, we unfortunately have a ridiculous endpoint which requires a huge data object to be sent to it; something like:
const requestData: RidiculousEndpointRequestData = {
  property1: ...,
  property2: ...,
  // ----- 8< -----
  property100: ...
}
But in actual fact, the only properties that the API actually cares about are (say) property1 and property2. So, for this endpoint, it would make more sense if we could make our cache key:
endpointName(property1='abc',property2='def')
Ideally, it would be good to fix the API so that it actually only required us to pass these two parameters, but unfortunately this option is not available to us.
I know that RTK Query lets us override the cache key serializer for all endpoints using the serializeQueryArgs parameter; so we could do something like this:
const serializeQueryArgs = (args) => {
  const { endpointName, queryArgs } = args;
  if (endpointName === 'ridiculousEndpoint') {
    return `${endpointName}(property1=${queryArgs.property1},property2=${queryArgs.property2})`;
  }
  // I can't do this because defaultSerializeQueryArgs isn't exported :'(
  return defaultSerializeQueryArgs(args);
};
export const dumbApi = createApi({
  reducerPath: 'our-dumb-api',
  baseQuery: fetchBaseQuery({ baseUrl: 'some-value' }),
  serializeQueryArgs,
  endpoints: (build) => ({
    sensibleEndpoint: build.query<SensibleEndpointResponse, void>({
      // ...
    }),
    ridiculousEndpoint: build.query<RidiculousEndpointResponse, RidiculousEndpointRequestData>({
      // ...
    }),
  }),
});
However, this is less than ideal for a few reasons, not the least of which is that the default serializer (which works fine for all our other endpoints) is not exported from RTK Query.
Another option we thought of was to split the API into two APIs, one that has our custom serializer, and one that doesn't - but this seems like overkill and it feels like it goes against the design principles of RTK Query.
Is there a better way to customize the cache keys for a single endpoint?
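One possible way out: newer Redux Toolkit releases (1.9 and later) accept a serializeQueryArgs option on an individual endpoint definition and export defaultSerializeQueryArgs from '@reduxjs/toolkit/query', which would address both complaints; check your installed version before relying on that. The key-building logic itself is a pure function, sketched here standalone (ridiculousCacheKey is a hypothetical name):

```typescript
interface RidiculousArgs {
  property1: string;
  property2: string;
  // ...plus dozens of other properties the API ignores
  [extra: string]: string;
}

// Hypothetical serializer keying the cache on only the two properties
// the endpoint actually cares about. With Redux Toolkit 1.9+ this logic
// could be passed as `serializeQueryArgs` on the endpoint definition.
function ridiculousCacheKey(endpointName: string, queryArgs: RidiculousArgs): string {
  return `${endpointName}(property1=${queryArgs.property1},property2=${queryArgs.property2})`;
}

// Two argument objects differing only in irrelevant properties share a key.
const keyA = ridiculousCacheKey("ridiculousEndpoint", {
  property1: "abc",
  property2: "def",
  property3: "ignored",
});
const keyB = ridiculousCacheKey("ridiculousEndpoint", {
  property1: "abc",
  property2: "def",
  property3: "different-but-irrelevant",
});
```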

Correct way to remove item from client side cache in apollo-client

I am using GraphQL with Apollo-Client in my React(Typescript) application with an in memory cache. The cache is updated on new items being added which works fine with no errors.
When items are removed, the GraphQL Apollo-Server backend returns a string stating that the delete operation succeeded, which triggers the update function: it reads the cache and then modifies it by filtering out the id of the removed item. This is performed using the mutation hook from Apollo-Client.
const [deleteBook] = useMutation<{ deleteBook: string }, DeleteBookProps>(DELETE_BOOK_MUTATION, {
  variables: { id },
  onError(error) {
    console.log(error);
  },
  update(proxy) {
    const bookCache = proxy.readQuery<{ getBooks: IBook[] }>({ query: GET_BOOKS_QUERY });
    if (bookCache) {
      proxy.writeQuery<IGetBooks>({
        query: GET_BOOKS_QUERY,
        data: { getBooks: bookCache.getBooks.filter((b) => b._id !== id) },
      });
    }
  },
});
The function works and the frontend is updated with the correct items in the cache; however, the following error is displayed in the console:
Cache data may be lost when replacing the getBooks field of a Query object.
To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.getBooks field, so InMemoryCache can safely merge these objects:
existing: [{"__ref":"Book:5f21280332de1d304485ae80"},{"__ref":"Book:5f212a1332de1d304485ae81"},{"__ref":"Book:5f212a6732de1d304485ae82"},{"__ref":"Book:5f212a9232de1d304485ae83"},{"__ref":"Book:5f21364832de1d304485ae84"},{"__ref":"Book:5f214e1932de1d304485ae85"},{"__ref":"Book:5f21595a32de1d304485ae88"},{"__ref":"Book:5f2166601f6a633ae482bae4"}]
incoming: [{"__ref":"Book:5f212a1332de1d304485ae81"},{"__ref":"Book:5f212a6732de1d304485ae82"},{"__ref":"Book:5f212a9232de1d304485ae83"},{"__ref":"Book:5f21364832de1d304485ae84"},{"__ref":"Book:5f214e1932de1d304485ae85"},{"__ref":"Book:5f21595a32de1d304485ae88"},{"__ref":"Book:5f2166601f6a633ae482bae4"}]
For more information about these options, please refer to the documentation:
* Ensuring entity objects have IDs: https://go.apollo.dev/c/generating-unique-identifiers
* Defining custom merge functions: https://go.apollo.dev/c/merging-non-normalized-objects
Is there a better way to update the cache so this error won't be received?
I too faced the exact same warning, and unfortunately didn't come up with a solution other than the one suggested here: https://go.apollo.dev/c/merging-non-normalized-objects
const client = new ApolloClient({
  ....
  cache: new InMemoryCache({
    typePolicies: {
      Query: {
        fields: {
          getBooks: {
            merge(existing, incoming) {
              return incoming;
            },
          },
        },
      },
    },
  }),
});
(I am not sure whether I wrote your fields and types correctly though, so you might need to change this code a bit.)
Basically, the code above tells Apollo Client how to deal with mergeable data. In this case, I simply replace the old data with the new.
I wonder, though, if there's a better solution.
I've also faced the same problem. I've come across a GitHub thread that offers two alternative solutions here.
The first is evicting what's in your cache before calling cache.writeQuery:
cache.evict({
// Often cache.evict will take an options.id property, but that's not necessary
// when evicting from the ROOT_QUERY object, as we're doing here.
fieldName: "notifications",
// No need to trigger a broadcast here, since writeQuery will take care of that.
broadcast: false,
});
In short this flushes your cache so your new data will be the new source of truth. There is no concern about losing your old data.
An alternative suggestion for the apollo-client v3 is posted further below in the same thread:
cache.modify({
  fields: {
    notifications(list, { readField }) {
      return list.filter((n) => readField('id', n) !== id)
    },
  },
})
This way removes a lot of boilerplate: you don't need to use readQuery, evict, and writeQuery. The problem is that if you're using TypeScript you'll run into some implementation issues. Under the hood, the format used is the InMemoryCache format instead of the usual GraphQL data, so you'll be seeing Reference objects, types that aren't inferred, and other oddities.
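To make the readField behavior concrete, here is a standalone model of what that filter does with Reference objects (plain TypeScript; readField here is a simplified stand-in for Apollo's, which resolves a field through a normalized cache entry):

```typescript
// A normalized cache entry reference, the shape Apollo stores in lists
// (e.g. { __ref: "Book:5f21280332de1d304485ae80" }).
interface Ref {
  __ref: string;
}

// Simplified normalized cache: reference string -> stored fields.
const normalizedCache: Record<string, { id: string; title: string }> = {
  "Book:1": { id: "1", title: "First" },
  "Book:2": { id: "2", title: "Second" },
};

// Stand-in for Apollo's readField: resolve a field through the reference.
function readField(field: "id" | "title", ref: Ref): string | undefined {
  const entity = normalizedCache[ref.__ref];
  return entity ? entity[field] : undefined;
}

// The modify callback's logic: drop the reference whose id matches.
function removeById(list: Ref[], id: string): Ref[] {
  return list.filter((ref) => readField("id", ref) !== id);
}

const remaining = removeById([{ __ref: "Book:1" }, { __ref: "Book:2" }], "1");
```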
