cache.writeFragment method in the Apollo GraphQL library doesn't work (React)

I'm trying to add a property to an object already in the cache.
I have added a local resolver, and in it I'm doing this:
cache.writeFragment({
  id: gid.toString(),
  fragment: gql`
    fragment queues on Group {
      queuesList
    }
  `,
  data: {
    queuesList: ["test"],
    __typename: "Group"
  }
});
This writes a new object to the cache instead of adding the property to the object with the given id.
I don't understand where the fragment fails.

Okay, I was doing it wrong.
I was giving the fragment the wrong id; I had to pass the key Apollo uses for the object in the cache.
That is: id: `Group:${gid.toString()}`
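For example, the write from the question works once the id includes the typename prefix (same code as above, only the id changes):
cache.writeFragment({
  id: `Group:${gid.toString()}`, // "<__typename>:<id>", the cache's default key format
  fragment: gql`
    fragment queues on Group {
      queuesList
    }
  `,
  data: {
    queuesList: ["test"],
    __typename: "Group"
  }
});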
https://www.apollographql.com/docs/react/caching/cache-configuration/#generating-unique-identifiers
Hope this helps anyone in same situation.

Apollo Client readFragment with custom id (keyFields)

For reference, I'm using "@apollo/client": "^3.5.5".
I've defined my typePolicies as suggested in the docs:
HistoricalData: {
  keyFields: ["variable", "workspace"],
  fields: { ... },
}
and when my cache is built, I am expecting my cache ID to be of the form
<__typename>:<id>:<id>
i.e. `HistoricalData:${props.variable}:${props.workspace}`
but instead, when I look in the Apollo cache, the ID has been created using the keyField names and values as an object, such as
HistoricalData:{"variable":"GAS.TOTAL","workspace":"ABC"}
instead of
HistoricalData:GAS.TOTAL:ABC
so when I try to readFragment it returns null:
client.readFragment({
  id: `HistoricalData:${props.variable}:${props.workspace}`,
  fragment: apolloGQL`
    fragment MyHistorical on Historical {
      variable
      workspace
    }
  `,
});
It does actually return a value from the cache if I build the id in the structure that already exists in the cache and call readFragment with that.
Has anyone else noticed that Apollo Client is not creating cache IDs in the structure described in the docs?
After some research I came upon the correct way to handle this case. I know that you have already moved on, but in case anyone else has the same problem in the future, here goes:
As described in the documentation for customizing the cache ID, the cache ID will be a stringified object, as you pointed out. It's not quite explicit in the documentation, but at this point in time it provides this nested example of a cache ID:
Book:{"title":"Fahrenheit 451","author":{"name":"Ray Bradbury"}}
But as users we don't have to concern ourselves with the format of this ID, because there's a helper for that, called cache.identify.
For your specific case, you could use something like this:
const identifiedId = cache.identify({
  __typename: 'HistoricalData',
  variable: 'GAS.TOTAL',
  workspace: 'ABC',
});

cache.readFragment({
  id: identifiedId,
  fragment: apolloGQL`
    fragment MyHistorical on Historical {
      variable
      workspace
    }
  `,
});
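As a usage note, cache.identify accepts any object that carries __typename and the configured key fields, so in this case (assuming props holds variable and workspace as in the question) you could also write:
const identifiedId = cache.identify({ __typename: 'HistoricalData', ...props });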

How to solve "N+1" problem in Keystone.js

It seems that Keystone.js doesn't provide a solution to the "N+1" problem.
Maybe there are some plugins for that?
You can check https://www.keystonejs.com/guides/cache-hints/
Caching is a typical pattern for mitigating this problem.
You can try:
const app = new GraphQLApp({
  apollo: {
    cacheControl: {
      defaultMaxAge: 3600,
    },
  },
});
to cache all resolver results.
What version are you on? The current version of KeystoneJS (Keystone Next) is built on top of Prisma, which should be building fairly performant DB queries. If there's a specific GraphQL query you're performing that results in suboptimal SQL, it may represent a bug in either Keystone or Prisma code, in which case it'd be great if you could isolate the problem and log an issue.
If you're adding hooks, access control or virtual fields that query the DB, it is possible to encounter the N+1 problem, as those functions can be called for each item returned in a query. For example, this code, taken from the virtual-fields example, causes N+1 queries if used as written:
Post: list({
  fields: {
    // [... various fields ...]
    author: relationship({ ref: 'Author.posts', many: false }),
    // A virtual field which uses `item` and `context` to query data.
    authorName: virtual({
      field: schema.field({
        type: schema.String,
        async resolve(item, args, context) {
          const { author } = await context.lists.Post.findOne({
            where: { id: item.id },
            query: 'author { name }',
          });
          return author && author.name;
        },
      }),
    }),
  },
}),
Here, the resolver function for the authorName field will be called for each item loaded (assuming the field is queried). In these cases I'd suggest using something like the GraphQL DataLoader (or similar) on top of the Keystone CRUD API. If used correctly, DataLoader can combine multiple queries and resolve the N+1 behaviour.
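For instance, here is a rough sketch (not official Keystone guidance; the findMany filter syntax varies by Keystone version) of batching those per-item lookups with DataLoader:
const DataLoader = require('dataloader');

// Create one loader per request (e.g. when building the context) so its
// per-request cache is not shared between users.
const makeAuthorNameLoader = (context) =>
  new DataLoader(async (postIds) => {
    // One findMany for all requested posts instead of one findOne per post.
    // NOTE: the `id_in` filter name is an assumption; adjust it to your
    // Keystone version's filter syntax.
    const posts = await context.lists.Post.findMany({
      where: { id_in: postIds },
      query: 'id author { name }',
    });
    const byId = new Map(posts.map((p) => [p.id, p.author && p.author.name]));
    // DataLoader expects results in the same order as the requested keys.
    return postIds.map((id) => byId.get(id) || null);
  });

// The virtual field's resolver then becomes:
//   resolve: (item, args, context) => context.authorNameLoader.load(item.id)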

Drupal GraphQL logic for Resolver for a field with multiple values

This is my first Drupal headless project with GraphQL, and I am struggling with the logic behind the resolvers.
There is a content type "project" with a field "field_project_description". The field can store multiple values.
This is part of my schema:
type Project {
  id: Int!
  project_title: String!
  project_description: [ProjectDescription]
}

type ProjectDescription {
  value: String
}
And this is how one part of the corresponding resolver looks:
$registry->addFieldResolver('ProjectDescription', 'value',
  $builder->produce('property_path')
    ->map('type', $builder->fromValue('entity:node'))
    ->map('value', $builder->fromParent())
    ->map('path', $builder->fromValue('field_project_project_desc.value'))
);
But as far as I understand, there has to be another resolver like
$registry->addFieldResolver('Project', 'project_description', ...
and I can't figure out what this resolver has to look like.
Okay, I solved the problem myself. The answer is actually very simple.
You don't need a second resolver. One resolver with an additional PHP callback, which flattens the array items into plain strings, is enough. This is what the code looks like now.
The schema:
type Project {
  id: Int!
  project_title: String!
  project_description: [String]
}
And the resolver:
$registry->addFieldResolver('Project', 'project_description',
  $builder->compose(
    $builder->produce('property_path')
      ->map('type', $builder->fromValue('entity:node'))
      ->map('value', $builder->fromParent())
      ->map('path', $builder->fromValue('field_project_project_desc')),
    $builder->callback(function ($entity) {
      $list = [];
      foreach ($entity as $item) {
        array_push($list, $item['value']);
      }
      return $list;
    })
  )
);
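With this schema, a query like the following (the project root field name here is illustrative, not from the original post) now returns the descriptions as a flat list of strings, e.g. "project_description": ["First text", "Second text"]:
{
  project(id: 1) {
    project_title
    project_description
  }
}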
I hope this helps someone else with this.

Meteor inserting into a collection schema with array elements

Hi, I created a SimpleSchema for a Mongo collection which has a variable number of sub-documents called measurables. Unfortunately it's been a while since I've done this and I can't remember how to insert into this type of schema! Can someone help me out?
The schema is as follows:
const ExerciseTemplates = new Mongo.Collection('ExerciseTemplates');

const ExerciseTemplateSchema = new SimpleSchema({
  name: {
    type: String,
    label: 'name',
  },
  description: {
    type: String,
    label: 'description',
  },
  createdAt: {
    type: Date,
    label: 'date',
  },
  measurables: {
    type: Array,
    minCount: 1,
  },
  'measurables.$': Object,
  'measurables.$.name': String,
  'measurables.$.unit': String,
});
ExerciseTemplates.attachSchema(ExerciseTemplateSchema);
The method is:
Meteor.methods({
  addNewExerciseTemplate(name, description, measurables) {
    ExerciseTemplates.insert({
      name,
      description,
      createdAt: new Date(),
      measurables,
    });
  },
});
The data sent by my form for measurables is an array of objects.
The SimpleSchema docs seem to be out of date. If I use the example they show, with measurables: { type: [Object] } for an array of objects, I get an error that the type can't be an array and that I should set it to Array.
Any suggestions would be awesome!!
Many thanks in advance!
Edit:
The measurables variable contains the following data:
[{ name: "weight", unit: "kg" }]
With the schema above I get no error at all; it fails silently as if the insert were successful, but when I check the db via the CLI I have no collections. Am I doing something really stupid? When I create a new Meteor app, it creates a Mongo db for me, I assume; I'm not forgetting to actually create a db or something dumb?
Turns out I was being stupid. The schema I posted was correct and works exactly as intended. The problem was that I defined my schema and method in a file in my imports directory, outside both the client and server directories. The methods file was imported into the file with the form that calls the method, and was therefore available on the client, but it was never imported on the server.
I guess the method was being called on the client as a stub, so I saw the console.log firing, but the method was never called on the server and therefore never hit the db.
Good lesson for me regarding the new recommended file structure: always import server-side code in server/main.js! :D
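A minimal sketch of the fix (the path is hypothetical; import from wherever your methods file actually lives):
// server/main.js
// Importing the methods file on the server registers the Meteor methods
// there, so inserts run against the real database instead of only the
// client-side stub.
import '/imports/api/ExerciseTemplates/methods.js';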
Thanks for your help, thought I was going to go mad!

Relay mutation. FatQuery. Ask all fields in REQUIRED_CHILDREN

My question is: I have a mutation config with a REQUIRED_CHILDREN config containing a children array of queries. How can I get all possible fields from a payload object?
{
  type: 'REQUIRED_CHILDREN',
  children: [
    Relay.QL`
      fragment on MyPayload {
        me {
          id
          # ...other fields
        }
      }
    `,
  ],
}
So how can I ask for all possible fields on the me object? If I specify only fragment on MyPayload { me }, Relay still returns me { id }. I want Relay to return all the fields on the me object. Thanks.
You can't: your client code needs to specify all the fields you want to fetch explicitly. Those fields are then statically validated by the babel-relay-plugin, etc.
You probably don't want to be using REQUIRED_CHILDREN either, by the way. That's only useful for fetching fields that are only accessible in the onSuccess callback of the mutation, and that are therefore never written to the Relay store or made accessible to Relay containers.
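In other words, the fragment has to enumerate every field it needs; for example (field names other than id are hypothetical here):
Relay.QL`
  fragment on MyPayload {
    me {
      id
      name
      email
    }
  }
`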
