Using Apollo Client 3, I need to remove all the results of a query, regardless of the arguments. I tried this:
cache.evict({
  id: "ROOT_QUERY",
  fieldName: "countries"
});
cache.gc();
However, nothing is removed from the cache; __APOLLO_CLIENT__.cache.data still contains all the results. On the other hand, I can remove a single Country object.
You can see it in this sandbox.
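For reference, evicting a single object looks like this (a minimal sketch; the Country typename and its code field are assumptions based on the countries query):
cache.evict({
  // identify() builds the normalized cache id, e.g. "Country:US"
  id: cache.identify({ __typename: 'Country', code: 'US' })
});
cache.gc();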
Simply adding the broadcast: false parameter solved the problem:
cache.evict({
  id: 'ROOT_QUERY',
  fieldName: 'countries',
  broadcast: false,
});
cache.gc();
It seems that Keystone.js doesn't provide a solution to the "N+1" problem. Maybe there are some plugins for that?
You can check https://www.keystonejs.com/guides/cache-hints/
This is a typical pattern for solving this problem.
You can try:
const app = new GraphQLApp({
  apollo: {
    cacheControl: {
      defaultMaxAge: 3600,
    },
  },
});
to cache all resolver results.
What version are you on? The current version of KeystoneJS (Keystone Next) is built on top of Prisma, which should be building fairly performant DB queries. If there's a specific GraphQL query you're performing that results in suboptimal SQL, it may represent a bug in either Keystone or Prisma code, in which case it'd be great if you could isolate the problem and log an issue.
If you're adding hooks, access control or virtual fields that query the DB, it is possible to encounter the N+1 problem, as those functions can be called for each item returned in a query. For example, this code, taken from the virtual-fields example, causes N+1 queries if used as written:
Post: list({
  fields: {
    // [... various fields ...]
    author: relationship({ ref: 'Author.posts', many: false }),
    // A virtual field which uses `item` and `context` to query data.
    authorName: virtual({
      field: schema.field({
        type: schema.String,
        async resolve(item, args, context) {
          const { author } = await context.lists.Post.findOne({
            where: { id: item.id },
            query: 'author { name }',
          });
          return author && author.name;
        },
      }),
    }),
  },
}),
Here, the resolver function for the authorName field will be called for each item loaded (assuming the field is queried). In these cases I'd suggest using something like GraphQL DataLoader (or similar) over the top of the Keystone CRUD API. If used correctly, DataLoader can combine multiple queries and resolve the N+1 behaviour; a sketch of that idea follows.
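For illustration, a minimal sketch using the dataloader package; findMany and the id_in filter shape are assumptions and may differ between Keystone versions:
const DataLoader = require('dataloader');

// Create one loader per request so it can capture that request's `context`.
// The batch function receives all post ids requested in one tick and must
// return results in the same order as the keys.
const makeAuthorNameLoader = (context) =>
  new DataLoader(async (postIds) => {
    // One query for the whole batch instead of one findOne per item.
    const posts = await context.lists.Post.findMany({
      where: { id_in: postIds }, // assumed filter syntax; adjust to your version
      query: 'id author { name }',
    });
    const byId = new Map(posts.map((p) => [p.id, p]));
    return postIds.map((id) => {
      const post = byId.get(id);
      return post && post.author ? post.author.name : null;
    });
  });

// In the virtual field resolver, something like:
//   const loader = context.authorNameLoader ||
//     (context.authorNameLoader = makeAuthorNameLoader(context));
//   return loader.load(item.id);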
Hi, I created a SimpleSchema for a Mongo collection which has a variable number of sub-documents called measurables. Unfortunately it's been a while since I've done this and I can't remember how to insert into this type of schema! Can someone help me out?
The schema is as follows:
const ExerciseTemplates = new Mongo.Collection('ExerciseTemplates');
const ExerciseTemplateSchema = new SimpleSchema({
  name: {
    type: String,
    label: 'name',
  },
  description: {
    type: String,
    label: 'description',
  },
  createdAt: {
    type: Date,
    label: 'date',
  },
  measurables: {
    type: Array,
    minCount: 1,
  },
  'measurables.$': Object,
  'measurables.$.name': String,
  'measurables.$.unit': String,
});
ExerciseTemplates.attachSchema(ExerciseTemplateSchema);
The method is:
Meteor.methods({
  addNewExerciseTemplate(name, description, measurables) {
    ExerciseTemplates.insert({
      name,
      description,
      createdAt: new Date(),
      measurables,
    });
  },
});
The data sent by my form for measurables is an array of objects.
The SimpleSchema docs seem to be out of date. If I use the example they show, with measurables: { type: [Object] } for an array of objects, I get an error that the type can't be an array and that I should set it to Array.
Any suggestions would be awesome!!
Many thanks in advance!
edit:
The measurable variable contains the following data:
[{ name: 'weight', unit: 'kg' }]
With the schema above I get no error at all; it is silent, as if it were successful, but when I check the db via the CLI I have no collections. Am I doing something really stupid? When I create a new Meteor app, it creates a Mongo db for me, I assume; I'm not forgetting to actually create a db or something dumb?
Turns out I was being stupid. The schema I posted was correct and works exactly as intended. The problem was that I defined my schema and method in a file in my imports directory, outside both the client and server directories. This methods file was imported into the file with the form that calls the method, and was therefore available on the client, but it was never imported on the server.
I guess the method was being called on the client as a stub, so I saw the console.log firing, but it was not being called on the server and therefore never hit the db.
Good lesson for me regarding the new recommended file structure. Always import server-side code in server/main.js!!! :D
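In other words (a minimal sketch; the methods file path is hypothetical):
// imports/api/exerciseTemplates.js — the schema and Meteor.methods live here

// client/main.js — makes the method stub available on the client
import '/imports/api/exerciseTemplates.js';

// server/main.js — without this line the method never runs on the server
import '/imports/api/exerciseTemplates.js';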
Thanks for your help, thought I was going to go mad!
ExtJS 6.2 does not expect the same response from the server to confirm row updates as ExtJS 4 did.
I have a table with a string id:
Ext.define('App.Product', {
  extend: 'Ext.data.Model',
  fields: [
    {name: 'productid', type: 'string'},
    {name: 'ord', type: 'int'},
    (...)
  ],
  idProperty: 'nrproduit'
})
Upon saving the changes, the ExtJs client sends the modified data to the server:
[{"ord":1,"productid":"SG30301"},{"ord":3,"productid":"SG30100"}]
In ExtJs 4.2, it expected the server to send the full data of the two products back, like this:
{
  "success": true,
  "data": [{
    "nrproduit": "SG30100",
    "ord": 3,
    "author": "...",
    "editor": "...",
    (...)
  },{
    "nrproduit": "SG30301",
    "ord": 3,
    "author": "...",
    "editor": "...",
    (...)
  }]
}
In ExtJS 6.2, this no longer works. I get the error:
Uncaught Error: Duplicate newKey "SG30100" for item with oldKey "SG30301"
Apparently, the client does not take the idProperty into account, but seems to expect the rows in the response to be in the same order as in the request.
Is there a way to force the client to take into account the ids sent back from the server? Or is it necessary to change the server code? Is there documentation somewhere on what exactly changed between ExtJS 4.2 and 6.2 with respect to data synchronization between client and server, at this level of detail?
ExtJS considers the order because ids can change, e.g. during insert operations (if the id is generated server-side). To allow for that, in the general case, ExtJS expects to receive the results from the server in the same order in which the records were sent.
However, there's more to it. Under certain circumstances, it uses the id, not the order. You can read Operation.doProcess to find how ExtJS does what it does, and possibly override it if you require a different behaviour.
Edit: It uses the id when the model has the clientIdProperty property; otherwise it uses the order. So it is enough to add it like this:
Ext.define('App.Product', {
  extend: 'Ext.data.Model',
  fields: [
    {name: 'productid', type: 'string'},
    {name: 'ord', type: 'int'},
    (...)
  ],
  idProperty: 'nrproduit',
  clientIdProperty: 'nrproduit'
})
An alternative solution, if you don't want to change the server-side code to handle the clientIdProperty property, is to disable batch mode (with batchActions: false) so that all your requests are handled one by one.
This prevents the error "extjs Ext.util.Collection.updateKey(): Duplicate newKey for item with oldKey". With this approach you will lose some efficiency, though.
You have to add this to your model:
...
proxy: {
  type: 'direct',
  extraParams: {
    defaultTable: '...',
    defaultSortColumn: '...',
    defaultSordDirection: 'ASC'
  },
  batchActions: false, // avoid needing clientIdProperty
  api: {
    read: 'Server.Util.read',
    create: 'Server.Util.create',
    update: 'Server.Util.update',
    destroy: 'Server.Util.destroy'
  },
  reader: {
    // (...)
  }
}
...
Just adding clientIdProperty to the Model definition solved the issue.
Some more info: the same problem has been asked on the Sencha forum, but the solution is not mentioned there. Here is the link to that discussion:
https://www.sencha.com/forum/showthread.php?301898-Duplicate-newKey-quot-x-quot-for-item-with-oldKey-quot-xx-quot
I need to use mongoose with dbref but I don't know which design is better for me.
First design:
var user = mongoose.Schema({
  name: 'string'
});

var eventSchema = mongoose.Schema({
  title: 'string',
  propietary_id: 'String',
  comments: [{
    text: 'string',
    user: { type: mongoose.Schema.Types.ObjectId, ref: 'users' },
    createdAt: { type: Date, default: Date.now }
  }]
});
Second design:
var user = mongoose.Schema({
  name: 'string'
});

var eventSchema = mongoose.Schema({
  title: 'string',
  propietary_id: 'String'
});

var commentSchema = mongoose.Schema({
  text: 'string',
  event_id: { type: mongoose.Schema.Types.ObjectId, ref: 'events' },
  user_id: { type: mongoose.Schema.Types.ObjectId, ref: 'users' },
  createdAt: { type: Date, default: Date.now }
});
How does it work? On my website there is an event list; if you want to see comments you have to click an event, and then AngularJS fetches all comments (text, user name and user photo) for the selected event.
There are pros and cons to both solutions, and the best one for you depends on your usage. Remember that you can produce exactly the same API regardless of your design; it only comes down to how quickly and easily you can maintain the backend. First, some thoughts on both designs:
First design:
First, a comment: I wouldn't save comments as a nested document but as an array instead; otherwise you are limited to one comment per event. Use this schema instead:
comments: [
  {
    text: { type: String },
    user: { type: mongoose.Schema.Types.ObjectId, ref: 'users' },
    createdAt: { type: Date, default: Date.now },
  }
]
Pros:
No need for multiple collections
You will have the comments returned with the event in the get request which will mean less requests to your backend
No need to map comments to events
Cons:
You will have the comments returned to you with the event, even if you don't want them displayed
If there are a lot of comments to an event, the request response will be pretty large
If you want to remove or edit comments in your array it will be trickier (not impossible though; see the sketch after this list)
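For example (a minimal sketch, assuming an Event model for eventSchema and Mongoose's default _id on array sub-documents):
// Remove one comment from the embedded array:
Event.updateOne(
  { _id: eventId },
  { $pull: { comments: { _id: commentId } } }
);

// Edit one comment's text via the positional $ operator:
Event.updateOne(
  { _id: eventId, 'comments._id': commentId },
  { $set: { 'comments.$.text': 'updated text' } }
);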
Second design:
Pros:
You will have the events and comments separated which means leaner objects
You can much easier extract one comment for edit or delete
You can more easily get events without comments and then request comments at another point
Cons:
You will always need to map comments to events, which will mean more code
Two collections will mean two requests usually
Maintenance of another collection
Verdict:
All the pros and cons are judged by how much extra code you need to write. Of course, you can have comments returned with your events in the second design as well, but then you will have to extract the comments first and return them with the event object, which will mean extra code to maintain.
I think the second design would work better for you. I'm judging this by your comment that you will only need comments if the user clicks on an event. I would then request the events first and make another request for comments as soon as the user clicks on an event. That said, having the comments always returned with the events (the first design) would make the UI snappier, as the comments would already have been loaded.
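The follow-up request in the second design could look like this (a minimal sketch; the Comment model name and the user's photo field are assumptions):
// Fetch all comments for the clicked event, joining in the user's
// name and photo via populate():
Comment.find({ event_id: eventId })
  .populate('user_id', 'name photo')
  .sort({ createdAt: 1 })
  .exec(function (err, comments) {
    // send `comments` to the AngularJS client
  });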
It all depends in the end what is more important for you to do with the data. Please let me know if you have any questions on any of the points.
I'm performing a remote filter on a store.
My code looks something like this:
myStore.load({
  limit: 8,
  foo: 'foo is never sent',
  filters: [{'property': 'some property', 'value': 30, 'comparison': 'lt', 'field': 'age'}]
});
It ends up sending a GET request to the server with the parameters below (from Chrome/Firebug):
_dc:1327757119914
page:1
start:0
limit:8
filter:[{"property":"some property","value":30}]
requested URL:
myServerPage.php?_dc=1327757119914&page=1&start=0&limit=8&filter=%5B%7B%22property%22%3A%22some%20property%22%2C%22value%22%3A30%7D%5D
The 'foo' is missing and, more importantly, in the 'filter' object that was passed, only 'property' and 'value' were sent. (I think these two are predefined; the filter config does not accept other keys and values.)
How can I send my own parameters to the server using load(), especially in the 'filters' part?
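One possible approach (a sketch, assuming ExtJS 4's Ext.data.proxy.Server, whose default encodeFilters serializes only each filter's property and value) is to override encodeFilters in a custom proxy so your extra keys survive:
Ext.define('MyApp.proxy.FilterAjax', {
  extend: 'Ext.data.proxy.Ajax',
  alias: 'proxy.filterajax',

  // The default encodeFilters keeps only 'property' and 'value'; this
  // version also carries the custom 'comparison' and 'field' keys.
  encodeFilters: function (filters) {
    var out = [];
    Ext.Array.each(filters, function (f) {
      out.push({
        property: f.property,
        value: f.value,
        comparison: f.comparison,
        field: f.field
      });
    });
    return Ext.encode(out);
  }
});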
Another way:
myStore.getProxy().extraParams = { search: 'something' };

myStore.load({
  params: {
    foo: 'foo'
  }
});