How to solve the "N+1" problem in Keystone.js - query-optimization

It seems that Keystone.js doesn't provide a solution to the "N+1" problem.
Maybe there are some plugins for that?

You can check https://www.keystonejs.com/guides/cache-hints/
This is a typical pattern for addressing this problem.
You can try:
const app = new GraphQLApp({
  apollo: {
    cacheControl: {
      defaultMaxAge: 3600,
    },
  },
});
to cache all resolver results.

What version are you on? The current version of KeystoneJS (Keystone Next) is built on top of Prisma, which should be building fairly performant DB queries. If there's a specific GraphQL query you're performing that results in suboptimal SQL, it may represent a bug in either Keystone or Prisma code, in which case it'd be great if you could isolate the problem and log an issue.
If you're adding hooks, access control or virtual fields that query the DB, it is possible to encounter the N+1 problem, as those functions can be called for each item returned in a query. For example, this code, taken from the virtual-fields example, causes N+1 queries if used as written:
Post: list({
  fields: {
    // [... various fields ...]
    author: relationship({ ref: 'Author.posts', many: false }),
    // A virtual field which uses `item` and `context` to query data.
    authorName: virtual({
      field: schema.field({
        type: schema.String,
        async resolve(item, args, context) {
          const { author } = await context.lists.Post.findOne({
            where: { id: item.id },
            query: 'author { name }',
          });
          return author && author.name;
        },
      }),
    }),
  },
}),
Here, the resolver function for the authorName field will be called for each item loaded (assuming the field is queried). In these cases I'd suggest using something like GraphQL DataLoader (or similar) on top of the Keystone CRUD API. If used correctly, dataloader can combine multiple queries and resolve the N+1 behaviour.
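For illustration, here is a rough sketch of that dataloader approach for the authorName field. The createAuthorNameLoader name is made up, the exact filter syntax (id_in vs. { id: { in: ... } }) depends on your Keystone version, and how you attach the loader to the request context depends on your setup, so treat all of that as assumptions:
const DataLoader = require('dataloader');

// Build one loader per request so every authorName resolved in the same
// GraphQL operation is batched into a single lookup.
const createAuthorNameLoader = context =>
  new DataLoader(async postIds => {
    // One findMany for all requested posts instead of one findOne per post.
    const posts = await context.lists.Post.findMany({
      where: { id_in: postIds }, // or { id: { in: postIds } } with the newer filter syntax
      query: 'id author { name }',
    });
    const nameById = new Map(posts.map(p => [p.id, p.author && p.author.name]));
    // DataLoader expects results in the same order as the input keys.
    return postIds.map(id => nameById.get(id) ?? null);
  });

// In the virtual field, the resolver then becomes a single loader call:
//   async resolve(item, args, context) {
//     return context.authorNameLoader.load(item.id);
//   }
// where context.authorNameLoader = createAuthorNameLoader(context) is created once per request.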

Related

Cannot return documents based off a sorted index using Fauna DB

I'm bumbling my way through adding a back-end to my site and have decided to get acquainted with GraphQL. I may be structuring things totally the wrong way; however, following some tutorials, I have a React front-end (hosted on Vercel), so I have created an api folder in my app to make use of Vercel's serverless functions. I'm using Apollo Server and I decided to go with Fauna as my database.
I've successfully been able to return an entire collection via my API. Now I wish to be able to return the collection sorted by my id field.
To do this I created an index which looks like this:
{
  name: "sort_by_id",
  unique: false,
  serialized: true,
  source: "my_first_collection",
  values: [
    {
      field: ["data", "id"]
    },
    {
      field: ["ref"]
    }
  ]
}
I was then able to call this via my API and get back an array, which simply contained the ID + ref rather than the associated documents. I also could only console.log it; I assume this is because the resolver was expecting to be passed an array of objects with the same fields as my typedefs. I understand I need to use the ref in order to look up the documents, and here is where I'm stuck. An index record looks as follows:
[1, Ref(Collection("my_first_collection"), "352434683448919125")]
In my resolvers.js script, I am attempting to receive the documents of my sorted index list. I've tried this:
async users() {
  const response = await client.query(
    q.Map(
      q.Paginate(
        q.Match(
          q.Index('sort_by_id')
        )
      ),
      q.Lambda((ref) => q.Get(ref))
    )
  )
  const res = response.data.map(item => item.data);
  return [...res]
}
I'm unsure if the problem is with how I've structured my index, or if it is with my code, I'd appreciate any advice.
It looks like you also asked this question on the Fauna discourse forums and got an answer there: https://forums.fauna.com/t/unable-to-return-a-list-of-documents-via-an-index/3511/2
Your index returns a tuple (just an array in JavaScript) of the data.id field and the ref. You confirmed that with your example result:
[
  /* data.id */ 1,
  /* ref */ Ref(Collection("my_first_collection"), "352434683448919125")
]
When you map over those results, you need to Get the Ref. Your query uses q.Lambda((ref) => q.Get(ref)), which passes the whole tuple to Get.
Instead, use:
q.Lambda(["id", "ref"], q.Get(q.Var("ref")))
// or with JS arrow function
q.Lambda((id, ref) => q.Get(ref))
or this will work, too
q.Lambda("index_entry", q.Get(q.Select(1, q.Var("index_entry"))))
// or with JS arrow function
q.Lambda((index_entry) => q.Get(q.Select(1, index_entry)))
The point is, only pass the Ref to the Get function.
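Putting it together, the resolver from the question could then look roughly like this (a sketch reusing the names from the question; only the Lambda changes):
async users() {
  const response = await client.query(
    q.Map(
      q.Paginate(q.Match(q.Index('sort_by_id'))),
      // Each index entry is [data.id, ref]; pass only the ref to Get.
      q.Lambda(['id', 'ref'], q.Get(q.Var('ref')))
    )
  );
  // response.data now holds full documents; unwrap their data fields for the resolver.
  return response.data.map(doc => doc.data);
}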

Apollo Client readFragment with custom id (keyFields)

For reference, I'm using "@apollo/client": "^3.5.5".
I've defined my typePolicies like so, as suggested in the docs:
HistoricalData: {
  keyFields: ["variable", "workspace"],
  fields: { ... }
}
and when my cache is built, I am expecting my cache ID to be like
<__typename>:<id>:<id>
i.e. HistoricalData:${props.variable}:${props.workspace}
but instead, when I look in the Apollo cache, it's been created using the keyField names and the values in an object, such as
HistoricalData:{"variable":"GAS.TOTAL","workspace":"ABC"}
instead of
HistoricalData:GAS.TOTAL:ABC
so when I try to readFragment it returns null
client.readFragment({
  id: `HistoricalData:${props.variable}:${props.workspace}`,
  fragment: apolloGQL`fragment MyHistorical on Historical {
    variable
    workspace
  }`
})
It does actually return a value from the cache if I construct my id in the structure that exists in the cache and call readFragment with that.
Has anyone else noticed that Apollo Client is not creating the cache IDs in the structure described in the docs?
After some research I came upon the correct way to handle this case. I know that you have already moved on, but just in case anyone else has the same problem in the future, here goes:
As described in the documentation for customizing the cache ID, the cache ID will be a stringified object, as you pointed out. It's not quite explicit in the documentation, but at this point in time it provides this nested example of a cache ID:
Book:{"title":"Fahrenheit 451","author":{"name":"Ray Bradbury"}}
But as users we don't have to concern ourselves with the format of this ID, because there's a helper for that, called cache.identify.
For your specific case, you could use something like this:
const identifiedId = cache.identify({
  __typename: 'HistoricalData',
  variable: 'GAS.TOTAL',
  workspace: 'ABC'
});

cache.readFragment({
  id: identifiedId,
  fragment: apolloGQL`fragment MyHistorical on Historical {
    variable
    workspace
  }`
});

Structure: How to represent a search input, search query, and search results using mobx-state-tree?

I've got an app using mobx-state-tree that currently has a few simple stores:
Article represents an article, either sourced through a 3rd party API or written in-house
ArticleStore holds references to articles: { articles: {}, isLoading: bool }
Simple scenario
This setup works well for simple use-cases, such as fetching articles based on ID. E.g.
User navigates to /article/{articleUri}
articleStoreInstance.fetch([articleUri]) returns the article in question
The ID is picked up in the render function, and the article is rendered using articleStoreInstance.articles.get(articleUri)
Complex scenario
For a more complex scenario, if I wanted to fetch a set of articles based on a complex query, e.g. { offset: 100, limit: 100, freeTextQuery: 'Trump' }, should I then:
Have a global SearchResult store that simply links to the articles that the user has searched for
Instantiate a one-time SearchResult store that I pass around for as long as I need it?
Keep queries and general UI state out of stores altogether?
I should add that I'd like to keep articles in the stores between page-loads to avoid re-fetching the same content over and over.
Is there a somewhat standardized way of addressing this problem? Any examples to look at?
What you need might be a Search store which keeps track of the following information:
Query params (offset, limit, etc.)
Query results (results of the last search)
(Optional) Query state (isLoading)
Then, to avoid storing articles in two places, the query results should not use the Article model but a reference to the Article model. Any time you query, the actual result will be saved in the existing ArticleStore, and Search only holds references:
import { types, getParent, flow } from 'mobx-state-tree'

const Search = types.model({
  params: types.frozen({}), // your own params info (frozen used here as a simple placeholder)
  results: types.array(types.reference(Article)) // references to articles, not copies
}).views(self => ({
  get parent() {
    return getParent(self) // get the root node to reach ArticleStore
  }
})).actions(self => ({
  search: flow(function* (params) {
    self.params = params // save query params
    const result = yield searchByQuery(params) // your query here
    self.parent.articleStore.saveArticles(result) // save the result to ArticleStore
    self.results = getArticleIds(result) // extract ids here for the references
  })
}))
Hope it's what you are looking for.
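For completeness, here is a rough sketch of how this could hang together at the root. The Article/ArticleStore shapes, the saveArticles action and the searchByQuery/getArticleIds helpers are assumptions carried over from the question and the sketch above, not a prescribed layout:
const Article = types.model('Article', {
  id: types.identifier,
  title: types.string,
})

const ArticleStore = types.model('ArticleStore', {
  articles: types.map(Article),
  isLoading: false,
}).actions(self => ({
  saveArticles(articles) {
    // put() upserts by id, so articles stay cached between page loads
    articles.forEach(article => self.articles.put(article))
  },
}))

const RootStore = types.model('RootStore', {
  articleStore: types.optional(ArticleStore, {}),
  search: types.optional(Search, {}),
})

// Usage:
// const root = RootStore.create({})
// await root.search.search({ offset: 100, limit: 100, freeTextQuery: 'Trump' })
// root.search.results.forEach(article => console.log(article.title))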

Issue with .populate() on array of arrays in Mongoose Model [duplicate]

In Mongoose, I can use a query populate to populate additional fields after a query. I can also populate multiple paths, such as
Person.find({})
  .populate('books movie', 'title pages director')
  .exec()
However, this would generate a lookup on books gathering the fields for title, pages and director, and also a lookup on movie gathering the fields for title, pages and director. What I want is to get title and pages from books only, and director from movie. I could do something like this:
Person.find({})
  .populate('books', 'title pages')
  .populate('movie', 'director')
  .exec()
which gives me the expected result and queries.
But is there any way to get the behavior of the second snippet using a similar "single line" syntax like the first snippet? The reason is that I want to programmatically determine the arguments for the populate function and feed them in. I cannot do that with multiple populate calls.
After looking into the source code of Mongoose, I solved this with:
var populateQuery = [
  { path: 'books', select: 'title pages' },
  { path: 'movie', select: 'director' }
];

Person.find({})
  .populate(populateQuery)
  .exec()
You can also do something like the below:
{ path: 'user', select: ['key1', 'key2'] }
You achieve that by simply passing an object or an array of objects to the populate() method.
const query = [
  {
    path: 'books',
    select: 'title pages'
  },
  {
    path: 'movie',
    select: 'director'
  }
];

const result = await Person.find().populate(query).lean();
Consider that the lean() method is optional; it just returns raw JSON rather than Mongoose documents and makes code execution a little bit faster! Don't forget to make your function (callback) async!
This is how it's done based on the Mongoose JS documentation http://mongoosejs.com/docs/populate.html
Let's say you have a BookCollection schema which contains users and books
In order to perform a query and get all the BookCollections with their related users and books, you would do this:
models.BookCollection
  .find({})
  .populate('user')
  .populate('books')
  .lean()
  .exec(function (err, bookcollection) {
    if (err) return console.error(err);
    try {
      mongoose.connection.close();
      res.render('viewbookcollection', { content: bookcollection });
    } catch (e) {
      console.log("error getting bookcollection: " + e);
    }
  });
// Your schema must include the populated paths
const createdData = await Person.create(dataYouWant);
await createdData.populate([
  { path: 'books', select: 'title pages' },
  { path: 'movie', select: 'director' }
]);
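Since the original goal was to determine the populate arguments programmatically, here is a small sketch of building that array from a plain config object before handing it to populate() (the fieldConfig shape is just an illustrative assumption):
// Map a config object of { path: [fields] } onto the array populate() accepts.
const fieldConfig = {
  books: ['title', 'pages'],
  movie: ['director'],
};

const populateQuery = Object.entries(fieldConfig).map(([path, fields]) => ({
  path,
  select: fields.join(' '),
}));

const people = await Person.find({}).populate(populateQuery).lean();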

Meteor inserting into a collection schema with array elements

Hi, I created a SimpleSchema for a Mongo collection which has a variable number of sub-documents called measurables. Unfortunately it's been a while since I've done this and I can't remember how to insert into this type of schema! Can someone help me out?
The schema is as follows:
const ExerciseTemplates = new Mongo.Collection('ExerciseTemplates');

const ExerciseTemplateSchema = new SimpleSchema({
  name: {
    type: String,
    label: 'name',
  },
  description: {
    type: String,
    label: 'description',
  },
  createdAt: {
    type: Date,
    label: 'date',
  },
  measurables: {
    type: Array,
    minCount: 1,
  },
  'measurables.$': Object,
  'measurables.$.name': String,
  'measurables.$.unit': String,
});
ExerciseTemplates.attachSchema(ExerciseTemplateSchema);
The method is:
Meteor.methods({
  addNewExerciseTemplate(name, description, measurables) {
    ExerciseTemplates.insert({
      name,
      description,
      createdAt: new Date(),
      measurables,
    });
  },
});
The data sent by my form for measurables is an array of objects.
The SimpleSchema docs seem to be out of date. If I use the example they show, with measurables: { type: [Object] } for an array of objects, I get an error that the type can't be an array and that I should set it to Array.
Any suggestions would be awesome!!
Many thanks in advance!
edit:
The measurable variable contains the following data:
[{ name: 'weight', unit: 'kg' }]
With the schema above I get no error at all; it is silent, as if the insert was successful, but when I check the db via the CLI I have no collections. Am I doing something really stupid? When I create a new Meteor app, it creates a Mongo db for me, I assume - I'm not forgetting to actually create a db or something dumb?
Turns out I was being stupid. The schema I posted was correct and works exactly as intended. The problem was that I defined my schema and method in a file in my imports directory, outside both the client and server directories. This methods file was imported into the file with the form that calls the method, and was therefore available on the client, but it was never imported on the server.
I guess the method was being called on the client as a stub, so I saw the console.log firing, but the method was not being called on the server and therefore never hit the db.
Good lesson for me regarding the new recommended file structure. Always import server-side code in server/main.js!!! :D
Thanks for your help, thought I was going to go mad!
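For anyone who hits the same symptom later, a minimal sketch of the fix described above (the file paths are assumptions based on the standard Meteor imports layout):
// imports/api/exerciseTemplates.js
//   defines ExerciseTemplates, attaches the SimpleSchema, and declares the
//   addNewExerciseTemplate method shown in the question.

// server/main.js
import '../imports/api/exerciseTemplates.js'; // without this, only the client stub runs

// On the client, e.g. in the form submit handler:
Meteor.call('addNewExerciseTemplate', name, description, [{ name: 'weight', unit: 'kg' }]);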
