I have an object type entity that can be related to another object type something through a list entityInstances. The something is also in a list within somethingContainer:
query somethingContainer {
  id
  somethings {
    id
    entityInstances {
      id
      entity {
        id
      }
    }
  }
}
The entities can appear in several entityInstances, but the instances are unique to their somethings, and so are the somethings to their containers.
The issue comes when deleting an entity: each of the queries that contain the entity (nested within the 3 layers) needs to be updated, but it's not obvious which queries those would be.
I've looked into using cache.modify, but I can't see how to use it without knowing the ids of the relevant entityInstances (or those of the relevant somethings).
Previously I used a little soft-delete hack where I used writeFragment to set the id of the relevant entity to null and filtered these out in components. But this never felt like a good solution, and it's even worse after the upgrade to Apollo 3 (we log an error whenever a query or mutation has no id, which after the upgrade also fires on writeFragment).
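For reference, the hack looks roughly like this (a sketch against Apollo Client 3's writeFragment; Entity is a stand-in type name, deletedId stands for the id being removed, and cache is the InMemoryCache instance, e.g. client.cache):

import { gql } from '@apollo/client';

// Soft delete: null out the id on the cached entity so components
// can filter it away later.
function softDeleteEntity(cache, deletedId) {
  cache.writeFragment({
    id: cache.identify({ __typename: 'Entity', id: deletedId }),
    fragment: gql`
      fragment SoftDeletedEntity on Entity {
        id
      }
    `,
    data: { id: null },
  });
}

// Components then filter, e.g.:
// entityInstances.filter((ei) => ei.entity && ei.entity.id !== null)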
Another option is to do something like refetching all somethingContainer queries, but that can get expensive.
Any ideas?
I'm managing a company website where we have to display our products. However, we do not want to handle the admin edit for this CPT, nor offer access to the edit form; we do have to read some product data from the admin edit page. Everything has to be created or updated automatically via our CRM platform.
For this matter, I already set up a CPT (wprc_pr) and registered 6 custom hierarchical taxonomies: 1 generic for the types (wprc_pr_type) and 5 targeting each available type: wprc_pr_rb, wprc_pr_sp, wprc_pr_pe, wprc_pr_ce and wprc_pr_pr. All those taxonomies are required for filtering purposes (this was the old way of working, maybe not the best; I'm open to suggestions here). We end up with archive page links looking like site.tld/generic/specific-parent/specific-child/, which is what is desired here.
I have an internal tool, Node.js based, to batch-create products from our CRM. The job is simple: get all products not yet pushed to the website, format a new post, push it to the WP REST API, wait for the response, update the CRM data accordingly, and proceed to the next product. It has handled about 1,600 products in trials so far, each one fine.
The issue for now is that in order to assign the correct terms to the new post, I have to compute, for each product, the generic type and the specific child types.
I handled that by creating 6 files, one for each taxonomy. Each file is basically a giant JS object with the id from the CRM as a key and the term id as a value. My script handles the category assignment like this:
const mapped = jsTaxonomyMapper[crm_id1] && jsTaxonomyMapper[crm_id1][crm_id2];
wp_taxonomy = mapped ? [mapped] : []; // [] if not found
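For concreteness, one of those mapper files might look something like this (the ids are hypothetical):

// One mapper module per taxonomy: CRM ids in, WP term id out.
// The two-level nesting mirrors the crm_id1/crm_id2 lookup above.
module.exports = {
  101: { 1: 12, 2: 15 }, // crm_id1 => { crm_id2 => WP term id }
  102: { 1: 23 },
};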
I have to say it is working pretty well, and I could stop here. But I will have to move that computation into the wp_after_insert_post hook, in order to reassign the post to the desired category on update if something changed in the CRM.
Not particularly difficult, but if I happen to add a category in the CRM, I'll have to manually edit my mappers to add the new terms, and believe me, that's a hassle.
I'm not expecting a full solution here, just a way to approach the problem. Maybe a way to compute those mappers and store their values in the options table, or a mapper class; I don't know.
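One way to make that concrete, sketched below under two assumptions the question doesn't guarantee: each taxonomy is registered with show_in_rest (so its terms are exposed at /wp-json/wp/v2/{taxonomy}), and each term's slug encodes the CRM id it maps to (e.g. crm-123). The mappers are then rebuilt from the API instead of being hand-edited:

// Node.js 18+ (global fetch). Builds { taxonomy: { crmId: termId } }.
const TAXONOMIES = [
  'wprc_pr_type', 'wprc_pr_rb', 'wprc_pr_sp',
  'wprc_pr_pe', 'wprc_pr_ce', 'wprc_pr_pr',
];

async function buildMappers(baseUrl) {
  const mappers = {};
  for (const tax of TAXONOMIES) {
    mappers[tax] = {};
    for (let page = 1; ; page++) {
      const res = await fetch(
        `${baseUrl}/wp-json/wp/v2/${tax}?per_page=100&page=${page}`
      );
      const batch = await res.json();
      if (!Array.isArray(batch) || batch.length === 0) break;
      for (const term of batch) {
        // Assumed convention: slug "crm-<id>" carries the CRM id.
        const m = /^crm-(\d+)$/.exec(term.slug);
        if (m) mappers[tax][m[1]] = term.id;
      }
      if (batch.length < 100) break; // last page
    }
  }
  return mappers;
}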
Additional information:
Data from the CRM comes as integers (ids corresponding to a label) and the mappers today consist of 6 arrays (nested or not), about 600 total entries.
If you have something for me, or even suggestions to simplify the process, I'll go with it.
Thanks.
EDIT:
Went with another approach, see comment below.
This is more of an open question, but hopefully it won't get deleted.
I am using React and Apollo, although the question is more general.
Let's say I have 3 distinct views in my app, all using similar (but not the same) data.
All of them use separate queries, but each query uses a common operation with slightly different data returned.
Let's say I have a mutation somewhere that adds something to the data (think of a list of items and a new item being added).
Let's say after the mutation I want to update the cache to reflect that change. I am using readQuery/writeQuery to do the update.
With this setup I need to update 3 queries - this becomes a maintenance nightmare.
After some reading I figured I was doing this wrong, so I have now created a single query - now I only need to update that single query after the mutation, and all of my views are updated automatically.
However, the problem is that this query now has to download all the data that all 3 views combined need - that feels very inefficient, because some of the views will get data they'll never use.
Is there a better way to do it?
Please note that read/writeFragment won't work because they won't update the underlying queries - check this answer for example: https://stackoverflow.com/a/50349323/2874705
Please let me know in comment if you need a more concrete example.
All in all, I think in this setup I would just be better off with global state handling, avoiding the Apollo cache altogether - however, I feel cheated because Apollo promised to solve the state problems :)
EDIT
Here's a concrete example:
Let's say our GraphQL schema is defined like this:
type Post {
  id: ID!
  title: String!
  body: String
  published: Boolean!
}

type Query {
  posts(published: Boolean): [Post!]!
}

type Mutation {
  createDraft(body: String!, title: String!): Post
  publish(id: ID!): Post
}
Now, we create 3 queries and 2 mutations on the client
query PostTitles {
  posts {
    id
    title
  }
}

query Posts {
  posts {
    id
    title
    body
    published
  }
}

query PublishedPosts {
  posts(published: true) {
    id
    title
    body
    published
  }
}

mutation CreateDraftPost($body: String!, $title: String!) {
  createDraft(body: $body, title: $title) {
    id
    title
    body
    published
  }
}

mutation PublishPost($id: ID!) {
  publish(id: $id) {
    id
    published
  }
}
Just to note: createDraft creates a post with published defaulting to false.
How can I use either of those mutations to create or publish a post and have all 3 cached queries updated, without using refetchQueries or manually updating each query?
I think the real problem is that each of those queries is stored separately in the Apollo in-memory cache.
From my experience, here's how it should go.
In the case of the CreateDraftPost mutation:
You call the mutation and also pass an update function. In this update function, you modify the cache of the root query posts by creating a new Post fragment and adding that fragment into posts. See this: https://www.apollographql.com/docs/react/data/mutations/#making-all-other-cache-updates
Since PostTitles and Posts both rely on the root query posts (they just differ in the queried fields), and the new Post fragment you've just added into posts has sufficient fields, PostTitles and Posts should automatically reflect the change.
Since CreateDraftPost always creates a draft with published defaulting to false, you don't need to update anything related to the PublishedPosts query.
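That update function might look like this (a sketch, assuming Apollo Client 3's cache.modify; the fragment write returns a reference that is appended to the unfiltered posts list, and skipping the filtered list is what keeps PublishedPosts untouched):

import { gql, useMutation } from '@apollo/client';

const CREATE_DRAFT_POST = gql`
  mutation CreateDraftPost($body: String!, $title: String!) {
    createDraft(body: $body, title: $title) {
      id
      title
      body
      published
    }
  }
`;

function useCreateDraftPost() {
  return useMutation(CREATE_DRAFT_POST, {
    update(cache, { data: { createDraft } }) {
      cache.modify({
        fields: {
          posts(existingRefs = [], { storeFieldName }) {
            // Each argument combination is a separate cache entry;
            // leave posts({"published":true}) alone, since a new
            // draft is unpublished.
            if (storeFieldName !== 'posts') return existingRefs;
            const newPostRef = cache.writeFragment({
              data: createDraft,
              fragment: gql`
                fragment NewPost on Post {
                  id
                  title
                  body
                  published
                }
              `,
            });
            return [...existingRefs, newPostRef];
          },
        },
      });
    },
  });
}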
In the case of the PublishPost mutation:
You call the mutation, and the returned result is a Post with updated fields (id, published). Through Apollo's cache normalization, this Post (identified by id) will be updated in any query it is involved in. See this: https://www.apollographql.com/docs/react/data/mutations/#updating-a-single-existing-entity
However, you need to manually update the PublishedPosts query. Do this by providing an update function in the mutation call. In this update function, read the PublishedPosts query first, create a new Post list out of the returned data, and finally write the query back to add the post to the PublishedPosts results. Reference: https://www.apollographql.com/docs/react/caching/cache-interaction/#combining-reads-and-writes
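Sketched below, again assuming Apollo Client 3; PUBLISH_POST and PUBLISHED_POSTS simply wrap the PublishPost and PublishedPosts documents from the question:

import { gql, useMutation } from '@apollo/client';

const PUBLISH_POST = gql`
  mutation PublishPost($id: ID!) {
    publish(id: $id) {
      id
      published
    }
  }
`;

const PUBLISHED_POSTS = gql`
  query PublishedPosts {
    posts(published: true) {
      id
      title
      body
      published
    }
  }
`;

function usePublishPost() {
  return useMutation(PUBLISH_POST, {
    update(cache, { data: { publish } }) {
      // readQuery returns null if PublishedPosts is not cached yet.
      const existing = cache.readQuery({ query: PUBLISHED_POSTS });
      if (!existing) return;

      // The mutation only returns id and published; read the other
      // fields out of the normalized cache so the written data
      // matches the PublishedPosts selection set.
      const fullPost = cache.readFragment({
        id: cache.identify({ __typename: 'Post', id: publish.id }),
        fragment: gql`
          fragment PublishedPostFields on Post {
            id
            title
            body
            published
          }
        `,
      });

      cache.writeQuery({
        query: PUBLISHED_POSTS,
        data: { posts: [...existing.posts, fullPost] },
      });
    },
  });
}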
How about using refetchQueries:
In the case of the CreateDraftPost mutation, refetching only the Posts query should be sufficient (PostTitles will be updated accordingly), since both Posts and PostTitles rely on the same root query posts, and the fields in Posts also cover the fields in PostTitles.
In the case of the PublishPost mutation, I would prefer refetching the PublishedPosts query to avoid doing the whole update dance (since I'm lazy, and I think refetching 1 query will not cost me much).
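The lazy variant is short; useMutation's refetchQueries option accepts query names as strings (CREATE_DRAFT_POST and PUBLISH_POST as in the sketches above):

// After creating a draft, refetch Posts; PostTitles selects a
// subset of the same root field, so it updates from the same data.
const [createDraft] = useMutation(CREATE_DRAFT_POST, {
  refetchQueries: ['Posts'],
});

// After publishing, refetch only the filtered list.
const [publishPost] = useMutation(PUBLISH_POST, {
  refetchQueries: ['PublishedPosts'],
});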
It sounds like you've looked into and used the update argument that can be passed to the mutation function returned from useMutation. You're probably using proxy.readQuery and proxy.writeQuery to do the update (or letting this magic happen in the background). If not, see the mutations documentation linked above.
Another approach that is similar in concept but finer-grained is to use proxy.readFragment and proxy.writeFragment. You can specify a set of properties on a type as being part of a fragment and update that fragment whenever new data comes in. The nice part is that the fragment can be used within any number of queries, and if you update the fragment, those queries will update.
See the fragments documentation.
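A sketch of the idea, assuming Apollo Client 3 (where the update callback receives the cache rather than a proxy) and the Post type from the question: after PublishPost returns, writing just the published flag through a fragment updates every cached query that selects that field on that post.

import { gql } from '@apollo/client';

const PUBLISHED_FLAG = gql`
  fragment PublishedFlag on Post {
    published
  }
`;

// Called from a mutation's update callback with the cache and the
// Post returned by the mutation.
function markPublished(cache, publishedPost) {
  cache.writeFragment({
    id: cache.identify({ __typename: 'Post', id: publishedPost.id }),
    fragment: PUBLISHED_FLAG,
    data: { published: publishedPost.published },
  });
}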
This one is making me crazy: I have an EF model built upon a database that contains a table named Category with 6 rows in it.
I want to display this in a drop-down list in WPF, so I need to bind it to the Categories.Local observable collection.
The problem is that this observable collection never receives the content of the database table. My understanding is that the collection should get in sync with the database when performing a query or saving data with SaveChanges(), so I ran the following 2 tests:
Categories = _db.Categories.Local;
// test 1
Debug.WriteLine(_db.Categories.Count());
Debug.WriteLine(_db.Categories.Local.Count());
// test 2
_categories.Add(new Category() { CategoryName = "test" });
_db.SaveChanges();
Debug.WriteLine(_db.Categories.Count());
Debug.WriteLine(_db.Categories.Local.Count());
Debug.WriteLine(_categories.Count());
Test 1 shows 6 rows in the database, and 0 in local.
Test 2 shows 7 rows in the database, and 1 in local (both versions).
I also attempted to use _db.Categories.Load(), but as expected it doesn't work, because it is db first, not code first.
I also went through this page https://msdn.microsoft.com/en-us/library/jj574514(v=vs.113).aspx, created an object-based data source, and linked my combo box to it, without success.
Does anyone know what I am doing wrong?
Thank you in advance for your help.
The DbSet<T> class is IQueryable<T>, hence DbSet<T>.Count() maps to the Queryable.Count<T> extension method, which in turn is translated to a SQL query and returns the count of the records in the database table without loading anything into the db context's local cache.
DbSet<T>.Local, on the other hand, simply gives you access to the local cache. It contains the entities that you have added, as well as the ones loaded by queries that return T instances (or other entities referencing T via a navigation property). In order to fully load (populate) the local cache, you need to call Load:
_db.Categories.Load();
Load is a custom extension method defined in the QueryableExtensions class, so you need to include
using System.Data.Entity;
in order to get access to it (as well as to the typed Include, XyzAsync, and many other EF extension methods). The Load method is the equivalent of ToList, but without the overhead of creating an additional list.
Once you do that, the binding will work. Please note that Local will not reflect changes made to the database through different DbContext instances or by different applications/users.
This is a design question.
I'm trying to build a booking system in CakePHP 3.
I've never done something like this with cake before.
I thought the best way might be to -- as the post title suggests -- build up an entity over several forms/actions.
Something like choose location -> enter customer details -> enter special requirements -> review full details and pay
So each of those stages becomes an action within my booking controller. The view for each action submits its content to the next action in the chain, and I use patchEntity with the request data and send the result to the new action's view.
I've started to wonder if this is a good way to do it. One significant problem is that the data from each of the previous actions has to be stored in hidden fields so that it can be resubmitted along with the new data from the current action.
I want the data from previous actions to be visible in a read-only fashion, so I've used the entity that I pass to the view to fill an HTML table. That's nice and it works fine, but having to also store that same data in hidden fields is not a very nice way to do it.
I hope this is making sense!
Anyway, I thought I'd post on here for some design guidance, as I feel like there is probably a better way to do this. I have considered creating temporary records in the database and just passing the id, but I was hoping I wouldn't have to.
Any advice here would be very much appreciated.
Cheers.
I would just store the entity in the DB and then proceed with your other views, getting the data from the DB. Pseudo-code (Modelname standing in for your actual table class):
public function chooseLocation() {
    $ent = $this->Modelname->newEntity();
    if ($this->request->is('post')) {
        $ent = $this->Modelname->patchEntity($ent, $this->request->data);
        if ($this->Modelname->save($ent)) {
            return $this->redirect(['action' => 'enterCustomerDetails', $ent->id]);
        }
    }
}

public function enterCustomerDetails($id) {
    $ent = $this->Modelname->get($id);
    // patch, save, redirect again ...
}
I need a CouchDB view where I can get back all the documents that don't have an arbitrary field. This is easy to do if you know in advance what fields a document might not have. For example, this lets you send view/my_view/?key="foo" to easily retrieve docs without the "foo" field:
function (doc) {
  var fields = [ "foo", "bar", "etc" ];
  for (var idx in fields) {
    if (!doc.hasOwnProperty(fields[idx])) {
      emit(fields[idx], 1);
    }
  }
}
However, you're limited to asking about the three fields set in the view; something like view/my_view/?key="baz" won't get you anything, even if you have many docs missing that field. I need a view that will work without my specifying the possible missing fields in advance. Any thoughts?
This technique is called the Thai massage. Use it to efficiently find documents not in a view if (and only if) the view is keyed on the document id.
function (doc) {
  // _view/fields map, showing all fields of all docs.
  // In principle you could emit e.g. "foo.bar.baz"
  // for nested objects. Obviously I do not.
  for (var field in doc) {
    emit(field, doc._id);
  }
}
function (keys, vals, rereduce) {
  // _view/fields reduce; could also be the string "_count"
  return rereduce ? sum(vals) : vals.length;
}
To find documents not having that field:
GET /db/_all_docs and remember all the ids.
GET /db/_design/ex/_view/fields?reduce=false&key="some_field" and remember the ids in the row values.
Compare the ids from _all_docs with the ids from the view: the ids in _all_docs but not in the view are those missing that field.
It sounds bad to keep the ids in memory, but you don't have to! You can use a merge-sort strategy, iterating through both queries simultaneously. You start with the first id of the has list (from the view) and the first id of the full list (from _all_docs):
If full < has, it is missing the field; redo with the next full element.
If full = has, it has the field; redo with the next full element.
If full > has, redo with the next has element.
Depending on your language, that might be difficult, but it is pretty easy in JavaScript, for example, or in other event-driven programming frameworks; a sketch follows.
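A sketch of that comparison (assuming both id lists arrive sorted, which CouchDB guarantees for _all_docs and for view rows sharing a key):

// fullIds: doc ids from _all_docs; hasIds: doc ids from the view
// (the values emitted under key "some_field").
function idsMissingField(fullIds, hasIds) {
  var missing = [];
  var f = 0, h = 0;
  while (f < fullIds.length) {
    if (h >= hasIds.length || fullIds[f] < hasIds[h]) {
      missing.push(fullIds[f]); // in full but not in has
      f += 1;
    } else if (fullIds[f] === hasIds[h]) {
      f += 1; // doc has the field, skip it
    } else {
      h += 1; // advance the has side
    }
  }
  return missing;
}

// idsMissingField(["a", "b", "c"], ["b"]) => ["a", "c"]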
Without knowing the possible fields in advance, the answer is simple: you must create a new view to find the missing fields. The view will scan every document, one by one.
To avoid disturbing your existing views and design documents, you can use a brand new design document. That way, searching for the missing fields will not impact existing views you may be already using.