My use case involves two kinds, Customers and Orders.
From the docs I read that we can have descendants; the example uses Person as the kind. In my case I want a customer to have a bunch of orders underneath it. I want to try it out in the console before diving in, but I can't seem to set the customer as the ancestor key of an order. Any help?
This picture shows the Customer I have made. Note the id.
Here is the Order that I want as a descendant of the customer.
As you can see here, I tried to put the customer ID in as a key, but the ancestor path still points to the order itself.
Is this just a limitation of the console?
Also, if I try it in code, how can I refer to this specific datastore and namespace? I'm going to be doing this in Java.
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
This looks like it's just making a new datastore.
You can't set the ancestor of an existing entity. The ancestor path is part of the entity's key, and a key must be set at creation; you can't change it once the entity has been created.
There is only one datastore. That code just creates an instance of the client.
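To illustrate, here is a minimal Java sketch (the kind names, namespace, and numeric id are placeholders, not taken from your screenshots). The ancestor can only be supplied through the Entity constructor at creation time, and a namespace, if you use one, is set with NamespaceManager before any keys are built:

import com.google.appengine.api.NamespaceManager;
import com.google.appengine.api.datastore.*;

public class OrderCreator {
    public static void createOrder() {
        // Optional: switch to a specific namespace before building any keys.
        NamespaceManager.set("my-namespace"); // hypothetical namespace name

        // There is only one datastore; this just obtains a client handle.
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        // Key of the existing Customer (the numeric id is a placeholder).
        Key customerKey = KeyFactory.createKey("Customer", 5629499534213120L);

        // The ancestor is fixed here, at creation; it cannot be changed later.
        Entity order = new Entity("Order", customerKey);
        order.setProperty("status", "new"); // hypothetical property
        datastore.put(order);
    }
}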
I am currently exploring MongoDB.
I built a notes web app and for now the DB has 2 collections: notes and users.
The user can create, read and update his notes.
I want to create a page called /my-notes that will display all the notes that belong to the connected user.
My question is:
Should the note model have an ownerId field, or the opposite: should the user model have a noteIds field of type list?
Points I found relevant for the decision making:
noteIds approach:
There is no need to query the notes collection for documents holding the desired ownerId (say we have a lot of notes; then we need an index and a search across the whole notes collection). We just need to find the user by user ID and then get all the notes by their IDs.
In this case there are 2 calls to the DB.
The data is ordered by the order of insertion into the noteIds field in the document.
ownerId approach:
We do need to find the notes by their ownerId field across the notes collection, which might be more computationally intensive.
We can paginate / sort the data as we want - more control over the data.
Are there any more points you can think of?
As far as I can tell, this is a question of less computationally intensive DB calls vs. more control over the data.
What are the "best practices"?
Thanks,
A similar use case is explained in the documentation. If there is no limit on the number of notes a user can have, it might be better to store a userId reference field in the notes documents.
As you've figured out already, pagination would be easier in the second approach. Also, when updating notes, you can simply updateOne({ _id: "note_id", userId: 1 }) instead of checking the user's document to see if the note actually belongs to the user.
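For concreteness, here is a minimal sketch of both operations using pymongo (the database, collection, and field names are assumptions):

from pymongo import DESCENDING, MongoClient

client = MongoClient()    # assumes a local MongoDB instance
db = client["notes_app"]  # hypothetical database name
user_id, page, page_size = "u123", 0, 20  # placeholder values

# One page of the connected user's notes, newest first.
notes = (
    db.notes.find({"userId": user_id})
    .sort("createdAt", DESCENDING)  # assumes a createdAt field on notes
    .skip(page * page_size)
    .limit(page_size)
)

# Update a note only if it actually belongs to the user, in one call.
db.notes.update_one(
    {"_id": "note_id", "userId": user_id},
    {"$set": {"text": "updated text"}},
)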
In my datastore I have an entity Book that has a reference to Owner, which has a reference to ContactInfo, which has a zipcode property on it. I want to query for all books within a certain zipcode. How can I do this? I understand I can't write a query where I do:
q = db.Query(Book).filter('owner.contact_info.zipcode =', 12345)
This is exactly the sort of thing you cannot do with the App Engine datastore. It is not a relational database, and you cannot query it as one. One of the things this implies is that it does not support JOINs, and you cannot do queries across entity types.
Because of this, it is usually not a good idea to follow fully normalized form in creating your data models. Unless you have a very good reason for keeping them separate, ContactInfo should almost certainly be merged with Owner. You might also want to define a list-of-keys property on Owner that records books_owned: then you can do a simple query and a batch get to fetch all the books:
# Find owners in the zipcode, then batch-fetch their books by key.
owners = db.Query(Owner).filter('zipcode =', 12345)
book_ids = []
for owner in owners:
    book_ids.extend(owner.books_owned)
books = db.get(book_ids)
Edit: the field would look like this:
class Owner(db.Model):
    ...
    books_owned = db.ListProperty(db.Key)
If you update the schema, nothing happens to the existing entities: you will need to go through them (perhaps using the remote API) and update them to add the new data. Note, though, that you can just set the properties directly; there's no database migration to be done.
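A rough backfill sketch, assuming the Owner and Book models from the question and a one-off script run through the remote API:

# Hypothetical maintenance pass: populate the new list property
# on existing Owner entities from the existing Book -> Owner references.
for owner in Owner.all():
    books = Book.all().filter('owner =', owner.key()).fetch(1000)
    owner.books_owned = [book.key() for book in books]
    owner.put()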
If contact info is a separate model, you will first need to find all ContactInfo entities with zipcode == 12345, then find all of the Owner entities that reference those ContactInfo entities, then find all the Book entities that reference those Owner entities.
If you're still able to change your model definitions at all, it would probably be wise to denormalize at least ContactInfo in the Owner model, and possibly also the Owner inside each Book.
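A minimal sketch of what the denormalized models might look like (the property names are assumptions):

from google.appengine.ext import db

class Owner(db.Model):
    name = db.StringProperty()
    zipcode = db.IntegerProperty()         # denormalized from ContactInfo
    books_owned = db.ListProperty(db.Key)  # keys of owned Book entities

class Book(db.Model):
    title = db.StringProperty()
    owner = db.ReferenceProperty(Owner, collection_name='books')
    owner_zipcode = db.IntegerProperty()   # optional: Owner data denormalized into Book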
How can I get the latest entity of a model that was just put into NDB?
1: If I use the same parent key, how do I do it?
I see the documentation says:
Entities whose keys have the same root form an entity group or group.
If entities are in different groups, then changes to those entities
might sometimes seem to occur "out of order". If the entities are
unrelated in your application's semantics, that's fine. But if some
entities' changes should be consistent, your application should make
them part of the same group when creating them.
Does this mean that, with the same parent key, the order is the insertion order?
But how do I get the last one?
2: If I don't use the same parent key (the model is the same), how do I do it?
If you're OK with eventual consistency (i.e. you might not see the very latest one immediately) you can just add a DateTimeProperty with auto_now_add=True and then run a query sorting by that property to get the latest one. (This is also approximate since you might have several entities saved close together which are ordered differently than you expect.)
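A minimal sketch of that eventually-consistent approach (the Entry model and property name are placeholders):

from google.appengine.ext import ndb

class Entry(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True)

# Latest entry; may briefly lag behind the most recent put.
latest = Entry.query().order(-Entry.created).get()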
If you need it to be exactly correct, the only way I can see is to create an entity whose job it is to hold a reference to the latest entry, and update that entity in the same transaction as the entry you're creating. Something like:
class LatestHolder(ndb.Model):
    latest = ndb.KeyProperty(kind='Entry')

# code to update:
@ndb.transactional(xg=True)
def put_new_entry(entry):
    entry.put()  # put first so the entry has a complete key
    holder = LatestHolder.get_or_insert('fixed-key')
    holder.latest = entry.key
    holder.put()
Note that I've used a globally fixed key name here with no parent for the holder class. This is a bottleneck; you might prefer to make several LatestHolder entities with different parents if your "latest entry" only needs to be from a particular parent, in which case you just pass a parent key to get_or_insert.
Currently, a lot of my code makes extensive use of ancestors to put and fetch objects. However, I'm looking to change some stuff around.
I initially thought that ancestors helped make querying faster if you knew who the ancestor of the entity you're looking for was. But I think it turns out that ancestors are mostly useful for transaction support. I don't make use of transactions, so I'm wondering if ancestors are more of a burden on the system here than a help.
What I have is a User entity, and a lot of other entities such as say Comments, Tags, Friends. A User can create many Comments, Tags, and Friends, and so whenever a user does so, I set the ancestor for all these newly created objects as the User.
So when I create a Comment, I set the ancestor as the user:
comment = Comment(parent=aUser, key_name=commentId)
Now the only reason I'm doing this is strictly for querying purposes. I thought it would be faster when I wanted to get all comments by a certain user to just get all comments with a common ancestor rather than querying for all comments where authorEmail = userEmail.
So when I want to get all comments by a certain user, I do:
commentQuery = db.GqlQuery('SELECT * FROM Comment WHERE ANCESTOR IS :1', userKey)
So my question is, is this a good use of ancestors? Should each Comment instead have a ReferenceProperty that references the User object that created the comment, and filter by that?
(Also, my thinking was that using ancestors instead of an indexed ReferenceProperty would save on write costs. Am I mistaken here?)
You are right about the write cost: an ancestor is part of the key, which comes "free". Using a reference property will increase your write cost if the reference property is indexed.
Since you query on that reference property, it will need to be indexed.
Ancestors are not only important for transactions. In the HRD (the default datastore implementation), if you don't create each comment with the same ancestor, the queries will not be strongly consistent.
-- Adding Nick's comment ---
Every entity with the same parent will be in the same entity group, and writes to entity groups are serialized, so using ancestors here will slow things down iff you're writing multiple entities concurrently. Since all the entities in a group are 'owned' by the user that forms the root of the group in your instance, though, this shouldn't be a problem - and in fact, what you're doing is actually a recommended design pattern.
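For contrast, a minimal sketch of the two query styles being compared (the model and property names are placeholders):

from google.appengine.ext import db

class Comment(db.Model):
    author = db.ReferenceProperty()  # indexed, so each comment costs extra writes
    text = db.TextProperty()

user_key = db.Key.from_path('User', 'userEmail')  # placeholder key

# Ancestor query: strongly consistent, one entity group per user.
by_ancestor = db.GqlQuery('SELECT * FROM Comment WHERE ANCESTOR IS :1', user_key)

# Reference-property query: eventually consistent, no entity-group contention.
by_reference = Comment.all().filter('author =', user_key)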
I am working on a system, which will run on GAE, which will have several related entities and I am not sure of the best way to store the data. This post is a request for advice from others who may have similar experience....
The system will have users, with profile data and an image. Those users will be able to create "events" and add journal entries to them. For the purpose of the system, the "events" will likely have 1 or 2 journal entries in them, and anything over 10 would likely never happen. Other users will be able to add comments to users' entries as well, where popular ones may have hundreds or even thousands of comments.
When a random visitor uses the system, they should be able to see the latest events (latest being defined as those with the latest journal entries in them), search by tag, and perform a very basic text search. Upon selecting an event to view, it should be displayed with all journal entries and all user comments, with user images alongside comments. A user should also have a kind of self-admin page, to view/modify/delete their events and to view/modify/delete comments they have made on other events.
Doing all this on a normal RDBMS would just be queries with some big joins across several tables. On GAE it would obviously need to work differently. Here are my initial thoughts on the design of the entities:
Event entity - id, name, timestamp, list property of tags, view count, creator's username, creator's profile image id, number of journal entries it contains, number of total comments it contains, timestamp of last update to contained journal entries, list property of index words for search (built/updated from text of contained journal entries)
JournalEntry entity - timestamp, journal text, name of event, creator's username, creator's profile image id, list property of comments (containing commenter username and image id)
User entity - username, password hash, email, list property of subscribed events, timestamp of create date, image id, number of comments posted, number of events created, number of journal entries created, timestamp of last journal activity
UserComment entity - username, id of event commented on, title of event commented on
TagData entity - tag name, count of events with tag on them
So, I'd like to hear what people here think about the design and what changes should be made to help it scale well. Thanks!
Rather than store Event.id as a property, use the id automatically embedded in each entity's key, or set unique key names on entities as you create them.
You have lots of options for modeling the relationship between Event and JournalEntry: you could use a ReferenceProperty, you could parent JournalEntries to Events and retrieve them with ancestor queries, or you could store a list of JournalEntry key ids or names on Event and retrieve them in batch with a key query. Try some things out with realistically-distributed dummy data, and use appstats to see what works best.
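As a rough sketch, the three options look like this with the old db API (model and property names are placeholders):

from google.appengine.ext import db

class Event(db.Model):
    name = db.StringProperty()
    entry_keys = db.ListProperty(db.Key)  # option 3: child keys stored on the event

class JournalEntry(db.Model):
    text = db.TextProperty()
    event = db.ReferenceProperty(Event, collection_name='entries')  # option 1

event = Event.get_by_id(1)  # placeholder fetch

# Option 1: query the back-reference created by the ReferenceProperty.
entries = event.entries.fetch(10)

# Option 2: if entries are created with parent=event, use an ancestor query.
entries = JournalEntry.all().ancestor(event.key()).fetch(10)

# Option 3: batch-get the stored keys in one round trip.
entries = db.get(event.entry_keys)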
UserComment references an Event, while JournalEntry references a list of UserComments, which is a little confusing. Is there a relationship between UserComment and JournalEntry? or just between UserComment and Event?
Persisting so many counts is expensive. When I post a comment, you're going to write a new UserComment entity and also update my User entity and a JournalEntry entity and an Event entity. The number of UserComments you expect per Event makes it unwise to include everything in the same entity group, which means you can't do these writes transactionally, so you'll do them serially, and the entities might be stored across different network nodes, making the whole operation slow; and you'll also be open to consistency problems. Can you do without some of these counts and consider storing others in memcache?
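As one sketch of the memcache idea, comment counts could be incremented in cache and only recomputed from the datastore on a miss (the property names are assumptions):

from google.appengine.api import memcache
from google.appengine.ext import db

class UserComment(db.Model):
    event_id = db.StringProperty()  # placeholder property name

def increment_comment_count(event_id):
    # Atomic increment; initial_value seeds the counter on first use.
    memcache.incr('comment-count:%s' % event_id, initial_value=0)

def get_comment_count(event_id):
    count = memcache.get('comment-count:%s' % event_id)
    if count is None:
        # Cache miss: fall back to a datastore count and re-seed the cache.
        count = UserComment.all().filter('event_id =', event_id).count()
        memcache.set('comment-count:%s' % event_id, count)
    return count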
When you fetch an Event from the datastore, you don't actually care about its list of search index words, and retrieving and deserializing them from protocol buffers has a cost. You can get around this by splitting each Event's search index words into a separate child EventIndex entity. Then you can query EventIndex on your search term, fetch just the EventIndex keys for EventIndexes that match your search, derive the corresponding Events' keys with key.parent(), and fetch the Events by key, never paying for the retrieval or deserialization of your search index word lists. Brett Slatkin explains this strategy here at 14:35.
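A sketch of that relation-index pattern, assuming each EventIndex is created with parent=event:

from google.appengine.ext import db

class EventIndex(db.Model):
    # Child entity of an Event; holds only the search words.
    index_words = db.StringListProperty()

term = 'concert'  # hypothetical search term

# Keys-only query: the word lists are never retrieved or deserialized.
index_keys = EventIndex.all(keys_only=True).filter('index_words =', term).fetch(20)
events = db.get([key.parent() for key in index_keys])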
Updating Event.viewCount will fail if you have a lot of views for any Event in rapid succession, so you should try out counter sharding.
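A minimal sharded-counter sketch (the shard count and model name are arbitrary):

import random
from google.appengine.ext import db

NUM_SHARDS = 20  # arbitrary; more shards allow more concurrent writes

class ViewCountShard(db.Model):
    event_id = db.StringProperty(required=True)
    count = db.IntegerProperty(default=0)

def increment_views(event_id):
    # Pick a random shard so concurrent views hit different entity groups.
    index = random.randint(0, NUM_SHARDS - 1)
    def txn():
        key_name = '%s-%d' % (event_id, index)
        shard = ViewCountShard.get_by_key_name(key_name)
        if shard is None:
            shard = ViewCountShard(key_name=key_name, event_id=event_id)
        shard.count += 1
        shard.put()
    db.run_in_transaction(txn)

def view_count(event_id):
    # Sums across shards; cache the result in memcache if it gets hot.
    return sum(s.count for s in ViewCountShard.all().filter('event_id =', event_id))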
Good luck, and tell us what you learn by trying stuff out.