I would like to accomplish some sort of hybrid solution between ndb.get_multi() and Query().
I have a set of keys that I can use with:
entities = ndb.get_multi(keys)
I would like to query, filter, and order these entities using Query(), or at least something more efficient than doing it all manually in Python code.
How do people go about doing this? I want something like this:
query = Entity.gql('WHERE __key__ in :1 AND prop1 = :2 ORDER BY prop2', keys, 'hello')
entities = query.fetch()
Edit:
The above code works just fine, but it seems like fetch() never uses values from cache, whereas ndb.get_multi() does. Am I correct about this? If not, is the gql+fetch method much worse than get_multi+manual processing?
There is no way to run a query over entities you have already fetched, unless you write it yourself, but all of this can easily be done with built-in Python filtering and sorting. Note that it is more efficient to run a query if you have a big dataset, rather than get_multi hundreds of keys just to keep 5 entities.
entities = ndb.get_multi(keys)
# filtering
entities = [e for e in entities if e.prop1 == 'bla' and e.prop2 > 3]
# sorting by multiple properties
entities = sorted(entities, key=lambda x: (x.prop1, x.prop2))
UPDATE: And yes, the cache is only used when you fetch an entity by key; it is not used when you query for entities.
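If you want query semantics but still want to benefit from the cache, one possible workaround (just a sketch, reusing the Entity/prop1/prop2 names from the question) is to run a keys-only query and then fetch the entities with ndb.get_multi(), which can be served from the cache:
from google.appengine.ext import ndb

# Hedged sketch: keys-only query, keep only the keys you already have,
# then get_multi() so the NDB cache can be used. Order from the query is preserved.
wanted = set(keys)  # the keys you already have
query_keys = Entity.query(Entity.prop1 == 'hello').order(Entity.prop2).fetch(keys_only=True)
entities = ndb.get_multi([k for k in query_keys if k in wanted])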
I am currently working on a telecom analytics project and am a newbie in query optimisation. Showing the result in the browser takes a full minute even though only 45,000 records are accessed. Could you please suggest ways to reduce the time it takes to show the results?
I wrote the following query to find the call duration for people in a given age group:
sigma = 0
popn = len(Demo.objects.filter(age_group=age))
card_list = [Demo.objects.filter(age_group=age)[i].card_no
             for i in range(popn)]
for card in card_list:
    dic = Fact_table.objects.filter(card_no=card).aggregate(Sum('duration'))
    sigma += dic['duration__sum']
avgDur = sigma / popn
The above code is inside a for loop that iterates over the age groups.
Model is as follows:
class Demo(models.Model):
    card_no = models.CharField(max_length=20, primary_key=True)
    gender = models.IntegerField()
    age = models.IntegerField()
    age_group = models.IntegerField()

class Fact_table(models.Model):
    pri_key = models.BigIntegerField(primary_key=True)
    card_no = models.CharField(max_length=20)
    duration = models.IntegerField()
    time_8bit = models.CharField(max_length=8)
    time_of_day = models.IntegerField()
    isBusinessHr = models.IntegerField()
    Day_of_week = models.IntegerField()
    Day = models.IntegerField()
Thanks
Try this:
sigma = 0
demo_by_age = Demo.objects.filter(age_group=age)
popn = demo_by_age.count()  # One
card_list = demo_by_age.values_list('card_no', flat=True)  # Two
dic = Fact_table.objects.filter(card_no__in=card_list).aggregate(Sum('duration'))  # Three
sigma = dic['duration__sum']
avgDur = sigma / popn
A statement like card_list=[Demo.objects.filter(age_group=age)[i].card_no for i in range(popn)] will generate popn separate queries and database hits. The query in the for loop will also hit the database popn times. As a general rule, you should try to minimize the number of queries you run, and you should only select the records you need.
With a few adjustments to your code this can be done in just one query.
There's generally no need to manually specify a primary_key, and in all but a few very specific cases it's better not to define one at all. Django automatically adds an indexed, auto-incrementing primary key field. If you need the card_no field to be unique, and you need to look up rows by it, use this:
class Demo(models.Model):
    card_no = models.SlugField(max_length=20, unique=True)
    ...
SlugField automatically adds a database index to the column, essentially making selections by this field as fast as when it is a primary key. This still allows other ways of accessing the table, e.g. foreign keys (as I'll explain in my next point), to use the (slightly) faster integer primary key that Django adds, and it eases the use of the model in Django.
If you need to relate an object to an object in another table, use models.ForeignKey. Django gives you a whole set of new functionality that not only makes it easier to use the models, it also makes a lot of queries faster by using JOIN clauses in the SQL query. So for your example:
class Fact_table(models.Model):
    card = models.ForeignKey(Demo, related_name='facts')
    ...
The related_name field allows you to access all Fact_table objects related to a Demo instance by using instance.facts in Django. (See https://docs.djangoproject.com/en/dev/ref/models/fields/#module-django.db.models.fields.related)
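As a small illustration (the card number here is made up, not from the question), the reverse relation can then be used like this:
from django.db.models import Sum

demo = Demo.objects.get(card_no='1234567890')   # placeholder card number
calls = demo.facts.all()                        # every Fact_table row for this Demo
total = demo.facts.aggregate(Sum('duration'))   # or aggregate over them directly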
With these two changes, your query (including the loop over the different age_groups) can be changed into a blazing-fast one-hit query giving you the average duration of calls made by each age_group:
from django.db.models import Avg

age_groups = Demo.objects.values('age_group').annotate(duration_avg=Avg('facts__duration'))
for group in age_groups:
    print "Age group: %s - Average duration: %s" % (group['age_group'], group['duration_avg'])
.values('age_group') selects just the age_group field from the Demo's database table. .annotate(duration_avg=Avg('facts__duration')) takes every unique result from values (thus each unique age_group), and for each unique result will fetch all Fact_table objects related to any Demo object within that age_group, and calculate the average of all the duration fields - all in a single query.
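If you want to convince yourself that this really is a single query, you can inspect the SQL Django generates (a quick check, not part of the original answer):
print str(age_groups.query)  # one SELECT ... GROUP BY "age_group" statement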
I have a list of entities I'm loading into my front-end. If I don't have these entities in my NDB yet, I load them from another data source. If I do have them in my NDB, I obviously load them from there.
Instead of querying for every key separately to test whether it exists, I'd like to query for the whole list (for efficiency reasons) and find out what IDs exist in the NDB and what don't.
It could return a list of booleans, but any other practical solution is welcome.
Thanks already for your help!
How about doing a ndb.get_multi() with your list, and then comparing the results with your original list to find what you need to retrieve from the other data source? Something like this perhaps...
list_of_ids = [1,2,3 ... ]
# You have to use the keys to query using get_multi() (this is assuming that
# your list of ids are also the key ids in NDB)
keys_list = [ndb.Key('DB_Kind', x) for x in list_of_ids]
results = ndb.get_multi(keys_list)
results = [x for x in results if x is not None] # Get rid of any Nones
result_keys = [x.key.id() for x in results]
diff = list(set(list_of_ids) - set(result_keys)) # Get the difference in the lists
# Diff should now have a list of ids that weren't in NDB, and results should have
# a list of the entities that were in NDB.
I can't vouch for the performance of this, but it should be more efficient than querying for each entity one at a time. In my experience, using ndb.get_multi() is a huge performance booster, since it cuts down on the number of RPCs. You could likely tweak the code that I posted above, but perhaps it will at least point you in the right direction.
I want to get an entity key knowing the entity ID and an ancestor.
The ID is unique within the entity group defined by the ancestor.
It seems to me that this is not possible using the ndb interface. As I understand the datastore, this may be because the operation would require a full index scan.
The workaround I used is to create a computed property in the model which contains the id part of the key. I'm now able to do an ancestor query and get the key:
class SomeModel(ndb.Model):
    ID = ndb.ComputedProperty(lambda self: self.key.id())

    @classmethod
    def id_to_key(cls, identifier, ancestor):
        return cls.query(cls.ID == identifier,
                         ancestor=ancestor.key).get(keys_only=True)
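Used like this (an illustrative call; the identifier value and ancestor entity are placeholders):
entity_key = SomeModel.id_to_key(42, some_ancestor_entity)
entity = entity_key.get() if entity_key else None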
It seems to work, but are there any better solutions to this problem?
Update
It seems that for the datastore the natural solution is to use full paths instead of identifiers. Initially I thought it would be too burdensome. After reading dragonx's answer I redesigned my application. To my surprise, everything looks much simpler now. Additional benefits are that my entities will use less space and I won't need additional indexes.
I ran into this problem too. I think you do have the solution.
The better solution would be to stop using IDs to reference entities, and store either the actual key or a full path.
Internally, I use keys instead of IDs.
On my REST API, I used to do http://url/kind/id (where id looked like "123") to fetch an entity. I modified that to provide the complete ancestor path to the entity: http://url/kind/ancestor-ancestor-id (789-456-123). I'd then parse that string, generate a key, and then get by key.
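A minimal sketch of that parsing step (the kind names here are placeholders I've assumed for illustration; the original answer doesn't specify them):
from google.appengine.ext import ndb

def key_from_path_string(path_str):
    # '789-456-123' -> ndb.Key('Grandparent', 789, 'Parent', 456, 'Kind', 123)
    grandparent_id, parent_id, entity_id = [int(p) for p in path_str.split('-')]
    return ndb.Key('Grandparent', grandparent_id,
                   'Parent', parent_id,
                   'Kind', entity_id)

entity = key_from_path_string('789-456-123').get()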
Since you have full information about your ancestor and you know your id, you could directly create your key and get the entity, as follows:
my_key = ndb.Key(Ancestor, ancestor.key.id(), SomeModel, id)
entity = my_key.get()
This way you avoid making a query that costs more than a get operation both in terms of money and speed.
Hope this helps.
I want to make a little addition to dragonx's answer.
In my application, on the front end, I use the string representation of keys:
str(instance.key())
When I need to make changes to an instance, even if it is a descendant, I use only the string representation of its key. For example, key_str is the argument from the request to delete an instance:
instance = Kind.get(key_str)
instance.delete()
My solution is to use urlsafe to get the item without worrying about the parent id:
pk = ndb.Key(Product, 1234)
usafe = LocationItem.get_by_id(5678, parent=pk).key.urlsafe()
# now can get by urlsafe
item = ndb.Key(urlsafe=usafe).get()
print item
The documentation for the IN query operation states that those queries are implemented as a big OR'ed equality query:
qry = Article.query(Article.tags.IN(['python', 'ruby', 'php']))
is equivalent to:
qry = Article.query(ndb.OR(Article.tags == 'python',
Article.tags == 'ruby',
Article.tags == 'php'))
I am currently modelling some entities for a GAE project and plan on using these membership queries with a lot of possible values:
qry = Player.query(Player.facebook_id.IN(list_of_facebook_ids))
where list_of_facebook_ids could have thousands of items.
Will this type of query perform well with thousands of possible values in the list? If not, what would be the recommended approach for modelling this?
This won't work with thousands of values (in fact, I bet it starts degrading with more than 10 values). The only alternative I can think of is some form of precomputation. You'll have to change your schema.
One way you can do it is to create a new model called FacebookPlayer, which acts as an index. It would be keyed by facebook_id. You would update it whenever you add a new player. It looks something like this:
class FacebookPlayer(ndb.Model):
    player = ndb.KeyProperty(kind='Player', required=True)
Now you can avoid queries altogether. You can do this:
# Build keys from facebook ids.
facebook_id_keys = []
for facebook_id in list_of_facebook_ids:
    facebook_id_keys.append(ndb.Key('FacebookPlayer', facebook_id))

keysOfUsersMatchedByFacebookId = []
for facebook_player in ndb.get_multi(facebook_id_keys):
    if facebook_player:
        keysOfUsersMatchedByFacebookId.append(facebook_player.player)

usersMatchedByFacebookId = ndb.get_multi(keysOfUsersMatchedByFacebookId)
If list_of_facebook_ids is thousands of items, you should do this in batches.
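A hedged sketch of what that batching could look like (the batch size is an arbitrary illustrative value, not from the original answer):
BATCH_SIZE = 500  # illustrative; tune to your RPC limits

keysOfUsersMatchedByFacebookId = []
for start in range(0, len(facebook_id_keys), BATCH_SIZE):
    batch = facebook_id_keys[start:start + BATCH_SIZE]
    for facebook_player in ndb.get_multi(batch):
        if facebook_player:
            keysOfUsersMatchedByFacebookId.append(facebook_player.player)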
I know you can pass a key or a range to return records in CouchDB, but I want to do something like this: find the records whose ids match a specific set of values.
So for example, in regular SQL, let's say I wanted to return records with ids that are 5, 7, 29, and 102. I would do something like this:
SELECT * FROM sometable WHERE id = 5 OR id = 7 OR id = 29 OR id = 102
Is it possible to do this in CouchDB, where I toss all the values I want to find into the keys array, and CouchDB returns all of the records matching those keys?
You can do a POST as documented on the CouchDB wiki. You pass the list of keys in the body of the request.
{"keys": ["key1", "key2", ...]}
The downside is that a POST request is not cached by the browser.
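For instance, a rough sketch of the POST using the Python requests library (the database URL and key values are placeholders, not from the original answer):
import json
import requests

url = "http://localhost:5984/mydb/_all_docs?include_docs=true"  # placeholder DB
payload = {"keys": ["key1", "key2"]}
resp = requests.post(url, data=json.dumps(payload),
                     headers={"Content-Type": "application/json"})
rows = resp.json()["rows"]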
Alternatively, you can obtain the same response using a GET with the keys parameter. For example, you can query the view _all_docs with:
/DB/_all_docs?keys=["ID1","ID2"]&include_docs=true
which, properly URL encoded, becomes:
/DB/_all_docs?keys=%5B%22ID1%22,%22ID2%22%5D&include_docs=true
this should give better cacheability, but keep in mind that _all_docs changes at each document update. Sometimes you can work around this by defining your own view containing only the needed documents.
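If you want to build that encoded GET URL programmatically, something like this works with the Python standard library (the database name and ids are placeholders):
import json
import urllib

keys = ["ID1", "ID2"]
query = urllib.urlencode([("keys", json.dumps(keys, separators=(',', ':'))),
                          ("include_docs", "true")])
url = "/DB/_all_docs?" + query
# equivalent to the encoded URL shown above (the comma may also be percent-encoded)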
With a straight view function, this will not be possible. However, you can use a _list function to accomplish the same result.