How To Use Entity Groups And Ancestors with DjangoForms - google-app-engine

I would like to rewrite the example from the GAE djangoforms article so that it shows the most up-to-date data after submitting a form (e.g. when updating or adding a new entry) on Google App Engine using the High Replication Datastore.
The main recurring query in this article is:
query = db.GqlQuery("SELECT * FROM Item ORDER BY name")
which we will translate to:
query = Item.all().order('name')  # datastore request
After submitting the form I would like this query to return the latest data from the High Replication Datastore (only on those occasions; I assume I can redirect to a specific URL after submission which uses the query for the latest data, and in all other cases I would not do this).
Validating the form and storing the results happens like this:
data = ItemForm(data=self.request.POST)
if data.is_valid():
    # Save the data, and redirect to the view page
    entity = data.save(commit=False)
    entity.added_by = users.get_current_user()
    entity.put()  # datastore request
And getting the latest entry from the datastore to populate a form (for editing) happens like this:
id = int(self.request.get('id'))
item = Item.get(db.Key.from_path('Item', id))  # datastore request
data = ItemForm(data=self.request.POST, instance=item)
So how do I add entity groups/ancestor keys to these datastore queries so that they reflect the latest data after form submission? Please note, I don't want all queries to return the latest data, only when populating a form (for editing) and after submitting a form.
Can anyone help me with practical code examples?

If it is in the same block, you have a reference to the current instance.
Then once you put() it, you can get its id by:
if data.is_valid():
    entity = data.save(commit=False)
    entity.added_by = users.get_current_user()
    entity.put()
    id = entity.key().id()  # this gives you the id of the inserted entity
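As for the entity group / ancestor part of the question: one way to get strongly consistent reads on the High Replication Datastore is to give every Item a common ancestor and use an ancestor query right after the write. A minimal sketch, assuming a per-user root key (the 'ItemRoot' kind name and the example field value are made up, and wiring the parent into djangoforms' save(commit=False) may require constructing the Item instance yourself):
from google.appengine.api import users
from google.appengine.ext import db

# Illustrative ancestor: one root key per user (the kind name is an assumption).
user = users.get_current_user()
parent_key = db.Key.from_path('ItemRoot', user.user_id())

# Write: create the Item inside that entity group.
item = Item(parent=parent_key, name='example')
item.added_by = user
item.put()

# Read: an ancestor query is strongly consistent on the HRD, so it sees
# the entity that was just written.
query = Item.all().ancestor(parent_key).order('name')
latest_items = query.fetch(20)
Non-ancestor queries (like the plain Item.all().order('name')) stay eventually consistent, which matches the requirement that only the post-submission views need the latest data.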

Related

Wagtail - Search Page Owner Full Name

I am setting up a blog style documentation site. I was using a user input field for author when a child page was created. I found out that Wagtail houses owner in the page model. In the interest of not duplicating data, I removed my author field so I can use the default wagtail field. However, I have set up an LDAP module for authentication so the owner is housed as an Employee ID and not a user name. This Employee ID does map to a full name though and I am able to access that on a template via the owner.get_full_name.
So the question is: how do I set up the default search to check the owner's full name when performing searches? How do I get this into the search index? I am still a bit new to Wagtail, so this may be a case of creating an author field with a foreign key mapping back to the user table, or should I be modifying the search view to include a pass through the user table?
from django.core.paginator import EmptyPage, PageNotAnInteger, Paginator
from django.template.response import TemplateResponse
from wagtail.core.models import Page
from wagtail.search.models import Query


def search(request):
    search_query = request.GET.get('query', None)
    page = request.GET.get('page', 1)

    # Search
    if search_query:
        search_results = Page.objects.live().search(search_query)
        query = Query.get(search_query)
        # Record hit
        query.add_hit()
    else:
        search_results = Page.objects.none()

    # Pagination
    paginator = Paginator(search_results, 10)
    try:
        search_results = paginator.page(page)
    except PageNotAnInteger:
        search_results = paginator.page(1)
    except EmptyPage:
        search_results = paginator.page(paginator.num_pages)

    return TemplateResponse(request, 'search/search.html', {
        'search_query': search_query,
        'search_results': search_results,
    })
If you check the Wagtail search documentation, it describes the process for indexing callables and extra attributes:
https://docs.wagtail.io/en/stable/topics/search/indexing.html#indexing-callables-and-other-attributes
So what I would do is:
Create a get_owner_full_name() method in your page class
Add an index.SearchField('get_owner_full_name') to the search_fields
One note though: this will only work if you are using either the PostgreSQL backend or the Elasticsearch backend; the default database backend does not support indexing extra fields.
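Putting those two steps together, a minimal sketch (assuming Wagtail 2.x; BlogPage is an illustrative page class):
from wagtail.core.models import Page
from wagtail.search import index


class BlogPage(Page):
    search_fields = Page.search_fields + [
        # Index the owner's full name so site search matches it
        index.SearchField('get_owner_full_name'),
    ]

    def get_owner_full_name(self):
        # owner can be None, e.g. for pages created programmatically
        return self.owner.get_full_name() if self.owner else ''
Remember to rebuild the search index (e.g. ./manage.py update_index) after changing search_fields.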

Modify and delete document in Cloudant IBM Database

Is it possible to modify or delete documents in a Cloudant database by query?
I assume the questioner is looking for equivalent functionality to SQL's:
UPDATE db SET x = 10 WHERE y > 100;
If that is the question, then the answer is that Cloudant does not have such functionality; updates are only atomic at the level of a single document.
The equivalent of the UPDATE statement could be achieved by combining a call to the Cloudant Query API and making updates using the bulk API.
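As a rough illustration of that combination with the python-cloudant library (db is assumed to be a CloudantDatabase handle, and the field name x is illustrative):
def update_where(db, selector, new_value):
    """Rough equivalent of UPDATE ... SET x = new_value WHERE <selector>."""
    # 1. Cloudant Query (_find) returns the matching documents, including
    #    the _id and _rev fields that the bulk API needs.
    docs = db.get_query_result(selector).all()
    # 2. Modify them locally.
    for doc in docs:
        doc['x'] = new_value
    # 3. Write everything back in a single bulk (_bulk_docs) request.
    return db.bulk_docs(docs)
Note that this is not atomic: other writers can modify the documents between the query and the bulk write, in which case the bulk response will report conflicts for those documents.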
Another option is to use the couchtato iterator tool which allows bulk changes to be made to Cloudant databases.
To delete a document you have to know the document _id. If you do not know the _id, you have to look for the document in your database, retrieve it, and then get the _id.
To retrieve a document you can use a selector. For example, let's say that you have a document that contains a "name" field, your selector would be something like
selector = {"name": {"$eq": "name that you want"}}
So in Python you would have something like:
def retrieve_db_data(db_name, selector):
    client = connect_to_db()
    db = client[db_name]
    results = db.get_query_result(selector)
    data = results.all()
    return data
In "data" you will have the _id
then you can use something like this to delete the data
def delete_document(db_name, doc_id):
    client = connect_to_db()
    client.connect()
    db = client[db_name]
    document = db[doc_id]
    document.delete()
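Putting the two helpers together (the database name and selector are just examples):
docs = retrieve_db_data("mydb", {"name": {"$eq": "name that you want"}})
for doc in docs:
    delete_document("mydb", doc["_id"])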

ndb Models are not saved in memcache when using MapReduce

I've created two MapReduce Pipelines for uploading CSV files to create Categories and Products in bulk. Each product gets tied to a Category through a KeyProperty. The Category and Product models are built on ndb.Model, so based on the documentation, I would think they'd be automatically cached in Memcache when retrieved from the Datastore.
I've run these scripts on the server to upload 30 categories and, afterward, 3000 products. All the data appears in the Datastore as expected.
However, it doesn't seem like the Product upload is using Memcache to get the Categories. When I check the Memcache viewer in the portal, it says something along the lines of the hit count being around 180 and the miss count around 60. If I am uploading 3000 products and retrieving the category each time, shouldn't I have around 3000 hits + misses from fetching the category (i.e. Category.get_by_id(category_id))? And likely 3000 more misses from attempting to retrieve the existing product before creating a new one (the algorithm handles both entity creation and updates).
Here's the relevant product mapping function, which takes in a line from the CSV file in order to create or update the product:
def product_bulk_import_map(data):
    """Product Bulk Import map function."""
    result = {"status": "CREATED"}
    product_data = data

    try:
        # parse input parameter tuple
        byteoffset, line_data = data

        # parse base product data
        product_data = [x for x in csv.reader([line_data])][0]
        (p_id, c_id, p_type, p_description) = product_data

        # process category
        category = Category.get_by_id(c_id)
        if category is None:
            raise Exception(product_import_error_messages["category"] % c_id)

        # store in datastore
        product = Product.get_by_id(p_id)
        if product is not None:
            result["status"] = "UPDATED"
            product.category = category.key
            product.product_type = p_type
            product.description = p_description
        else:
            product = Product(
                id=p_id,
                category=category.key,
                product_type=p_type,
                description=p_description
            )
        product.put()
        result["entity"] = product.to_dict()

    except Exception as e:
        # catch any exceptions, and note failure in output
        result["status"] = "FAILED"
        result["entity"] = str(e)

    # return results
    yield (str(product_data), result)
MapReduce intentionally disables memcache for NDB.
See mapreduce/util.py ln 373, _set_ndb_cache_policy() (as of 2015-05-01):
def _set_ndb_cache_policy():
    """Tell NDB to never cache anything in memcache or in-process.

    This ensures that entities fetched from Datastore input_readers via NDB
    will not bloat up the request memory size and Datastore Puts will avoid
    doing calls to memcache. Without this you get soft memory limit exits,
    which hurts overall throughput.
    """
    ndb_ctx = ndb.get_context()
    ndb_ctx.set_cache_policy(lambda key: False)
    ndb_ctx.set_memcache_policy(lambda key: False)
You can force get_by_id() and put() to use memcache, e.g.:
product = Product.get_by_id(p_id, use_memcache=True)
...
product.put(use_memcache=True)
Alternatively, you can modify the NDB context if you are batching puts together with mapreduce.operation. However I don't know enough to say whether this has other undesired effects:
ndb_ctx = ndb.get_context()
ndb_ctx.set_memcache_policy(lambda key: True)
...
yield operation.db.Put(product)
As for the docstring about "soft memory limit exits", I don't understand why that would occur if only memcache were enabled (i.e. no in-context cache).
It actually seems like you want memcache to be enabled for puts, otherwise your app ends up reading stale data from NDB's memcache after your mapper has modified the data underneath.
As Slawek Rewaj already mentioned, this is caused by the in-context cache. When retrieving an entity, NDB tries the in-context cache first, then memcache, and finally it retrieves the entity from the datastore if it wasn't found in either the in-context cache or memcache. The in-context cache is just a Python dictionary, and its lifetime and visibility are limited to the current request, but MapReduce makes multiple calls to product_bulk_import_map() within a single request.
You can find more information about the in-context cache here: https://cloud.google.com/appengine/docs/python/ndb/cache#incontext

GAE Datastore ID

I've created two different Entities, one a User and one a Message they can create. I assign each user an ID and then want to assign this ID to each message which that user creates. How can I go about this? Do I have to do it in a query?
Thanks
Assuming that you are using Python NDB, you can have something like the following:
class User(ndb.Model):
    # put your fields here
    pass


class Message(ndb.Model):
    owner = ndb.KeyProperty()
    # other fields
Create and save a User:
user = User(field1=value1, ....)
user.put()
Create and save a Message:
message = Message(owner=user.key, ...)
message.put()
Query a message based on user:
messages = Message.query().filter(Message.owner == user.key).fetch()  # returns a list of messages that have this owner
For more information about NDB, take a look at Python NDB API.
Also, you should take a look at Python Datastore in order to get a better understanding of data modeling in App Engine.

GQL Query Not Returning Results on StringProperty Query Test for Equality

class MyEntity(db.Model):
    timestamp = db.DateTimeProperty()
    title = db.StringProperty()
    number = db.FloatProperty()

db.GqlQuery("SELECT * FROM MyEntity WHERE title = 'mystring' AND timestamp >= date('2012-01-01') AND timestamp <= date('2012-12-31') ORDER BY timestamp DESC").fetch(1000)
This should fetch ~600 entities on App Engine. On my dev server it behaves as expected and builds the index.yaml; I upload it and test on the server, but on App Engine it does not return anything.
Index:
- kind: MyEntity
  properties:
  - name: title
  - name: timestamp
    direction: desc
I try splitting the query down on datastore viewer to see where the issue is and the timestamp constraints work as expected. The query returns nothing on WHERE title = 'mystring' when it should be returning a bunch of entities.
I vaguely remember fussy filtering where you had to call .filter("prop =",propValue) with the space between property and operator, but this is a GqlQuery so it's not that (and I tried that format with the GQL too).
Anyone know what my issue is?
One thing I can think of:
I added the list of MyEntity entities into the app via BulkLoader.py prior to the new index being created on my devserver & uploaded. Would that make a difference?
The last line you wrote is probably the problem.
Your entities in the actual real datastore are missing the index required for the query.
As far as I know, when you add a new index, App Engine is supposed to rebuild your indexes for you. This may take some time. You can check the admin console to see the state of your indexes and whether they are still building.
Turns out there's a slight bug in the bulkloader supplied with the App Engine SDK - basically, the autogenerated config transforms strings as db.Text, which is no good if you want these fields indexed. The correct import_transform directive should be:
transform.none_if_empty(str)
This will instruct App Engine to index the uploaded field as a db.StringProperty().
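For reference, the relevant piece of the bulkloader.yaml would then look roughly like this (only the title property is shown; the rest of the autogenerated config is omitted):
transformers:
- kind: MyEntity
  connector: csv
  property_map:
    - property: title
      external_name: title
      # str instead of the autogenerated db.Text, so the value is stored
      # as an indexed db.StringProperty()
      import_transform: transform.none_if_empty(str)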
