Google Cloud Endpoints method with multiple response messages - google-app-engine

I have a Google Cloud Endpoints method that needs to be able to return either a MaleResponseMessage or a FemaleResponseMessage. Is there a way to specify that, such as with
@endpoints.method(message_types.VoidMessage, [MaleResponseMessage, FemaleResponseMessage])
There is of course the option of declaring a super message class, say PersonResponseMessage, to wrap either MaleResponseMessage or FemaleResponseMessage. But is there something similar to the snippet above?
EDIT:
Trying to implement my own proposal, I got stuck. The only thing the two message types have in common is the request: the exact same request fields (plus an additional boolean female=true/false) in PersonRequest. MaleResponseMessage and FemaleResponseMessage have no field in common. So I am using one endpoint method, as @bossylobster shows, where I check
if request.female:  # request.female == True
    return get_female(etc, etc)
else:  # request.female == False, i.e. male
    return get_male(etc, etc)
For the response, I need something like
class PersonResponse(messages.Message):
    if ???:
        item = messages.MessageField(MaleResponseMessage, 1)
    else:
        item = messages.MessageField(FemaleResponseMessage, 1)
I am not sure what to put in place of ???. First I thought about isinstance or type, but how would I do that? Would the below work?
class PersonResponse(messages.Message):
    if type(Message()) == MaleResponseMessage:
        item = messages.MessageField(MaleResponseMessage, 1)
    else:
        item = messages.MessageField(FemaleResponseMessage, 1)

Unfortunately no. You can have only one response and one request schema; this is because they are registered with Google's API infrastructure and having a strict schema is what provides the speed and efficiency of requests.
Your best bet would be to combine the fields needed for each male and female into a single model class and do your own validation.
A possible solution could look like
from protorpc import messages

class Gender(messages.Enum):
    MALE = 0
    FEMALE = 1

class GenderRequest(messages.Message):
    gender = messages.EnumField(Gender, 1, required=True)

class PersonResponse(messages.Message):
    gender = messages.EnumField(Gender, 1)
    # shared fields
    # female-specific fields
    # male-specific fields
and then in your actual method
@endpoints.method(GenderRequest, PersonResponse, ...)
def my_method(self, request):
    if request.gender == Gender.MALE:
        return male_response(request)
    elif request.gender == Gender.FEMALE:
        return female_response(request)
    else:
        # This should never occur since gender is required
        raise endpoints.BadRequestException('Gender not set.')
where male_response and female_response are methods which create instances of PersonResponse corresponding to male and female.
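For illustration, a minimal sketch of those helpers against the PersonResponse defined above (your real versions would also populate the gender-specific fields):

def male_response(request):
    # Fill in the shared and male-specific fields here.
    return PersonResponse(gender=Gender.MALE)

def female_response(request):
    # Fill in the shared and female-specific fields here.
    return PersonResponse(gender=Gender.FEMALE)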

You could use two different endpoint methods.
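For example, a minimal sketch inside your @endpoints.api service class, assuming the two message classes from the question and VoidMessage requests (the paths and names are illustrative):

from protorpc import message_types

@endpoints.method(message_types.VoidMessage, MaleResponseMessage,
                  path='male', http_method='GET', name='male.get')
def get_male(self, request):
    return MaleResponseMessage()  # populate male-specific fields

@endpoints.method(message_types.VoidMessage, FemaleResponseMessage,
                  path='female', http_method='GET', name='female.get')
def get_female(self, request):
    return FemaleResponseMessage()  # populate female-specific fields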

Related

Branching Workflows based on value of specified Page field

I have a DailyReflectionPage Model with a reflection_date field that forms the basis for the Page's slug, which is in the form YYYY-MM-DD. Here's an extract of my Page model:
class DailyReflectionPage(Page):
    """
    The Daily Reflection Model
    """
    ...
    reflection_date = models.DateField("Reflection Date", max_length=254)
    ...

    @cached_property
    def date(self):
        """
        Returns the Reflection's date as a string in %Y-%m-%d format
        """
        fmt = "%Y-%m-%d"
        date_as_string = self.reflection_date.strftime(fmt)
        return date_as_string
    ...

    def full_clean(self, *args, **kwargs):
        # first call the built-in cleanups (including default slug generation)
        super(DailyReflectionPage, self).full_clean(*args, **kwargs)
        # now make your additional modifications
        if self.slug != self.date:  # compare values with '!=', not identity with 'is not'
            self.slug = self.date
    ...
These daily reflections are written by different authors, as part of a booklet that is published towards the end of the year, for use in the coming year. I would like to have a workflow where, for instance, the daily reflections from January to June are reviewed by one group, and those from July to December are reviewed by another group, as illustrated in the diagram below:
How can this be achieved?
This can be achieved by creating ONE new Workflow Task type that has a relationship to two sets of User Groups (e.g. a/b or before/after; it is probably best to keep this generic in the model definition).
This new Task can be created as part of a new Workflow within the Wagtail admin, and each of the groups linked to the Moderator Group 1 / 2.
Wagtail's Task methods let you return approval options based on the Page model for any created workflow; from there you can look for a method on the Page class and assign the groups accordingly.
The benefit of the more generic approach is that you can leverage it for any splitting of moderator assignments in future Workflow tasks.
Implementation Overview
1 - Read the Wagtail docs on how to add a new Task type, and the Task model reference, to understand this process.
2 - Read through the full implementation in the code of the built-in GroupApprovalTask.
3 - In GroupApprovalTask you can see that the overridden methods all rely on checking self.groups, but they all get the page passed in as an arg.
4 - Create a new Task that extends the Wagtail Task class, and on this model create two ManyToManyFields that link two sets of user groups (note: you do not have to do this as two fields, you could put a model in the middle, but the example below is just the simplest way to get to the goal).
5 - On the DailyReflectionPage model create a method get_approval_group_key which returns, say, a simple Boolean or an 'A' or 'B' based on the business requirements you described above (check the model's date etc.).
6 - In your custom Task create a method that abstracts the checking of the Page and returns the Task's user groups, e.g. get_approval_groups. You may want to add some error handling and default values.
7 - Add a custom override for each of the start, user_can_access_editor, page_locked_for_user, user_can_lock, user_can_unlock and get_task_states_user_can_moderate methods that calls get_approval_groups with the page and returns the values (see the GroupApprovalTask code for what these should do).
Example Code Snippets
models.py
class DailyReflectionPage(Page):
    """
    The Daily Reflection Model
    """

    def get_approval_group_key(self):
        # custom logic here that checks all the date stuff
        if date_is_after_foo:
            return 'A'
        return 'B'
class SplitGroupApprovalTask(Task):
    # note: this is the simplest approach, two fields of linked groups;
    # you could further refine this approach as needed
    groups_a = models.ManyToManyField(
        Group,
        help_text="Pages at this step in a workflow will be moderated or approved by these groups of users",
        related_name="split_task_group_a",
    )
    groups_b = models.ManyToManyField(
        Group,
        help_text="Pages at this step in a workflow will be moderated or approved by these groups of users",
        related_name="split_task_group_b",
    )

    admin_form_fields = Task.admin_form_fields + ["groups_a", "groups_b"]
    admin_form_widgets = {
        "groups_a": forms.CheckboxSelectMultiple,
        "groups_b": forms.CheckboxSelectMultiple,
    }

    def get_approval_groups(self, page):
        """This method gets used by all checks when determining which groups to allow/assign this Task to"""
        # recommend some checks here, e.g. what if `get_approval_group_key` is not on the Page?
        approval_group = page.specific.get_approval_group_key()
        if approval_group == 'A':
            return self.groups_a
        return self.groups_b

    # each of the following methods will need to be implemented, all checking
    # for the correct groups for the Page when called
    # def start(self, ...etc)
    # def user_can_access_editor(self, ...etc)
    # def page_locked_for_user(self, ...etc)
    # def user_can_lock(self, ...etc)
    # def user_can_unlock(self, ...etc)

    def get_task_states_user_can_moderate(self, user, **kwargs):
        # Note: this has not been tested; as this method does not get `page`,
        # we must find the allowed tasks indirectly via their TaskState page revisions
        task_states = TaskState.objects.filter(
            status=TaskState.STATUS_IN_PROGRESS, task=self.task_ptr
        ).select_related('page_revision', 'task', 'page_revision__page')
        filtered_task_states = []
        for task_state in task_states:
            page = task_state.page_revision.page
            groups = self.get_approval_groups(page)
            if groups.filter(id__in=user.groups.all()).exists() or user.is_superuser:
                filtered_task_states.append(task_state)
        return TaskState.objects.filter(pk__in=[ts.pk for ts in filtered_task_states])

    def get_actions(self, page, user):
        # essentially a copy of this method on `GroupApprovalTask`, but with
        # the ability to have a dynamic 'group' returned
        approval_groups = self.get_approval_groups(page)
        if approval_groups.filter(id__in=user.groups.all()).exists() or user.is_superuser:
            return [
                ('reject', "Request changes", True),
                ('approve', "Approve", False),
                ('approve', "Approve with comment", True),
            ]
        return super().get_actions(page, user)

Is it possible to set two fields as indexes on an entity in ndb?

I am new to ndb and GAE and am having trouble coming up with a good solution for keys and indexes.
Let's say we have a user model like this:
class User(ndb.Model):
    name = ndb.StringProperty()
    email = ndb.StringProperty(required=True)
    fb_id = ndb.StringProperty()
Upon login, if I were to check against the email address with a query, I believe this would be quite slow and inefficient; possibly it has to do a full table scan.
q = User.query(User.email == EMAIL)
user = q.fetch(1)
I believe it would be much faster if User models were saved with the email as their key:
user = User(id=EMAIL)
user.put()
That way I could retrieve them a lot faster (so I believe), like this:
key = ndb.Key('User', EMAIL)
user = key.get()
So far, if I am wrong please correct me. But after implementing this I realized there is a chance that Facebook users might change their email address, so that upon a new OAuth 2.0 connection their new email wouldn't be recognized in the system and they would be created as a new user. Hence maybe I should use a different approach:
using the social-media-provider-id (unique for all of a provider's users) together with the provider-name (in the rare case that a Twitter and a Facebook user share the same provider-id).
However, in order to achieve this I would need to set two indexes, which I believe is not possible.
So what could I do? Shall I concatenate both fields as a single key and index on that?
e.g. the new idea would be:
class User(ndb.Model):
    name = ndb.StringProperty()
    email = ndb.StringProperty(required=True)
    provider_id = ndb.StringProperty()
    provider_type = ndb.StringProperty()
saving:
provider_id = '1234'
provider_type = 'fb'
user = User(id=provider_id + provider_type)
user.put()
retrieval:
provider_id = '1234'
provider_type = 'fb'
key = ndb.Key('User', provider_id + provider_type)
user = key.get()
This way we don't care any more if the user changes the email address on their social media account.
Is this idea sound?
Thanks,
UPDATE
Tim's solution sounds the cleanest so far, and likely also the fastest, but I came across a problem.
class AuthProvider(polymodel.PolyModel):
    user_key = ndb.KeyProperty(kind=User)
    active = ndb.BooleanProperty(default=True)
    date_created = ndb.DateTimeProperty(auto_now_add=True)

    @property
    def user(self):
        return self.user_key.get()

class FacebookLogin(AuthProvider):
    pass
View.py: Within facebook_callback method
user = None  # initialize so the check below works when no provider was found
provider = ndb.Key('FacebookLogin', fb_id).get()
# Problem is right here: provider is always None, unless I use the PolyModel like this:
#     ndb.Key('AuthProvider', fb_id).get()
# But this defeats the whole purpose of having different subclasses as different providers.
# Maybe I am using the key handling wrong?
if provider:
    user = provider.user
else:
    provider = FacebookLogin(id=fb_id)
if not user:
    user = User()
    user_key = user.put()
    provider.user_key = user_key
    provider.put()
return user
One slight variation on your approach, which could allow a more flexible model, is to create a separate entity for the auth details, with the provider_id and provider_type (or any other auth scheme you come up with) as the key.
This entity then holds a reference (key) to the actual user details.
You can then:
do a direct get() for the auth details, then get() the actual user details;
change the auth details without actually rewriting/rekeying the user details;
support multiple auth schemes for a single user.
I use this approach for an application that has > 2000 users; most use a custom auth scheme (app-specific userid/passwd) or a Google account. E.g.:
from google.appengine.ext.ndb import polymodel

class AuthLogin(polymodel.PolyModel):
    user_key = ndb.KeyProperty(kind=User)
    status = ndb.StringProperty()  # maybe you need to disable a particular login without deleting it
    date_created = ndb.DateTimeProperty(auto_now_add=True)

    @property
    def user(self):
        return self.user_key.get()

class FacebookLogin(AuthLogin):
    pass  # some additional Facebook-specific properties

class TwitterLogin(AuthLogin):
    pass  # some additional Twitter-specific properties

etc...
By using PolyModel as the base class, you can do AuthLogin.query().filter(AuthLogin.user_key == user.key) and get all auth types defined for that user, as they all share the same base class AuthLogin. You need this because otherwise you would have to query each supported auth type in turn: you cannot do a kindless query without an ancestor, and in this case we can't use the User as the ancestor because then we couldn't do a simple get() from the login id.
However, note that all subclasses of AuthLogin share the same kind in the key ('AuthLogin'), so you still need to concatenate the auth provider and auth type for the key's id to ensure your keys are unique. E.g.:
dev~fish-and-lily> from google.appengine.ext.ndb.polymodel import PolyModel
dev~fish-and-lily> class X(PolyModel):
... pass
...
dev~fish-and-lily> class Y(X):
... pass
...
dev~fish-and-lily> class Z(X):
... pass
...
dev~fish-and-lily> y = Y(id="abc")
dev~fish-and-lily> y.put()
Key('X', 'abc')
dev~fish-and-lily> z = Z(id="abc")
dev~fish-and-lily> z.put()
Key('X', 'abc')
dev~fish-and-lily> y.key.get()
Z(key=Key('X', 'abc'), class_=[u'X', u'Z'])
dev~fish-and-lily> z.key.get()
Z(key=Key('X', 'abc'), class_=[u'X', u'Z'])
This is the problem you ran into. By adding the provider type as part of the key you now get distinct keys.
dev~fish-and-lily> z = Z(id="Zabc")
dev~fish-and-lily> z.put()
Key('X', 'Zabc')
dev~fish-and-lily> y = Y(id="Yabc")
dev~fish-and-lily> y.put()
Key('X', 'Yabc')
dev~fish-and-lily> y.key.get()
Y(key=Key('X', 'Yabc'), class_=[u'X', u'Y'])
dev~fish-and-lily> z.key.get()
Z(key=Key('X', 'Zabc'), class_=[u'X', u'Z'])
dev~fish-and-lily>
I don't believe this is any less convenient a model for you.
Does all that make sense ;-)
While @Greg's answer seems OK, I think it's actually a bad idea to use an external type/id as the key for your entity, because this solution doesn't scale very well.
What if you would like to implement your own username/password at some point?
What if the user deletes their Facebook account?
What if the same user wants to sign in with a Twitter account as well?
What if the user has more than one Facebook account?
So the idea of having the type/id as the key looks weak. A better solution would be to have a field for every provider that stores only the id, for example facebook_id, twitter_id, google_id etc., and query on these fields to retrieve the actual user. This happens only during the sign-in and sign-up process, so it's not that often. Of course you will have to add some logic to attach another provider to an already existing user, or to merge users if the same person signed in with a different provider.
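A minimal sketch of that shape (the field and variable names here are illustrative):

class User(ndb.Model):
    name = ndb.StringProperty()
    facebook_id = ndb.StringProperty()
    twitter_id = ndb.StringProperty()
    google_id = ndb.StringProperty()

# during sign-in, look the user up by the provider's id
user = User.query(User.facebook_id == fb_id).get()
if user is None:
    user = User(facebook_id=fb_id)
    user.put()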
Still, the last solution won't work if you want to support multiple sign-ins from the same provider. To achieve that you would have to create another model that stores only the external providers/ids and associates them with your user model.
As an example of the second solution you can check my gae-init project, where I'm storing the 3 different providers in the User model and working with them in the auth.py module. Again, this solution doesn't scale very well with more providers and doesn't support multiple IDs from the same provider.
Concatenating the user-type with their ID is sensible.
You can save on your read and write costs by not duplicating the type and ID as properties, though; when you need them, just split the ID back up. (Doing this will be simpler if you include a separator between the parts, '%s|%s' % (provider_type, provider_id) for example.)
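A minimal sketch of the compose/split round trip, reusing the User model from the question (the helper names are made up for illustration):

def make_user_id(provider_type, provider_id):
    # compose a single key id from the two parts
    return '%s|%s' % (provider_type, provider_id)

def split_user_id(user_id):
    # recover the parts on demand instead of storing them as properties
    provider_type, provider_id = user_id.split('|', 1)
    return provider_type, provider_id

user = User(id=make_user_id('fb', '1234'), name='A user', email='auser@example.com')
user.put()
provider_type, provider_id = split_user_id(user.key.id())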
If you want to use a single model, you can do something like:
class User(ndb.Model):
    name = ndb.StringProperty()
    email = ndb.StringProperty(required=True)
    providers = ndb.KeyProperty(repeated=True)

auser = User(id="auser", name="A user", email="auser@example.com")
auser.providers = [
    ndb.Key("ProviderName", "fb", "ProviderId", 123),
    ndb.Key("ProviderName", "tw", "ProviderId", 123),
]
auser.put()
To query for a specific FB login, you simply do:
fbkey = ndb.Key("ProviderName", "fb", "ProviderId", 123)
for entry in User.query(User.providers == fbkey):
    # Do something with the entry
    pass
As ndb does not provide an easy way to create a unique constraint, you could use the _pre_put_hook to ensure that providers is unique.
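A best-effort sketch of such a hook against the model above (note it is not transactional, so concurrent puts can still race):

class User(ndb.Model):
    name = ndb.StringProperty()
    email = ndb.StringProperty(required=True)
    providers = ndb.KeyProperty(repeated=True)

    def _pre_put_hook(self):
        # reject the put if another User already claims one of these provider keys
        for provider_key in self.providers:
            existing = User.query(User.providers == provider_key).get(keys_only=True)
            if existing is not None and existing != self.key:
                raise ValueError('provider %s is already linked to another user' % provider_key)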

Django Models: How to setup these DB constraints on the fields?

Suppose I have the following Model:
class myClassObj(models.Model):
    flag1 = models.NullBooleanField()
    flag2 = models.BooleanField()
Now also suppose I want the Database to enforce the following constraint:
flag1 should be None if and only if flag2 is false
How can I write the constraints in this model so that this condition is checked any time a myClassObj is created or edited? I see some interesting information here. But I don't see how to specify an "iff" constraint as I described above.
The Django documentation recommends doing custom validation where access to multiple fields is required by overriding Model.clean().
This example from the documentation shows how it's possible to validate that a news article still in the "draft" phase does not have a publication date.
def clean(self):
    import datetime
    from django.core.exceptions import ValidationError
    # Don't allow draft entries to have a pub_date.
    if self.status == 'draft' and self.pub_date is not None:
        raise ValidationError('Draft entries may not have a publication date.')
    # Set the pub_date for published items if it hasn't been set already.
    if self.status == 'published' and self.pub_date is None:
        self.pub_date = datetime.date.today()
For more detailed information, see the full reference here: https://docs.djangoproject.com/en/dev/ref/models/instances/#validating-objects
To have this called every time you save the object you'll also need to override the save method: https://docs.djangoproject.com/en/dev/topics/db/models/#overriding-model-methods.
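Putting this together for the model in the question, a sketch might look like this; the "iff" rule lives in clean(), and save() forces validation:

from django.core.exceptions import ValidationError
from django.db import models

class myClassObj(models.Model):
    flag1 = models.NullBooleanField()
    flag2 = models.BooleanField()

    def clean(self):
        # flag1 must be None if and only if flag2 is False
        if (self.flag1 is None) != (self.flag2 is False):
            raise ValidationError('flag1 must be None exactly when flag2 is False.')

    def save(self, *args, **kwargs):
        self.full_clean()  # runs field checks, clean(), and uniqueness checks
        super(myClassObj, self).save(*args, **kwargs)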
Another useful reference for other use cases if you only need to validate a single field is writing custom validators: https://docs.djangoproject.com/en/dev/ref/validators/
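For instance, the single-field validator pattern from the docs looks like:

from django.core.exceptions import ValidationError

def validate_even(value):
    # attach via: models.IntegerField(validators=[validate_even])
    if value % 2 != 0:
        raise ValidationError('%s is not an even number' % value)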

How can I mimic 'select_related' using google-appengine and django-nonrel?

django-nonrel's documentation states: "you have to manually write code for merging the results of multiple queries (JOINs, select_related(), etc.)".
Can someone point me to any snippets that manually add the related data? @nickjohnson has an excellent post showing how to do this with the straight App Engine models, but I'm using django-nonrel.
For my particular use I'm trying to get the UserProfiles with their related User models. This should be just two simple queries, then match the data.
However, using django-nonrel, a new query gets fired off for each result in the queryset. How can I get access to the related items in a 'select_related' sort of way?
I've tried this, but it doesn't seem to work as I'd expect. Looking at the rpc stats, it still seems to be firing a query for each item displayed.
all_profiles = UserProfile.objects.all()
user_pks = set()
for profile in all_profiles:
    user_pks.add(profile.user_id)  # a way to access the pk without triggering the query
users = User.objects.filter(pk__in=user_pks)
for profile in all_profiles:
    profile.user = get_matching_model(profile.user_id, users)

def get_matching_model(key, queryset):
    """Use a generator expression to get the next match for a given key"""
    try:
        return (model for model in queryset if model.pk == key).next()
    except StopIteration:
        return None
UPDATE:
Ick... I figured out what my issue was.
I was trying to improve the efficiency of the changelist_view in the Django admin. It seemed that the select_related logic above was still producing additional queries for each row in the result set when a foreign key was in my 'list_display'. However, I traced it down to something different: the above logic does not produce multiple queries (though if you mimic Nick Johnson's way more closely it will look a lot prettier).
The issue is that in django.contrib.admin.views.main, on line 117 inside the ChangeList class, there is the following code: result_list = self.query_set._clone(). So even though I was properly overriding the queryset in the admin and selecting the related items, this method triggered a clone of the queryset, which does NOT keep the attributes I had added to the models for my 'select related', resulting in an even more inefficient page load than when I started.
Not sure what to do about it yet, but the code that selects related stuff is just fine.
I don't like answering my own question, but the answer might help others.
Here is my solution that will get related items on a queryset based entirely on Nick Johnson's solution linked above.
from collections import defaultdict

def get_with_related(queryset, *attrs):
    """
    Adds related attributes to a queryset in a more efficient way
    than simply triggering the new query on access at runtime.
    attrs must be valid foreign keys or one-to-one fields on the queryset model
    """
    # Make a list of (entity, related attribute) pairs to grab, for all possibilities
    fields = [(model, attr) for model in queryset for attr in attrs]
    # We'll need to make one query for each related attribute because
    # I don't know how to get everything at once. So, we make a list
    # of the attributes to fetch and pks to fetch.
    ref_keys = defaultdict(list)
    for model, attr in fields:
        ref_keys[attr].append(get_value_for_datastore(model, attr))
    # Now make the actual queries for each attribute and store the results
    # in a dict of {pk: model} for easy matching later
    ref_models = {}
    for attr, pk_vals in ref_keys.items():
        related_queryset = queryset.model._meta.get_field(attr).rel.to.objects.filter(pk__in=set(pk_vals))
        ref_models[attr] = dict((x.pk, x) for x in related_queryset)
    # Finally, put the related items on their models
    for model, attr in fields:
        setattr(model, attr, ref_models[attr].get(get_value_for_datastore(model, attr)))
    return queryset

def get_value_for_datastore(model, attr):
    """
    Django's foreign key fields all have an attribute '<field>_id' where
    you can access the pk of the related field without fetching the
    actual value.
    """
    return getattr(model, attr + '_id')
To modify the queryset in the admin to make use of the select related, we have to jump through a couple of hoops. Here is what I've done. The only thing changed in the get_results method of AppEngineRelatedChangeList is that I removed the self.query_set._clone() and just used self.query_set instead.
from django.contrib.admin.options import IncorrectLookupParameters
from django.contrib.admin.views.main import ChangeList
from django.core.paginator import InvalidPage

class UserProfileAdmin(admin.ModelAdmin):
    list_display = ('username', 'user', 'paid')
    select_related_fields = ['user']

    def get_changelist(self, request, **kwargs):
        return AppEngineRelatedChangeList

class AppEngineRelatedChangeList(ChangeList):
    def get_query_set(self):
        qs = super(AppEngineRelatedChangeList, self).get_query_set()
        related_fields = getattr(self.model_admin, 'select_related_fields', [])
        return get_with_related(qs, *related_fields)

    def get_results(self, request):
        paginator = self.model_admin.get_paginator(request, self.query_set, self.list_per_page)
        # Get the number of objects, with admin filters applied.
        result_count = paginator.count
        # Get the total number of objects, with no admin filters applied.
        # Perform a slight optimization: Check to see whether any filters were
        # given. If not, use paginator.hits to calculate the number of objects,
        # because we've already done paginator.hits and the value is cached.
        if not self.query_set.query.where:
            full_result_count = result_count
        else:
            full_result_count = self.root_query_set.count()
        can_show_all = result_count <= self.list_per_page
        multi_page = result_count > self.list_per_page
        # Get the list of objects to display on this page.
        if (self.show_all and can_show_all) or not multi_page:
            result_list = self.query_set
        else:
            try:
                result_list = paginator.page(self.page_num + 1).object_list
            except InvalidPage:
                raise IncorrectLookupParameters
        self.result_count = result_count
        self.full_result_count = full_result_count
        self.result_list = result_list
        self.can_show_all = can_show_all
        self.multi_page = multi_page
        self.paginator = paginator

Use a db.StringProperty() as unique identifier in Google App Engine

I just have a hunch about this, but it feels like I'm doing it the wrong way. What I want is to have a db.StringProperty() as a unique identifier. I have a simple db.Model with the properties name and file. If I add another entry with the same "name" as one already in the db.Model, I want to update it instead.
As of now I look it up with:
template = Templates.all().filter('name = ', name)
Check if there's already an entry:
if template.count() > 0:
Then add it or update it. But from what I've read, .count() is very expensive in CPU usage.
Is there a way to set the "name" property to be unique so the datastore will automatically update it, or is there another, better way to do this?
..fredrik
You can't make a property unique in the App Engine datastore. What you can do instead is to specify a key name for your model, which is guaranteed to be unique - see the docs for details.
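A minimal sketch of that with the old db API, assuming the Templates model from the question (get_by_key_name is a fast key lookup, not a query):

template = Templates.get_by_key_name(name)
if template is None:
    template = Templates(key_name=name)
# set or update the entity's properties either way, then save
template.file = new_file
template.put()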
I was having the same problem and came up with the following as the simplest answer:
class Car(db.Model):
    name = db.StringProperty(required=True)

    def __init__(self, *args, **kwargs):
        super(Car, self).__init__(*args, **kwargs)
        loadingAnExistingCar = ("key" in kwargs.keys() or "key_name" in kwargs.keys())
        if not loadingAnExistingCar:
            self.__makeSureTheCarsNameIsUnique(kwargs['name'])

    def __makeSureTheCarsNameIsUnique(self, name):
        existingCarWithTheSameName = Car.GetByName(name)
        if existingCarWithTheSameName:
            raise UniqueConstraintValidationException("Car should be unique by name")

    @staticmethod
    def GetByName(name):
        return Car.all().filter("name", name).get()
It's important to note that I first check whether we are loading an existing entity.
For the complete solution : http://nicholaslemay.blogspot.com/2010/07/app-engine-unique-constraint.html
You can just try to get your entity and edit it, and if it's not found, create a new one:
template = Templates.gql('WHERE name = :1', name).get()
if template is None:
    template = Templates()
# do your thing to set the entity's properties
template.put()
That way it will insert a new entry when it wasn't found, and if it was found it will update the existing entry with the changes you made (see documentation here).
An alternative solution is to create a model to store the unique values, and store it transationally using a combination of Model.property_name.value as key. Only if that value is created you save your actual model. This solution is described (with code) here:
http://squeeville.com/2009/01/30/add-a-unique-constraint-to-google-app-engine/
I agree with Nick. But, if you do ever want to check for model/entity existence based on a property, the get() method is handy:
template = Templates.all().filter('name = ', name).get()
if template is None:
    pass  # doesn't exist
else:
    pass  # exists
I wrote some code to do this. The idea is for it to be pretty easy to use, so you can do this:
if register_property_value('User', 'username', 'sexy_bbw_vixen'):
    return 'Successfully registered sexy_bbw_vixen as your username!'
else:
    return 'The username sexy_bbw_vixen is already in use.'
This is the code. There are a lot of comments, but it's actually only a few lines:
# This entity type is a registry. It doesn't hold any data, but
# each entity is keyed to an Entity_type-Property_name-Property_value.
# This allows a transaction to 'register' a property value. It returns
# 'False' if the property value is already in use, and thus cannot be used
# again, or 'True' if the property value was not in use and was successfully
# 'registered'.
class M_Property_Value_Register(db.Expando):
    pass

# This is the transaction. It returns 'False' if the value is already
# in use, or 'True' if the property value was successfully registered.
def _register_property_value_txn(in_key_name):
    entity = M_Property_Value_Register.get_by_key_name(in_key_name)
    if entity is not None:
        return False
    entity = M_Property_Value_Register(key_name=in_key_name)
    entity.put()
    return True

# This is the function called by your code. It constructs a key name
# from your Model-Property-Property_value trio and then runs a transaction
# that attempts to register the new property value. It returns 'True' if the
# value was successfully registered, or 'False' if the value was already in use.
def register_property_value(model_name, property_name, property_value):
    key_name = model_name + '_' + property_name + '_' + property_value
    return db.run_in_transaction(_register_property_value_txn, key_name)
