SQLAlchemy: foreignKeys from multiple Tables (Many-to-Many) - database

I'm using the Flask-SQLAlchemy ORM in my Flask app, which is about smart-home sensors and actuators (for the sake of simplicity let's call them Nodes).
Now I want to store an Event which is bound to Nodes whose state should be checked, and to other (or the same) Nodes which should be set to a given value once the state of the first ones reaches a threshold.
Additionally, the states could be checked or set from/for Groups or Scenes. So I have three different foreign keys to check and another three to set. There can be more than one key per type and multiple types per Event.
Here is an example code with the db.Models and pseudocode what I expect to get stored in an Event:
db = SQLAlchemy()

class Node(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    value = db.Column(db.String(20))
    # columns snipped out

class Group(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    value = db.Column(db.String(20))
    # columns snipped out

class Scene(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    value = db.Column(db.String(20))
    # columns snipped out

class Event(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    # The following columns may belong in an intermediate table,
    # but I have no clue how to design that under these conditions
    constraints = # list of foreign keys from different tables (Node/Group/Scene),
                  # with a threshold per key
    target = # list of foreign keys from different tables (Node/Group/Scene),
             # with a target value per key
In the end I want to be able to check whether any of my Events are true and, if so, set the bound Node/Group/Scene accordingly.
It may be a database design problem (and not an SQLAlchemy one), but I want to make use of the advantages of the SQLAlchemy ORM here.
Inspired by this and that answer I tried to dig deeper, but other questions on SO were about more specific problems or about one-to-many relationships.
Any hints or design tips are much appreciated. Thanks!

I ended up with a trade-off between usability and lines of code. My first thought was to save as much code as possible (DRY) and to define as few tables as possible.
As SQLAlchemy itself points out in one of its examples, the "generic foreign key" is only supported because it was often requested, not because it is a good solution. With it, less database functionality is used and the application has to take care of key constraints instead.
On the other hand, they say that having more tables in your database does not affect database performance.
So I tried some approaches and found one that fits my use case. Instead of a "normal" intermediate table for the many-to-many relationship, I use another SQLAlchemy class (an association object) with a one-to-many relationship to each side, connecting the two tables.
class Event(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    nodes = db.relationship('NodeEvent', back_populates='events')
    # columns snipped out

    def get_as_dict(self):
        return {
            "id": self.id,
            "nodes": [n.get_as_dict() for n in self.nodes]
        }

class Node(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    value = db.Column(db.String(20))
    events = db.relationship('NodeEvent', back_populates='node')
    # columns snipped out

class NodeEvent(db.Model):
    ev_id = db.Column('ev_id', db.Integer, db.ForeignKey('event.id'), primary_key=True)
    n_id = db.Column('n_id', db.Integer, db.ForeignKey('node.id'), primary_key=True)
    value = db.Column('value', db.String(200), nullable=False)
    compare = db.Column('compare', db.String(20), nullable=True)
    node = db.relationship('Node', back_populates='events')
    events = db.relationship('Event', back_populates='nodes')

    def get_as_dict(self):
        return {
            "trigger_value": self.value,
            # 'status' is presumably among the Node columns snipped out above
            "actual_value": self.node.status,
            "compare": self.compare
        }
The trade-off is that I have to define a new association class every time I bind another table to that relationship. But with the "generic foreign key" approach I would also have to check where the foreign key is coming from, so it's the same amount of work at the end of the day.
With my get_as_dict() method I get very handy access to the related data.
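For completeness, here is a minimal usage sketch of the association objects above (the session handling, the example values and the compare operators "gt"/"lt"/"eq" are assumptions for illustration, not part of the original models):

node = Node(value="21.5")
event = Event()
# NodeEvent.events is the (scalar) many-to-one side pointing at the Event.
link = NodeEvent(node=node, events=event, value="25", compare="gt")
db.session.add_all([node, event, link])
db.session.commit()

def event_is_true(ev):
    # Hypothetical helper: True when every bound Node meets its threshold.
    ops = {"gt": lambda actual, trigger: float(actual) > float(trigger),
           "lt": lambda actual, trigger: float(actual) < float(trigger),
           "eq": lambda actual, trigger: actual == trigger}
    return all(ops[ne.compare](ne.node.value, ne.value) for ne in ev.nodes)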

Related

Tracking item order for storage to and retrieval from a DB

I'm trying to figure out how I'm going to 'CRUD' the order of items I have in a group that I'm storing in a database. (Pseudo statement: select * from items where group_id = 1;)
My guess is that I just use a numeric field and increase/decrease the number as items are added to/removed from the group. I can then update the number in this field as items are moved around. However, I've seen this go really badly wrong in an old legacy app where items would get out of sync and you'd end up with a group whose order looked like this:
0,1,1,3,4,5
0,1,1,1,4,5
This wasn't handled gracefully by the application either; it broke the application and required manual intervention to reorder the items in the DB.
Is there a way to avoid this pitfall?
EDIT: I would also like the items to be available in multiple groups, with a different order per group.
I think in that case I would need a many-to-many relationship both for the group-to-item relationship and for the item-to-order relationship. /EDIT
I'll be doing this in the Django framework.
I'm not really sure what you are asking, because ordering is one thing and grouping of related objects is something else entirely.
Databases don't store the order of things, but rather the relationships (grouping) of things. The order of things is a user interface detail and not something that a database should be used for.
In Django, you can create a ManyToMany relationship. This essentially creates a "box" where you can add and remove items that are related to a particular model. Here is the example from the documentation:
from django.db import models

class Publication(models.Model):
    title = models.CharField(max_length=30)

    # On Python 3: def __str__(self):
    def __unicode__(self):
        return self.title

    class Meta:
        ordering = ('title',)

class Article(models.Model):
    headline = models.CharField(max_length=100)
    publications = models.ManyToManyField(Publication)

    # On Python 3: def __str__(self):
    def __unicode__(self):
        return self.headline

    class Meta:
        ordering = ('headline',)
Here an Article can belong to many Publications, and Publications have one or more Articles associated with them:
a = Article.objects.create(headline='Hello')
b = Article.objects.create(headline='World')
p = Publication.objects.create(title='My Publication')
p.article_set.add(a)
p.article_set.add(b)
p.save()
# You can also add an article to a publication from the article object:
c = Article.objects.create(headline='The Answer is 42')
c.publications.add(p)
To know how many articles belong to a publication:
Publication.objects.get(title='My Publication').article_set.count()
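If you do end up needing a persisted per-group order (the scenario described in the question's edit), one common pattern is to make the ordering explicit with a through model that carries the position. A minimal sketch, with Group, Item and Membership as hypothetical names:

from django.db import models

class Item(models.Model):
    name = models.CharField(max_length=100)

class Group(models.Model):
    name = models.CharField(max_length=100)
    items = models.ManyToManyField(Item, through='Membership')

class Membership(models.Model):
    group = models.ForeignKey(Group)
    item = models.ForeignKey(Item)
    position = models.PositiveIntegerField()  # this item's slot within this group

    class Meta:
        ordering = ('position',)
        # Rejects the duplicated positions described in the question at the DB level.
        unique_together = (('group', 'position'),)

# Items of a group, in order:
# Item.objects.filter(membership__group=some_group).order_by('membership__position')

Because each Item can appear in many Membership rows, the same item can sit in several groups with a different position in each.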

Maintain uniqueness of a property in the NDB database

An NDB model contains two properties: email and password. How can I avoid adding two records with the same email to the database? NDB doesn't have a UNIQUE option for a property like relational databases do.
Checking that the new email is not in the database before adding it won't satisfy me, because two parallel processes could both do the check simultaneously and each add the same email.
I'm not sure that transactions can help here; I'm under this impression after reading some of the manuals. Maybe synchronous transactions? Does that mean one at a time?
Create the key of the entity from the email, then use get_or_insert to check whether it exists.
Also read about keys, entities, and models.
# Add
key_a = ndb.Key(Person, email)
person = Person(key=key_a)
person.put()

# Insert unique
a = Person.get_or_insert(email)
or if you want to just check
# Add
key_a = ndb.Key(Person, email)
person = Person(key=key_a)
person.put()

# Check if it's added
new_key_a = ndb.Key(Person, email)
a = new_key_a.get()
if a is not None:
    return
Take care: changing the email will be really difficult (you need to create a new entity and copy all child entities to the new parent).
To get around that, you could store the email in another entity and have the User be the parent of it.
Another way is to use transactions and check the email property. Transactions work on a "first to commit wins" basis, which means that if two users check for an email, only the first (lucky) one will succeed, so your data stays consistent.
Maybe you are looking for the webapp2 authentication module, which can handle this for you. It can be imported with import webapp2_extras.appengine.auth.models. Look here for a complete example.
I also ran into this problem, and the solutions above didn't solve it for me:
making the email a key name was unacceptable in my case (I need the property to be changeable in the future)
using transactions on the email property doesn't work AFAIK (you can't query on non-key properties inside transactions, so you can't check whether the email already exists).
I ended up creating a separate model with no properties, with the unique property (the email address) as the key name. In the main model I store a reference to the email entity (instead of storing the email as a string). Then I can make change_email a transaction that checks for uniqueness by looking the email up by key.
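A rough sketch of that approach, with hypothetical names (EmailRef for the email-keyed model, Account for the main model, change_email for the transactional update):

from google.appengine.ext import ndb

class EmailRef(ndb.Model):
    # No properties: the key name is the email address itself.
    pass

class Account(ndb.Model):
    email_ref = ndb.KeyProperty(kind=EmailRef)

@ndb.transactional(xg=True)
def change_email(account_key, new_email):
    # Lookups by key (unlike property queries) are allowed inside transactions,
    # so the uniqueness check here is race-free.
    new_key = ndb.Key(EmailRef, new_email)
    if new_key.get() is not None:
        raise ValueError('Email already in use')
    account = account_key.get()
    old_key = account.email_ref
    EmailRef(key=new_key).put()
    account.email_ref = new_key
    account.put()
    if old_key is not None:
        old_key.delete()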
This is something that I've come across as well, and I settled on a variation of @Remko's solution. My main issue with checking for an existing entity with the given email is the potential race condition the OP stated. I added a separate model that uses the email address as its key name and has a property that holds a token. By using get_or_insert, the returned entity's token can be checked against the token passed in; if they match, the entity was inserted by this call.
import os

from google.appengine.ext import ndb


class UniqueEmail(ndb.Model):
    token = ndb.StringProperty()


class User(ndb.Model):
    email = ndb.KeyProperty(kind=UniqueEmail, required=True)
    password = ndb.StringProperty(required=True)


def create_user(email, password):
    # Hex-encode the random token so it is safe to store in a StringProperty.
    token = os.urandom(24).encode('hex')
    unique_email = UniqueEmail.get_or_insert(email, token=token)
    if token == unique_email.token:
        # If the tokens match, that means a UniqueEmail entity
        # was inserted by this process, so the address was free.
        # Code to create User goes here.
        pass
    else:
        # The tokens do not match, therefore the UniqueEmail entity
        # was retrieved, so the email is already in use.
        raise ValueError('That user already exists.')
I implemented a generic structure to control unique properties. The solution can be used for several kinds and properties, and it is transparent to other developers: they use the NDB methods put and delete as usual.
1) Kind UniqueCategory: a list of unique properties, used to group the information. Example:
‘User.nickname’
2) Kind Unique: it contains the values of each unique property. The key is the property value you want to control. I save the urlsafe string of the main entity instead of the key or key.id() because it is more practical, it has no problem with parents, and it can be used for different kinds. Example:
parent: User.nickname
key: AVILLA
reference_urlsafe: ahdkZXZ-c3RhcnQtb3BlcmF0aW9uLWRldnINCxIEVXNlciIDMTIzDA (User key)
3) Kind User: for instance, I want to control unique values for email and nickname. I created a list called ‘uniqueness’ with the unique properties. I overrode the put method in transactional mode and wrote the _post_delete_hook hook, which runs when an entity is deleted.
4) Exception ENotUniqueException: custom exception class raised when some value is duplicated.
5) Procedure check_uniqueness: checks whether a value is duplicated.
6) Procedure delete_uniqueness: deletes the unique values when the main entity is deleted.
Any tips or improvements are welcome.
class UniqueCategory(ndb.Model):
    # Key = [kind name].[property name]
    pass


class Unique(ndb.Model):
    # Parent = UniqueCategory
    # Key = property value
    reference_urlsafe = ndb.StringProperty(required=True)


class ENotUniqueException(Exception):
    def __init__(self, property_name):
        super(ENotUniqueException, self).__init__(
            'Property value {0} is duplicated'.format(property_name))
        self.property_name = property_name


class User(ndb.Model):
    # Key = Firebase UUID or automatically generated
    firstName = ndb.StringProperty(required=True)
    surname = ndb.StringProperty(required=True)
    nickname = ndb.StringProperty(required=True)
    email = ndb.StringProperty(required=True)

    @ndb.transactional(xg=True)
    def put(self):
        result = super(User, self).put()
        check_uniqueness(self)
        return result

    @classmethod
    def _post_delete_hook(cls, key, future):
        delete_uniqueness(key)

    uniqueness = [nickname, email]


def check_uniqueness(entity):
    def get_or_insert_unique_category(qualified_name):
        unique_category_key = ndb.Key(UniqueCategory, qualified_name)
        unique_category = unique_category_key.get()
        if not unique_category:
            unique_category = UniqueCategory(id=qualified_name)
            unique_category.put()
        return unique_category_key

    def del_old_value(key, attribute_name, unique_category_key):
        old_entity = key.get()
        if old_entity:
            old_value = getattr(old_entity, attribute_name)
            if old_value != new_value:
                unique_key = ndb.Key(Unique, old_value, parent=unique_category_key)
                unique_key.delete()

    # Main flow
    for unique_attribute in entity.uniqueness:
        attribute_name = unique_attribute._name
        qualified_name = type(entity).__name__ + '.' + attribute_name
        new_value = getattr(entity, attribute_name)
        unique_category_key = get_or_insert_unique_category(qualified_name)
        del_old_value(entity.key, attribute_name, unique_category_key)

        unique = ndb.Key(Unique, new_value, parent=unique_category_key).get()
        if unique is not None and unique.reference_urlsafe != entity.key.urlsafe():
            raise ENotUniqueException(attribute_name)
        else:
            unique = Unique(parent=unique_category_key,
                            id=new_value,
                            reference_urlsafe=entity.key.urlsafe())
            unique.put()


def delete_uniqueness(key):
    list_of_keys = Unique.query(Unique.reference_urlsafe == key.urlsafe()).fetch(keys_only=True)
    if list_of_keys:
        ndb.delete_multi(list_of_keys)
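A short usage sketch of the structure above (the concrete property values are made up for illustration):

user = User(firstName='Ana', surname='Villa', nickname='AVILLA',
            email='avilla@example.com')
user.put()  # also stores Unique entities for 'User.nickname' and 'User.email'

duplicate = User(firstName='Other', surname='Person', nickname='AVILLA',
                 email='other@example.com')
try:
    duplicate.put()  # the transactional put is rolled back on the exception
except ENotUniqueException as e:
    print('Duplicated property: {0}'.format(e.property_name))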

Google App Engine ndb performance on repeated property

Do I pay a penalty on query performance if I choose to query a repeated property? For example:
class User(ndb.Model):
    user_name = ndb.StringProperty()
    login_providers = ndb.KeyProperty(repeated=True)

fbkey = ndb.Key("ProviderId", 1, "ProviderName", "FB")

for entry in User.query(User.login_providers == fbkey):
    pass  # Do something with entry.key
vs
class User(ndb.Model):
    user_name = ndb.StringProperty()

class UserProvider(ndb.Model):
    user_key = ndb.KeyProperty(kind=User)
    login_provider = ndb.KeyProperty()

for entry in UserProvider.query(
        UserProvider.user_key == auserkey,
        UserProvider.login_provider == fbkey):
    pass  # Do something with entry.user_key
Based on the GAE documentation, it seems that the Datastore takes care of indexing and the first, less verbose option would use that index. However, I failed to find any documentation to confirm this.
Edit
The sole purpose of UserProvider in the second example is to create a one-to-many relationship between a user and its login providers. I wanted to understand whether it is worth the trouble of creating a second entity instead of querying on a repeated property. Also, assume that all I need is the key of the User.
No, there is no query-performance penalty. But you'll raise your write costs, because each entry in the repeated property needs to be indexed, and write costs are based on the number of index entries updated.

Django Model ValueError

I'm accessing emails on my email server, taking the body of each email and then applying regular expressions to find the data necessary to populate my Django model.
This all works fine except for one field, which is a foreign key to another model. Despite the value in my email being the same as the one listed in the other model, it fails.
The error:
ValueError: Cannot assign "'Humanities'": "Subject.faculty" must be a "Faculty" instance.
For example, say each school subject has to be part of a faculty. When populating the database via a form, the Subject's faculty field is a drop-down menu of Faculty values/instances, since there is a foreign key relationship defined in my model, i.e. for the faculty field I can choose from Humanities, Art, Design Technology etc.
But when I find the value 'Humanities' in my email and try to add it to the database model, I get the error above.
Can anyone shed any light on this? Am I being stupid, or is it more than a ValueError? To me the values are the same in both cases.
Thank you
More code as requested:
class Faculty(models.Model):
    name = models.CharField(primary_key=True, max_length=50)
    leader = models.CharField(max_length=50)
    email = models.EmailField()
    mailing_list = models.ManyToManyField("Contact", null=True)

class Subject(models.Model):
    name = models.CharField(max_length=50)
    faculty = models.ForeignKey(Faculty, to_field="name")
    faculty_head = models.CharField(max_length=50)
It sounds like you are trying to assign the string "Humanities" to a ForeignKey relationship. This doesn't make sense. You need to either find or create the actual Faculty object with the name "Humanities" and assign it to the Subject. Something like this in your view (depending on how your form is set up):
if form.is_valid():
    faculty_str = form.cleaned_data['faculty']
    (faculty, was_created) = Faculty.objects.get_or_create(name=faculty_str, ...)
    # It's hard to tell if you are using a ModelForm or just a normal Form.
    # Anyway, assume we already have access to the Subject object:
    subject.faculty = faculty
    subject.save()
get_or_create()
Your value is "'Humanities'"; perhaps you meant to search for Humanities (without the quotes).
You need to create a Faculty instance first.
faculty = Faculty(name='', leader='', email='')
faculty.save()
subject.faculty = faculty
subject.save()

How to avoid duplicates in GAE datastore?

Let's say this is the database structure:
class News(db.Model):
    title = db.StringProperty()

class NewsRating(db.Model):
    user = db.IntegerProperty()
    rating = db.IntegerProperty()
    news = db.ReferenceProperty(News)
Each user can leave only one rating for each News. The following code doesn't care about duplicates:
rating = NewsRating()
rating.user = 123456
rating.rating = 1
rating.news = News.get_by_key_name('news-unique-key')
rating.put()
How should I modify this so that it allows only one record per rating.user and rating.news combination? If such a rating already exists, it should be updated with the new value.
Use key names and (possibly) parent entities to keep track. For instance, supposing you have a UserInfo kind, you could do it like this:
class NewsRating(db.Model):
    # No explicit user reference, since the user is the parent entity
    rating = db.IntegerProperty(required=True)
    news = db.ReferenceProperty(News)  # We could get this from the key name, but this is more convenient

rating = NewsRating(parent=current_user, key_name=str(news.key().id()), news=news)
rating.put()
Attempting to add the same rating multiple times will simply overwrite the existing one, or you can use a datastore transaction to add it atomically.
Note that you should almost certainly keep a total of ratings against the News entity, rather than counting up ratings on each request, which will get less efficient as the number of ratings increases.
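A sketch of keeping such a running total on the News entity; rating_total and rating_count are hypothetical properties added here for illustration, updated via the classic db transaction helper:

from google.appengine.ext import db

class News(db.Model):
    title = db.StringProperty()
    rating_total = db.IntegerProperty(default=0)
    rating_count = db.IntegerProperty(default=0)

def add_rating_to_total(news_key, value):
    # Transactionally bump the counters on the News entity itself,
    # so concurrent ratings don't lose updates.
    def txn():
        news = db.get(news_key)
        news.rating_total += value
        news.rating_count += 1
        news.put()
    db.run_in_transaction(txn)

For very frequently rated items you would move to sharded counters, but for modest write rates a single transactional counter like this is enough.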
