I have a GAE application where I'm using geomodel for a location-based model in my database. There are two "types" of this model; however, they need to be geo-queried together as one. The two "types" share a set of base properties, but the second type has a few more. Is there any way I can make those extra properties optional rather than just setting them to bogus values?
Inside the datastore, entities are independent of each other. You can have different entities of the same Kind that have different sets of attributes. This happens very commonly if you add some new attributes on a new version of your app, and the entities that already exist in the datastore won't have those attributes.
In your code though, for any given version you end up declaring a single Model for your Kind. You can choose not to assign values to certain attributes for the different types.
Simply make sure that your code properly handles cases where attributes don't exist, or are set to None.
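For example (a minimal ndb sketch - Place, capacity, etc. are hypothetical names; the same idea applies to db/geomodel models): properties simply stay unset/None for entities that don't use them, so no bogus values are needed.

    from google.appengine.ext import ndb

    class Place(ndb.Model):
        # base properties shared by both "types"
        name = ndb.StringProperty()
        location = ndb.GeoPtProperty()
        # extra property used only by the second "type"; stays None otherwise
        capacity = ndb.IntegerProperty()

    # first "type": the extra property is simply never assigned
    cafe = Place(name='Cafe', location=ndb.GeoPt(52.5, 13.4))
    cafe.put()

    # handle the missing value explicitly when reading
    if cafe.capacity is not None:
        print(cafe.capacity)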
I need to model a system in which clients can apply some configuration to separate entities.
Let me explain with an example:
We have users that have a config tab in their dashboards.
We have a feature that sends notifications to their browsers, and a feature that lets us send them emails.
We also have a pop-up feature.
The user should be able to modify our default notification message, our default email template, and our default texts or elements in the email.
For the pop-up, the user should be able to modify its width and height, change the default texts, modify the background color, and change the button location on the pop-up.
And when I want to send an email to the user, I should apply these settings to the template and then send the email. Also, when the front-end wants to show those pop-ups, it should fetch these configs from my API and apply them.
There will be more and more of these settings in the future, so I can't just specify a settings table with a fixed set of fields; I don't think that's a good idea.
What can I do? How to design and model this scenario? What are the best practices?
Can I use a NoSQL like MongoDB instead of a relational database?
Thanks a lot.
PS:
I am using Django to develop this system.
I have built similar sub-systems before, by hand.
I don't know much about Django, but do some research to see if it has any "out of the box" or community-developed / open-source add-ons that do what you want.
If you have to do it yourself...
A key-value pair is not going to be enough, but it's close. You only need a simple data structure:
ID (how your code recognizes this property), e.g. UserPopupBackgroundColor.
Property name (what the user sees / how they recognize this property in the UI), e.g. "Popup Background Color".
Optional - data type. This is essential if you want to do any sensible input validation, e.g. pop-up height should probably expect an integer and have sensible min/max values on it, whereas an email address is totally different.
Optional - some kind of flag to identify valid properties.
That last flag is a bit of an edge case, but it's useful if you use the subsystem to hold more properties than you want users to have access to. E.g. imagine you want to get a list of all properties and display the list to the user - are there any 'special' ones you need to filter out that they should not see?
You then need somewhere to put the values, and link them to the user:
Row ID / GUID. You could instead use a unique constraint across the UserID and PropertyID, but personally I find a unique row ID a reliable and flexible approach for most scenarios.
UserID.
PropertyID - refers to ID mentioned above.
PropertyValue
Depending on how serious you need to get, you can dump all the values into the one PropertyValue column (assuming you're persisting this in a database) - which means that column needs to be a string - or you can add a column per data type.
If you want to add a column per data type, don't kill yourself. The most I have ever done is:
PropertyValue_text (text/varchar)
PropertyValue_int (or double)
PropertyValue_DateTime (date/time - surprise!!)
So when I say 'column per data type' I mean per data type your stack needs/wants to handle - not the 'optional' data types you define in the logic, since that data type is partially just about input validation.
Obviously, if you use different logical data types, you can map those to data-type columns in the database. The reasons for doing this (using different data types in the database) are:
To reduce the amount of casting you need to do (code to database, and vice versa).
To leverage database-level query features, which can be useful, e.g. find email values and verify them; find expired date values; etc.
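To make that concrete, here's a rough sketch in Django model terms (treat it as a sketch - every name, from Property and UserPropertyValue down to the column names, is illustrative, not prescriptive):

    from django.conf import settings
    from django.db import models

    class Property(models.Model):
        # ID the code uses, e.g. "UserPopupBackgroundColor"
        code = models.CharField(max_length=64, unique=True)
        # name the user sees, e.g. "Popup Background Color"
        display_name = models.CharField(max_length=128)
        # logical data type used for input validation, e.g. 'text', 'int', 'datetime'
        data_type = models.CharField(max_length=16)
        # flag to filter out 'special' properties users should not see
        user_visible = models.BooleanField(default=True)

    class UserPropertyValue(models.Model):
        # the unique row ID comes for free as Django's default primary key
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        property = models.ForeignKey(Property, on_delete=models.CASCADE)
        # one column per data type the stack handles; only the matching one is set
        value_text = models.TextField(null=True, blank=True)
        value_int = models.IntegerField(null=True, blank=True)
        value_datetime = models.DateTimeField(null=True, blank=True)

Validation against data_type (and any min/max rules) then lives in one place, e.g. a form or a clean() method.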
It takes a bit of work to build all this, but it's powerful once you're set up, because you can add any number of properties. If you're using the 'full' solution with explicit data types, then adding new logical data types isn't too painful once you already have a few set up.
Before you design and build this, think about future reuse and any way you can package it up for later - or for community use. Remember it impacts all layers (UI, logic and data).
Final tip - when coming up with the property ID's (that the code uses) make them human readable, and use some sort of naming convention so that adding new ones later is easy and follows a predictable path.
Update - Defining Property and PropertyValue in database tables is an obvious way to go. Depending on the situation you can also define Property in code - especially if you don't add new ones or change existing ones very frequently. Another bonus is that if you're in an MVP situation you can use the code effectively as a stub, and build out the database/persistence part later.
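For the in-code variant, the Property definitions can start life as something as simple as a dictionary stub (hypothetical IDs and keys), with only the values persisted:

    # Hypothetical in-code Property definitions; only the value rows live in
    # the database, keyed by these IDs.
    PROPERTIES = {
        'UserPopupBackgroundColor': {'name': 'Popup Background Color', 'type': 'text'},
        'UserPopupHeight': {'name': 'Popup Height', 'type': 'int', 'min': 100, 'max': 1200},
    }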
I'm new to GAE and I'm still trying to figure things out. We're developing an Android app which uses Cloud Datastore to store images, videos, text, audio, etc. So we now have over 15 types of content objects.
I've been modelling each type of object as a distinct ndb Model class, but I'm wondering if this kind of design could affect performance.
Specifically, wouldn't it be better to write a single simple class (e.g. ContentObject) which simply had a content_type and a few generic fields for string, number and blob?
I guess I'd go for the latter if I had to worry about creating/maintaining tables (or simply if I knew there were regular db tables behind it).
I really like the first option, but I had to ask, just in case.
There are no performance differences to worry about between the two approaches.
With dedicated models you'll have to write a bit more code - each model needs to be handled separately. But it's simpler code, especially if you'll eventually have some properties which only exist for some entities or are handled differently, which would require conditional logic with a generic model.
Building queries is also simpler with dedicated models if there are property differences; using a single model may require filling in unused properties (maybe with default values) if they are used for sorting/filtering query results (entities with missing properties aren't indexed by the respective properties, so they won't show up in the results).
On the other hand you'll need separate queries for each model; you can't obtain results for different kinds in the same query. And you'll need to maintain separate composite indexes for each kind (with a total limit of 200 such indexes per application).
If you're worrying about code duplication, which could also be a reason for which you'd consider a shared model, it's also possible to combine the common properties in a single ndb model class, with a single/common implementation for handling those common properties, and inherit that class in dedicated subclasses handling the differences. Something like this:
    class Content(ndb.Model):
        type = ndb.StringProperty()  # not really needed, cls._get_kind() can be used instead
        blob = ndb.StringProperty()
        # other generic/common content properties and related methods

    class Video(Content):
        has_cc = ndb.BooleanProperty()
        # other video-specific content properties and related methods
But this is just an implementation approach; from the datastore perspective you're still using dedicated models - in the above example a video entity will have a Video kind, not a Content kind.
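Usage then means one query per kind - a hedged sketch, where the Image subclass and the property values are invented for illustration:

    class Image(Content):  # hypothetical sibling subclass
        width = ndb.IntegerProperty()

    # separate queries per kind; ndb's async API at least lets them run concurrently
    videos_future = Video.query(Video.has_cc == True).fetch_async(20)
    images_future = Image.query().fetch_async(20)
    videos = videos_future.get_result()
    images = images_future.get_result()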
There are no tables in the datastore; the only things shared between entities of the same kind are their ndb model (which is specific to the more performant ndb client library - other client libraries don't have one) and the index definitions.
I am just beginning to use backbone.js for a new crash project. My app has a dynamic (data-driven) user menu. Each menu option is a set of graphs/small tables, of mixed types. For example, a Sales Overview menu option can have a page with 2 pie chart objects, 2 line charts, a bar chart, and so on. I don't know up front what the menu options are going to be, nor what each menu option will entail.
I am considering defining a bunch of generic model "classes" by extending Backbone.Model - PieModel, BarModel, DispersionModel, etc. - and corresponding View classes that can render an object of each type - PieView, LineView, and so on. Then I can assemble a page by putting these together as defined by the dynamic configuration. Each model instance's data url can be easily generated on the fly via the dynamic configuration.
My first concern was whether Backbone supports a Collection of mixed Model types. This is prompted by the presence of a "model" property on a Collection - does it assume homogeneity? But the docs also say a collection can hold an ordered set of models... the model attribute can be polymorphic... there's a method to get the "models" held in the collection. Should I be reading this as "model objects"?
A "page" to me really is a collection of such objects. I would like to create a Collection on the fly and populate it with instances of different model types, and then render this through a View. Or, create a View with an array of various model objects and render the View, bypassing the Collection altogether.
I would appreciate your input on the design I have outlined, a good reference on Backbone, and clarity on how to use a Collection in mixed-model cases. Perhaps there is a different, smarter way to handle such scenarios...
Thanks.
Collections only really use their model attribute when plain objects are passed into their adder functions (e.g. add, push). If you take a look at the source, each adder function passes the input through _prepareModel, which checks whether the input is an instance of Backbone.Model. If it's not, it tries to instantiate a new model using the collection's model; otherwise it just returns the input untouched.
So as long as you're always adding real Model objects to your collections you should be fine using different types.
However, if you're planning to use aggregate functions that act on model attributes (e.g. pluck) you may run into errors when the function tries to get at an attribute that doesn't exist in one type of model (though most of the time I think it would just silently fail, which might be what you want).
I am not sure I have 100% understood your scenario; however, I am not convinced you are thinking about this in the right way...
In my opinion, your models should contain the data, and views should represent them. As such, in a sales context you might have a SalesData model which could be displayed in PieView, BarView or TableView. Try to completely separate display logic from data - the type of chart falls under display logic in my opinion.
With the above approach, each page would then contain a set of different views, which you could potentially contain in a master view if you felt the need. Each view would have its own model (or collection depending on how you structure the data), which you can then update/manipulate using the normal Backbone methods.
As far as I know it is not possible for a collection to have different types of models contained within it, but even if it was, I would probably not recommend it as it would complicate the code a lot.
In terms of learning resources, here are a couple:
Learn Backbone.js Completely -- javascriptissexy.com - this one is very thorough but will take some time to get through.
Backbone patterns - much quicker to get you in the right frame of mind.
I know you can set client permissions for a whole dataset like so:
<dataset name="foo" databroker="bar" client-permissions="view"/>
Is there a way to set client-permissions on just one field (similar to how other metadata like "valid" can be set for one field)?
Note: this is in Aviarc 3.5.0, so data bindings are not available.
Update: The use case I have in mind is a search parameters dataset. If I arrive at the search screen from a certain location then one parameter should be locked, because the search results should be filtered by that parameter.
Creating a new databroker for what amounts to a scratch search parameters dataset, just so I can set the read-only property on a single field, is really looking like overkill.
Update: Just to clarify, the dataset doesn't currently have any databroker bound to it, it is just used like a hash to store search parameters.
There isn't currently a way to set client-permissions on a single column/field.
It should be possible to set a datarule on a column which prevents the column being writable by anything other than dataset refreshes.
When I have individual pieces of data which should be read-only but are included in client-writable datasets, I keep copies of the data in non-client-writable datasets and overwrite the client-writable values when they come back.
As mentioned, data rules have the facility to set read-only on individual fields. They can be set on a given field for all rows, or on a field of a single row.
Adam has mentioned that creating a separate databroker for this case would be overkill, which is correct. The DataBinding layer is intended to provide this kind of specialization for certain use cases within your application.
So, you would create a DataBinding, pointing at your search DataBroker, that adds the rule you require to either an existing operation, or a new one that you define. The Dataset is then bound to the DataBinding instead of the DataBroker and from then on is used in the normal way.
The intention is that rules bound by DataBrokers apply to all data of the type supplied through that broker, so they would be rules focusing on data integrity, formatting, etc.
The DataBindings on the other hand are a layer within the application allowing you to bind rules relating to user interaction with the data, as in your example. It is expected that there might be multiple databindings for a given broker, each for a different application path or user task to interact with that data in a different way.
It should be possible to work around this by isolating the parameter I want to be read-only into its own dataset, and setting client-permissions to 'view' just for that parameter/dataset.
This does add the overhead of having to add a special case for that parameter, but I shouldn't need to extend it to any more special cases.
Say, there is a Page that has many blocks associated with it. And each block needs custom rendering, saving and data.
From the code point of view, the simplest approach is to define a different class (hence, model) for each of these block types. Simplified as follows:
    class Page(models.Model):
        name = models.CharField(max_length=64)

    class Block(models.Model):
        page = models.ForeignKey(Page)

        class Meta:
            abstract = True

    class BlockType1(Block):
        other_data = models.CharField(max_length=32)

        def render(self):
            """Some "stuff" here """
            pass

    class BlockType2(Block):
        other_data2 = models.CharField(max_length=32)

        def render(self):
            """Some "other stuff" here """
            pass
But then,
Even with this code, I can't do a query like page.block_set.all() to obtain all the different blocks, irrespective of the block type.
The reason for the above is that each model defines a different table. Working around this with a linking model and generic foreign keys can solve the problem, but it still requires multiple database-table queries per page.
What would be the right way to model it? Can generic foreign keys (or something else) be used in some way to store the data, preferably in the same database table, yet achieve the inheritance paradigms?
Update:
My point was: how can I still get the OOP paradigms to work? Using the same method with so many ifs is not what I wanted to do.
The best solution, it seems to me, is to create separate standard Python classes (preferably in a separate blocks.py) that define a save() which stores the data and its "type" by instantiating the same model. Then create a template tag and a filter that call the render, save, and other methods based on the model's type.
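A hedged sketch of that idea - all names are hypothetical, and it assumes the abstract Block above is replaced with a single concrete model holding generic type and data fields:

    from django.db import models

    class Block(models.Model):
        # one shared table for every block type
        page = models.ForeignKey('Page', on_delete=models.CASCADE)
        type = models.CharField(max_length=32)
        data = models.TextField()

    # blocks.py - plain Python classes, not Models
    class BaseBlock(object):
        block_type = None

        def __init__(self, page, data):
            self.page, self.data = page, data

        def save(self):
            # every block type persists through the same model/table
            Block.objects.create(page=self.page, type=self.block_type, data=self.data)

        def render(self):
            raise NotImplementedError

    class PieBlock(BaseBlock):
        block_type = 'pie'

        def render(self):
            return '<div class="pie">%s</div>' % self.data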
Don't model the page in the database. Pages are a presentation thing.
First -- and foremost -- get the data right.
"And each block needs custom rendering, saving and data." Break this down: you have unique data. Ignore the "block" and "rendering" from a model perspective. Just define the data without regard to presentation.
Seriously. Just define the data in the model without any consideration of presentation or rending or anything else. Get the data model right.
If you confuse the model and the presentation, you'll never get anything to work well. And if you do get it to work, you'll never be able to extend or reuse it.
Second -- only after the data model is right -- you can turn to presentation.
Your "blocks" may be done simply with HTML <div> tags and a style sheet. Try that first.
After all, the model works and is very simple. This is just HTML and CSS, separate from the model.
Your "blocks" may require custom template tags to create more complex, conditional HTML. Try that second.
Your "blocks" may -- in an extreme case -- be so complex that you have to write a specialized view function to transform several objects into HTML. This is very, very rare. You should not do this until you are sure that you can't do this with template tags.
Edit.
"query different external data sources"
"separate simple classes (not Models) that have a save method, that write to the same database table."
You have three completely different, unrelated, separate things.
Model. The persistent model. With the save() method. These do very, very little.
They have attributes and a few methods. No "query different external data sources". No "rendering in HTML".
External Data Sources. These are ordinary Python classes that acquire data.
These objects (1) get external data and (2) create Model objects. And nothing else. No "persistence". No "rendering in HTML".
Presentation. These are ordinary Django templates that present the Model objects. No external query. No persistence.
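A hedged sketch of that separation, with every name invented for illustration:

    from django.db import models

    class Reading(models.Model):
        # 1. persistent model: attributes and a few methods, nothing else
        source = models.CharField(max_length=64)
        value = models.FloatField()

    class ReadingFetcher(object):
        # 2. ordinary Python class: acquires external data, creates Model objects
        def fetch(self, raw_rows):
            # raw_rows stands in for whatever the external source returns
            return [Reading(source=r['source'], value=r['value']) for r in raw_rows]

    # 3. presentation: an ordinary Django template, e.g.
    #    {% for reading in readings %}<li>{{ reading.value }}</li>{% endfor %}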
I just finished a prototype of a system that has this problem in spades: a base Product class and about 200 detail classes that vary wildly. There are many situations where we are doing general queries against Product but then want to deal with the subclass-specific details during rendering, e.g. get all Products from Vendor X, but display each group with slightly different templates based on its specific subclass.
I added hidden fields for a GenericForeignKey to the base class and it auto-fills the content_type & object_id of the child class at save() time. When we have a generic Product object we can say obj = prod.detail and then work directly with the subclass object. Took about 20 lines of code and it works great.
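Roughly, the mechanism looks like this - a hedged reconstruction rather than the actual 20 lines (field names invented; imports shown for current Django, older versions used django.contrib.contenttypes.generic):

    from django.db import models
    from django.contrib.contenttypes.models import ContentType
    from django.contrib.contenttypes.fields import GenericForeignKey

    class Product(models.Model):
        name = models.CharField(max_length=64)
        # hidden fields, auto-filled at save() time with the child class info
        detail_type = models.ForeignKey(ContentType, null=True, editable=False,
                                        on_delete=models.CASCADE)
        detail_id = models.PositiveIntegerField(null=True, editable=False)
        detail = GenericForeignKey('detail_type', 'detail_id')

        def save(self, *args, **kwargs):
            if type(self) is not Product:
                # record the concrete subclass so prod.detail works later
                self.detail_type = ContentType.objects.get_for_model(type(self))
            super(Product, self).save(*args, **kwargs)
            if self.detail_type_id and self.detail_id != self.pk:
                # with multi-table inheritance the child row shares this pk
                Product.objects.filter(pk=self.pk).update(detail_id=self.pk)
                self.detail_id = self.pk

    class CableProduct(Product):  # stands in for one of the ~200 detail classes
        length_m = models.FloatField()

    # given a generic Product row:
    prod = Product.objects.get(pk=1)
    specific = prod.detail  # the CableProduct (or other subclass) instance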
The one gotcha we ran into during testing was that manage.py dumpdata followed by manage.py loaddata kept throwing IntegrityErrors. It turns out this is a well-known problem, and a fix is expected in the 1.2 release. We work around it by using MySQL commands to dump/reload the test dataset.