Which is better for validation: in the serializer or in the model (inside the models.py save method) in Django? - django-models

I am confused about whether it is better to do validation in the serializer or in the model (inside the models.py save method) in Django.
Model code (the save() override in models.py):
def save(self, force_insert=False, force_update=False, using=None, update_fields=None):
    if self.x > self.y:
        raise BadRequest(details={'message': 'x should be less than y.'})
    return super(xx, self).save()
or
Serializer code (the validate() method):
def validate(self, attrs):
    if attrs['x'] > attrs['y']:
        raise BadRequest(details={'message': 'x should be less than y.'})
    return attrs
Which is the best practice?
And how can we achieve a thick model and a thin view?

There is no single best method. Both approaches are valid depending on your architecture.
I personally try to add any validation such as this one directly on the model. That way, no matter where the data comes from, it will always get validated. For example, you may wish to apply this validation in the Django admin as well - if the rule lived only in a serializer, an admin request would bypass it, since the admin does not go through the serializer.
Working with multiple developers is also a consideration. A developer less familiar with the project may not make use of the serializer that contains the validation.
Again, it depends on the architecture; sometimes it makes sense to have the validation on the serializer or view. I would always consider adding it on the model first, to prevent data corruption from anything that touches your model.
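For illustration, here is a minimal sketch of the model-first approach using Django's clean() hook, so the rule also runs in the admin and in ModelForms (the x and y fields come from the question; the model name and everything else here are assumptions, not the poster's code):

# models.py - a hedged sketch, not the poster's actual model
from django.core.exceptions import ValidationError
from django.db import models

class Interval(models.Model):  # hypothetical model name
    x = models.IntegerField()
    y = models.IntegerField()

    def clean(self):
        # full_clean() calls this; the admin and ModelForms call full_clean().
        if self.x > self.y:
            raise ValidationError({'x': 'x should be less than y.'})

    def save(self, *args, **kwargs):
        # Enforce the rule for plain .save() calls too, keeping the model "thick".
        self.full_clean()
        return super().save(*args, **kwargs)

On the API side, the serializer's validate() can run the same check (or call the instance's full_clean() and convert the error), so the rule lives in one place and the view stays thin.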
Here's more reading if you wish.

Related

Cakephp 3: Calling Table functions from Entity is a bad or good idea?

When I have some entity and I want to save, validate or delete it, why do I have to call the Table method? For example:
$articlesTable = TableRegistry::get('Articles');
$article = $articlesTable->get(12);
$article->title = 'CakePHP is THE best PHP framework!';
$articlesTable->save($article);
Why isn't it like this:
$article->save();
or $article->delete();
It's very simple to implement. On my Article Entity I can do it like this:
namespace App\Model\Entity;

use Cake\ORM\Entity;
use Cake\ORM\TableRegistry;

class Article extends Entity
{
    public function save()
    {
        $table = TableRegistry::get($this->source());
        $table->save($this);
    }
}
It's working, but I would like to know if it's bad practice or a good idea.
Thanks in advance :)
TL;DR: Technically you can do it, at the high price of tight coupling (which is considered bad practice).
Explanation: I wouldn't consider this best practice, because the entity is supposed to be a dumb data object. It should not contain any business logic. Also, usually it's not just a simple save call; there is some follow-up logic to implement: handle the success or failure of the save and act accordingly by updating the UI or sending a response. You also effectively couple the entity to a specific table, turning a dumb data object into an object that implements business logic.
Technically you can do it this way, and I think there are frameworks and ORMs that do, but I'm not a fan of coupling things. I prefer to write code as loosely coupled as possible. See also SoC (separation of concerns).
Also, I don't think you'll save any lines of code with your approach; you just move them to a different place. I don't see any benefit that would justify coupling the entity to the business logic.
If you do go down that path, I would implement the method as a trait, or put it in a base entity class to inherit from, to avoid repeating the code.

Adding validation to controllers in Cake 3.x

In CakePHP 3.x is it acceptable to add validation rules within a controller?
I've read http://book.cakephp.org/3.0/en/core-libraries/validation.html but it doesn't actually say where you (can / should) add your methods.
I understand that typically these go in src/Model/Table/ModelName.php. However I'm trying to validate a form which is not tied to a particular database table and doesn't need a corresponding model.
I'm familiar with Cake 2.x, where I would typically do this in the controller, or possibly add a model with $useTable = false. In this case the simplest method seems to be adding the rules directly in the controller, but I wasn't sure whether this is bad practice. If the rules don't go in the controller, where should they be put?
Context - this is a form where the user is doing a search. It requires some input, and I'm trying to validate 3 fields: email, quantity and a postcode. Cake's validator has built-in features for the first two, but in the case of the postcode I'll need to add a custom method.
Any advice appreciated.
In CakePHP 3.x is it acceptable to add validation rules within a controller?
Technically possible, but I would consider it bad practice.
I understand that typically these go in src/Model/Table/ModelName.php. However I'm trying to validate a form which is not tied to a particular database table and doesn't need a corresponding model.
There is a whole section called "Modelless Forms" in the book that covers that use case.

NDB Jinja2 best way to access KeyProperty

I have this model:
class Team(ndb.Model):
    name = ndb.StringProperty()
    password = ndb.StringProperty()
    email = ndb.StringProperty()

class Offer(ndb.Model):
    team = ndb.KeyProperty(kind=Team)
    cut = ndb.StringProperty()
    price = ndb.IntegerProperty()

class Call(ndb.Model):
    name = ndb.StringProperty()
    called_by = ndb.KeyProperty(kind=Team)
    offers = ndb.KeyProperty(kind=Offer, repeated=True)
    status = ndb.StringProperty(choices=['OPEN', 'CLOSED'], default="OPEN")
    dt = ndb.DateTimeProperty(auto_now_add=True)
I have this view:
class MainHandler(webapp2.RequestHandler):
    def get(self):
        calls_open = Call.query(Call.status == "OPEN").fetch()
        calls_past = Call.query(Call.status == "CLOSED").fetch()
        template_values = dict(open=calls_open, past=calls_past)
        template = JINJA_ENVIRONMENT.get_template('templates/index.html')
        self.response.write(template.render(template_values))
and this small test template:
{% for call in open %}
<b>{{call.name}} {{call.called_by.get().name}}</b>
{% endfor %}
Now, with the get() it works perfectly.
My question is: is this correct? Is there a better way to do it?
Personally I find it strange to get() the values in the template, and I would prefer to fetch them inside the view.
My idea was to:
create a new list res_open_calls = []
for each call in calls_open, call to_dict(): dict_call = call.to_dict()
then assign the team to the dict: dict_call['team'] = call.team.get().to_dict()
append the object to the list: res_open_calls.append(dict_call)
and then return this just-generated list.
This is the gist I wrote (for a modified version of the code): https://gist.github.com/esseti/0dc0f774e1155ac63797#file-call_offers_calls
It seems cleaner, but a bit more expensive (a second list has to be generated); a sketch of the idea is shown below. Is there something better/cleverer to do?
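For illustration, a minimal sketch of that view-side idea, written against the Call model shown above (it resolves the called_by key, since the Call model shown has no team field; everything else is an assumption, not the gist's code):

# inside MainHandler.get() - resolve keys in the view so the template never calls get()
calls_open = Call.query(Call.status == "OPEN").fetch()

res_open_calls = []
for call in calls_open:
    dict_call = call.to_dict()
    # replace the KeyProperty with the plain dict of the referenced Team
    dict_call['team'] = call.called_by.get().to_dict()
    res_open_calls.append(dict_call)

template_values = dict(open=res_open_calls, past=[])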
The OP is clearly showing code very different from the one they're using: they show called_by as a StringProperty so calling get on it should crash, they talk about a call.team that doesn't exist in the code they show... anyway, I'm trying to guess what they actually have, because I find the underlying idea is important.
The OP, IMHO, is correct to be uncomfortable about having DB operations right in a Jinja2 template, which would be best limited to presentation-level issues. I'll assume (guess!) that part of the Call model is:
class Call(ndb.Model):
    team = ndb.KeyProperty(kind=Team)
and the relevant part of the Jinja2, currently working for the OP, is:
{{call.team.get().name}}
A better structure might then be:
class Call(ndb.Model):
    team = ndb.KeyProperty(kind=Team)

    @property
    def team_name(self):
        return self.team.get().name
and in the template just {{call.team_name}}.
This still performs the DB operation during template expansion, but it does so on the Python code side of things, rather than the Jinja2 side of things -- better than embodying so much detail about the model's data architecture in a template that should focus on presentation only.
Alternatively, if a Call instance is .put rarely and displayed often, and its team does not change name, one could, so to speak, cache the value in a ComputedProperty:
class Call(ndb.Model):
    team = ndb.KeyProperty(kind=Team)

    def _team_name(self):
        return self.team.get().name
    team_name = ndb.ComputedProperty(_team_name)
However, this latter choice is inferior (as it involves more storage space, does not save execution time, and complicates actual interactions with the datastore) unless some queries for Call entities also need to query on team_name (in which latter case it would be a must).
If one did choose this alternative, the Jinja2 template would still use {{call.team_name}}: this hints at why it's best to keep in templates only logic strictly connected to presentation -- it leaves more degrees of freedom for implementing attributes and properties on the Python code side of things, without needing to change the templates. "Separation of concerns" is an excellent principle in programming.
The snippet posted elsewhere suggests a higher degree of complication, where Call is indeed as shown but then of course there is no call.team as shown repeatedly in the question -- rather, a double indirection via call.offers and each offer.team. This makes sense in terms of entity-relationship modeling but can be heavy-going to implement in the essentially "normalized" terms the snippet suggests in any NoSQL database, including GAE's datastore.
If teams don't change names, and calls don't change their list of offers, it might show better performance to denormalize the model (storing in Call the technically redundant information that, in the snippet, is fetched by running through the double indirection) -- e.g. by structured properties, https://cloud.google.com/appengine/docs/python/ndb/properties#structured , to embed copies of the Offer objects in Call entities, and a copy of the Team object (or even just the team's name) in the Offer entity.
Like all de-normalizing, this can take a few extra bytes per entity in the datastore, but nevertheless could amply pay for it by minimizing the number of datastore accesses needed at fetch time, depending on the pattern of accesses to the various entities and properties.
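For illustration, a hedged sketch of that denormalization (not the OP's code; the OfferCopy name and the choice of which fields to copy are assumptions):

from google.appengine.ext import ndb

class OfferCopy(ndb.Model):            # embedded copy, never stored on its own
    team_name = ndb.StringProperty()   # denormalized copy of Team.name
    cut = ndb.StringProperty()
    price = ndb.IntegerProperty()

class Call(ndb.Model):
    name = ndb.StringProperty()
    called_by = ndb.KeyProperty(kind=Team)
    # embed the offers instead of a list of keys, so rendering a Call
    # needs no extra datastore round trips
    offers = ndb.StructuredProperty(OfferCopy, repeated=True)
    status = ndb.StringProperty(choices=['OPEN', 'CLOSED'], default="OPEN")
    dt = ndb.DateTimeProperty(auto_now_add=True)

The trade-off is exactly the one described above: the Offer and Team data are copied into each Call, so they must be kept in sync if they ever change.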
However, by now we're straying far away from the question, which is about what to put in the template and what on the Python side. Optimizing datastore patterns is a separate issue well worth questions of its own.
Summarizing my stance on the latter, core issue of Python code vs template as residence for logic: data-access logic should be on the Python code side, ideally embedded in Model classes (using property for just-in-time access, possibly all the way to denormalization at entity-building or perhaps at entity-finalization time); Jinja2 templates (or any other kind of pure presentation layer) should only have logic directly needed for presentation, not for data access (nor business logic either, of course).

Data validation in Silverlight 4 - Entity level validation vs ViewModel validation

I'm working with Silverlight 4, MVVM, WCF RIA and Entity Framework. As far as I know, there are two ways to do data validation: the first is entity-level validation, the second is writing the validation logic in the ViewModel.
Currently I put the validation logic inside the ViewModel, so I want to know the pros and cons of each approach.
DataAnnotation attributes can be applied to the ViewModel too, but the problems are the same:
Exceptions are thrown on validation errors - noise in the output window.
Setting some default value on a property throws an exception and sets an invalid state.
It is impossible to validate a model completely and receive all its errors.
It is impossible to add or clear errors in code.
The advantage is the simplicity of data annotations in comparison with other approaches.
On the other hand, the INotifyDataErrorInfo interface allows you to perform validation asynchronously. As mentioned in another answer, if you want to check whether a username already exists in the database, you can send a request to the service and add an error to the UI after receiving an asynchronous callback.
I prefer to use INotifyDataErrorInfo, and although it requires more code than data annotations, that code can be reduced by creating a sort of generic validator class:
this.Validator = new ModelValidator<ProfileViewModel>(this);
this.Validator.AddValidationFor(() => this.SelectedCountry).NotNull().Show("Select country");
this.PropertyChanged += new PropertyChangedEventHandler(this.ValidateChangedProperty);
It's a bit of a cop out but you'll probably end up needing to do both types of validation.
Entity level validation is useful as you only have to define it in one place and you get UI validation messages and entity validation before it gets saved to the database (assuming that data is being saved to a db).
The trouble is that entity level validation is fairly basic and you'll probably need to make some service calls to do custom validation (for example, we validate that a user exists on our network for a provided username in our create user form). This is where you need to do validation in your VM.

Django models generic modelling

Say there is a Page that has many blocks associated with it, and each block needs custom rendering, saving and data.
The simplest thing, from the code point of view, is to define a different class (hence, model) for each of these block types. Simplified, it looks as follows:
class Page(models.Model):
    name = models.CharField(max_length=64)

class Block(models.Model):
    page = models.ForeignKey(Page)

    class Meta:
        abstract = True

class BlockType1(Block):
    other_data = models.CharField(max_length=32)

    def render(self):
        """Some "stuff" here """
        pass

class BlockType2(Block):
    other_data2 = models.CharField(max_length=32)

    def render(self):
        """Some "other stuff" here """
        pass
But then:
Even with this code, I can't do a query like page.block_set.all() to obtain all the different blocks, irrespective of the block type.
The reason is that each model defines a different table. Working around this with a linking model and generic foreign keys can solve the problem, but it still means multiple database table queries per page.
What would be the right way to model this? Can generic foreign keys (or something else) be used to store the data, preferably in the same database table, and yet achieve the inheritance paradigm?
Update:
My point was: how can I still get the OOP paradigms to work? Using the same method with so many ifs is not what I wanted to do.
The best solution, it seems to me, is to create a separate standard Python class (preferably in a separate blocks.py) that defines a save() which stores the data and its "type" by instantiating the same model, and then to create a template tag and a filter that call the render, save, and other methods based on the model's type.
Don't model the page in the database. Pages are a presentation thing.
First -- and foremost -- get the data right.
"And each block needs custom rendering, saving and data." Break this down: you have unique data. Ignore the "block" and "rendering" from a model perspective. Just define the data without regard to presentation.
Seriously. Just define the data in the model without any consideration of presentation or rending or anything else. Get the data model right.
If you confuse the model and the presentation, you'll never get anything to work well. And if you do get it to work, you'll never be able to extend or reuse it.
Second -- only after the data model is right -- you can turn to presentation.
Your "blocks" may be done simply with HTML <div> tags and a style sheet. Try that first.
After all, the model works and is very simple. This is just HTML and CSS, separate from the model.
Your "blocks" may require custom template tags to create more complex, conditional HTML. Try that second.
Your "blocks" may -- in an extreme case -- be so complex that you have to write a specialized view function to transform several objects into HTML. This is very, very rare. You should not do this until you are sure that you can't do this with template tags.
Edit.
"query different external data sources"
"separate simple classes (not Models) that have a save method, that write to the same database table."
You have three completely different, unrelated, separate things.
Model. The persistent model. With the save() method. These do very, very little.
They have attributes and a few methods. No "query different external data sources". No "rendering in HTML".
External Data Sources. These are ordinary Python classes that acquire data.
These objects (1) get external data and (2) create Model objects. And nothing else. No "persistence". No "rendering in HTML".
Presentation. These are ordinary Django templates that present the Model objects. No external query. No persistence.
I just finished a prototype of a system that has this problem in spades: a base Product class and about 200 detail classes that vary wildly. There are many situations where we do general queries against Product, but then want to deal with the subclass-specific details during rendering. E.g. get all Products from Vendor X, but display them with slightly different templates for each group from a specific subclass.
I added hidden fields for a GenericForeignKey to the base class and it auto-fills the content_type & object_id of the child class at save() time. When we have a generic Product object we can say obj = prod.detail and then work directly with the subclass object. Took about 20 lines of code and it works great.
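A rough sketch of how such a setup might look (the Product and detail names come from this answer; the rest, including the Book subclass and the exact save() logic, is an assumption written against current Django, not the answerer's 20 lines):

from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=64)
    # hidden bookkeeping that points at the concrete subclass row
    content_type = models.ForeignKey(ContentType, null=True, editable=False,
                                     on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField(null=True, editable=False)
    detail = GenericForeignKey('content_type', 'object_id')

    def save(self, *args, **kwargs):
        # record which concrete subclass this row belongs to
        if self.content_type_id is None:
            self.content_type = ContentType.objects.get_for_model(type(self))
        super().save(*args, **kwargs)
        # with multi-table inheritance the child shares the parent's pk,
        # so the generic pointer can simply refer back to this row
        if self.object_id != self.pk:
            self.object_id = self.pk
            super().save(update_fields=['object_id'])

class Book(Product):                   # one of the many detail classes
    pages = models.IntegerField(default=0)

With something like this, a query over Product can hand each row to a template, and product.detail comes back as the concrete subclass instance (a Book here) for subclass-specific rendering.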
The one gotcha we ran into during testing was that manage.py dumpdata followed by manage.py loaddata kept throwing Integrity Errors. Turns out this is a well-known problem and a fix is expected in the 1.2 release. We work around it by using mysql commands to dump/reload the test dataset.
