Pros and Cons: Catching database errors vs. Validating user input

I have an application with a database that has a lot of foreign keys.
A lot of the data being inserted comes from users.
What I was wondering is: is it better to run validation on user input before inserting, or to insert and just write error-catching code?
Are there performance/stylistic/security benefits to either?
The knee-jerk response seems to be to do both, but doing validation first seems the safer option if only one of the two were done.

This is a brilliant question - and, in my view, it touches on one of the benefits of using an object-relational mapper.
In general, the relational tools your database provides are all about protecting data validity - "customer" must have an "account manager" relationship, "user_name" may not be null, "user_id" must be unique etc.
Those tools are necessary, but not sufficient, to validate the data users input into the database.
Your front-end/middle tier code has its own rules - which are usually not expressed in relational terms; in most modern development languages, they're about objects and the relationships between the objects, or their attributes - for instance, a phone number must contain numbers, a name must start with a capital letter.
I'm assuming your users don't interact with the database through SQL - that you've built some kind of user interface which allows them to look up associations (and thus populate the foreign key).
In this case, my preferred architecture is:
Validate as early as you can - in JavaScript for web apps, or in the GUI code for desktop apps. This reduces the number of round trips, and thus creates a more responsive user experience.
Have each layer implement validation for its key domain logic - your classes should validate their expectations, your database should validate the foreign keys and nullability.
Don't depend on "higher" levels to validate - you can't predict how your code is going to be used in the future; a class you've written for one application may get re-used by another.
Work out a way to keep the validation rules in sync across the layers - either through technology or process.
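A minimal sketch of this layered approach in Python, using the built-in sqlite3 module. The `User` class, table, and rules here are illustrative assumptions, not taken from the question:

```python
import sqlite3

class User:
    """Object layer: validates its own expectations, in object terms."""
    def __init__(self, name, phone):
        if not name or not name[0].isupper():
            raise ValueError("name must start with a capital letter")
        if not phone.isdigit():
            raise ValueError("phone number must contain only digits")
        self.name = name
        self.phone = phone

# Database layer: enforces relational rules (NOT NULL, UNIQUE).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
conn.execute("""CREATE TABLE users (
    user_id   INTEGER PRIMARY KEY,
    user_name TEXT NOT NULL UNIQUE,
    phone     TEXT NOT NULL
)""")

u = User("Alice", "5551234")  # passes object-level validation
conn.execute("INSERT INTO users (user_name, phone) VALUES (?, ?)",
             (u.name, u.phone))

# The database still guards relational rules the class knows nothing about:
try:
    conn.execute("INSERT INTO users (user_name, phone) VALUES (?, ?)",
                 ("Alice", "5559999"))  # duplicate user_name
except sqlite3.IntegrityError as e:
    print("rejected by the database:", e)
```

Note that neither layer depends on the other: the class cannot check uniqueness, and the schema cannot check capitalization, which is why both are needed.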

I don't think it should be a question of which. I always do both, as you suggested. As far as pros and cons, I will say that I think it is easier to find user input problems and handle them in a user friendly manner earlier in the life cycle, as opposed to waiting until the database throws back an error. However, if your validation does not catch everything or if there is a database error unrelated to user input, it is still important to catch the error at some point.

No matter what, you'll need to trap errors. Who knows what will happen? A SQL timeout? No amount of data validation will help with those kinds of problems.
Like most problems, they are cheapest to catch early: don't let users enter invalid data, and validate data before sending it to the server. Not business logic per se, but checks that dates don't contain alpha characters, etc.
In a mid-tier, I would still validate inputs. Another developer may call your business logic without vetting the inputs. I prefer my business logic in this area because it is easier to write and debug than creating logic in stored procedures.
On the database, I prefer constraints when rules are absolutes such as you can't have an internal user without a user id; the reason is to prevent other developers from writing scripts that would do things that should not be allowed.
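As a hedged illustration of the mid-tier point above - business logic re-validating its inputs even though the UI should already have screened them, since another developer may call it directly - here is a small Python sketch (the function name and date rule are hypothetical):

```python
import datetime

def parse_order_date(raw):
    """Mid-tier check: re-validate even if the UI already did.

    Another caller may bypass the UI entirely, so the business
    logic cannot assume its inputs were vetted.
    """
    try:
        return datetime.date.fromisoformat(raw)
    except ValueError:
        raise ValueError(f"invalid date: {raw!r}")
```

A date like `"2024-01-31"` parses; a string with the wrong shape raises a clear error here, long before it could surface as an opaque database failure.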

Related

Why repeat database constraints in models?

In a CakePHP application, for unique constraints that are accounted for in the database, what is the benefit of having the same validation checks in the model?
I understand the benefit of having JS validation, but I believe this model validation makes an extra trip to the DB. I am 100% sure that certain validations are made in the DB so the model validation would simply be redundant.
The only benefit I see is the app recognizing the mistake and adjusting the view for the user accordingly (repopulating the fields and showing an error message on the appropriate field, improving the UX) - but this could be achieved with a constraint naming convention, so the app could understand what the problem was with the save (is there an existing method to do this now?)
Quicker response times, less database load. The further out toward the client you can do validation, e.g. in JavaScript, the quicker it is. The major con is having to implement the same rules in multiple layers.
If the database constraints are coded by one person and the rest of the code is coded by another, they really shouldn't trust each other completely. Check things at boundaries, especially if they represent organizational/people boundaries: e.g. user to application, one developer's module to another, or one corporate department to another.
Don't forget the matter of portability. Enforcing validation in the model keeps your application database-agnostic. You can program an application against a SQLite database, and then deploy to MySQL... oh wait, you don't have that... PostgreSQL? No? Oh, Oracle, fine.
Also, in production, if a database error happens on a typical controller action that saves and then redirects, your user will be stuck staring at a blank white page (since errors are off, there was no view to output, and the redirect never happened). Basically, database errors are turned off in production mode as they can give insight into the DB schema, whereas model validation errors remain enabled as they are user-friendly.
You have a point though, is it possible to capture these database errors and do something useful with them? Currently no, but it would be nice if CakePHP could dynamically translate them into failed model validation rules, preventing us from repeating ourselves. Different databases throw different looking errors, so each DBO datasource would need updated to support this before it could happen.
Just about any benefit that you might gain would probably be canceled out by the hassle of maintaining the constraints in duplicate. Unless you happen to have an easy mechanism for specifying them in a single location and keeping them in sync, I would recommend sticking with one location or the other.
Validation in CakePHP happens before the save or update query is sent to the database, and therefore it reduces database load. You are wrong in your belief that model validation makes an extra trip to the database: by default, validation occurs before save.
Ideally the design of the model should come first (based on user stories, use cases, etc.) with the database schema deriving from the model. Then the database implementation can either be generated from the model (explicitly tying both to a single source), or the database constraints can be designed based on relational integrity requirements (which are conceptually different from, and generally have a different granularity and vocabulary than, the model, although in many cases there is a mapping of some kind).
I generally have in mind only relational integrity requirements for database constraints. There are too many cases where business constraints are incongruent with, more transitory than, and finer-grained than what the database designer knows about; and they change more frequently over time and across application modules.
# le dorfier
Matters of data integrity and matters of business rules are one and the same thing (modulo the kind of necessarily procedural "business rules" stuff, such as "when a new customer is entered, a mail with such-and-so content must be sent to that customer's email address").
The relational algebra is generally accepted to be "expressively complete" (I've even been told there is formal proof that RA plus TC is Turing complete). Therefore, RA (plus TC) can express "everything that is wrong" (wrong in the sense that it violates some/any arbitrary "business rule").
So enforcing a rule that "the set of things that are wrong must remain empty" boils down to the dbms enforcing each possible conceivable (data-related, i.e. state-related) business rule for you, without any programmer having to write the first byte of code to achieve such business rule enforcement.
You bring up "business rules change frequently" as an argument. If business rules change, then which scenario allows the quickest adaptation of the system to such a change: only having to change the corresponding RA expression that enforces the constraint/business rule, or having to find out where in the application code the rule is enforced and change all of that?
# bradley harris.
Correct. I'd vote you up if voting were available to me. But you forgot to mention that since one can never be really certain that some database will never be needed by some other app, the only sensible place to do business-rule enforcement is inside the DBMS.

How much business logic should be in the database?

I'm developing a multi-user application which uses a (PostgreSQL) database to store its data. I wonder how much logic I should shift into the database?
e.g. When a user is going to save some data he just entered. Should the application just send the data to the database and the database decides if the data is valid? Or should the application be the smart part in the line and check if the data is OK?
In the last (commercial) project I worked on, the database was very dumb: no constraints, no views, etc. - everything was ruled by the application. I think that's very bad, because every time a certain table was accessed in the code, the same code to check whether the access was valid was repeated over and over again.
By shifting the logic into the database (with functions, triggers and constraints), I think we could save a lot of code in the application (and avoid a lot of potential errors). But I'm afraid that putting too much of the business logic into the database will be a boomerang and someday it will be impossible to maintain.
Are there some real-life-approved guidelines to follow?
If you don't need massive distributed scalability (think companies with as much traffic as Amazon or Facebook etc.) then the relational database model is probably going to be sufficient for your performance needs. In which case, using a relational model with primary keys, foreign keys, constraints plus transactions makes it much easier to maintain data integrity, and reduces the amount of reconciliation that needs to be done (and trust me, as soon as you stop using any of these things, you will need reconciliation -- even with them you likely will due to bugs).
However, most validation code is much easier to write in languages like C#, Java, Python etc. than it is in languages like SQL because that's the type of thing they're designed for. This includes things like validating the formats of strings, dependencies between fields, etc. So I'd tend to do that in 'normal' code rather than the database.
Which means that the pragmatic solution (and certainly the one we use) is to write the code where it makes sense. Let the database handle data integrity because that's what it's good at, and let the 'normal' code handle data validity because that's what it's good at. You'll find a whole load of cases where this doesn't hold true, and where it makes sense to do things in different places, so just be pragmatic and weigh it up on a case by case basis.
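A sketch of that split in Python with the standard-library sqlite3 module. The schema, regex, and names are illustrative assumptions: format rules (validity) live in 'normal' code, relational rules (integrity) live in the schema:

```python
import re
import sqlite3

# Validity: string formats are easy to express in a general-purpose language.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_customer(email, account_manager_id):
    """Application-side format check; the FK check is left to the database."""
    if not EMAIL_RE.match(email):
        raise ValueError("malformed email address")

# Integrity: primary/foreign keys and NOT NULL belong in the schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE managers (id INTEGER PRIMARY KEY);
CREATE TABLE customers (
    id      INTEGER PRIMARY KEY,
    email   TEXT NOT NULL,
    manager INTEGER NOT NULL REFERENCES managers(id)
);
""")

validate_customer("a@example.com", 1)  # format is fine
try:
    # ...but no such manager exists, and the database catches that:
    conn.execute("INSERT INTO customers (email, manager) VALUES (?, ?)",
                 ("a@example.com", 99))
except sqlite3.IntegrityError as e:
    print("integrity violation:", e)
```

Each check sits where it is cheapest to express and hardest to bypass, which is the pragmatic division the answer describes.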
Two cents: if you choose smart, remember not to stray into "too smart" territory. The database should not deal with inconsistencies that are inappropriate for its level of understanding of the data.
Example: suppose you want to insert a valid (checked with a confirmation mail) email address in a field. The database could check if the email actually conforms to a given regular expression, but asking the database to check if the email address is valid (e.g. checking if the domain exists, sending the email and handling the response) is a bit too much.
It's not meant to be a real-world example - just to illustrate that a smart database has limits to its smartness anyway: if a nonexistent email address gets into it, the data is still not valid, but as far as the database is concerned it's fine. As in the OSI model, everything should handle data at its level of understanding. Ethernet does not care whether it's transporting ICMP or TCP, or whether they are valid or not.
I find that you need to validate in both the front end (either the GUI client, if you have one, or the server) and the database.
The database can easily assert for nulls, foreign key constraints etc. i.e. that the data is the right shape and linked up correctly. Transactions will enforce atomic writes of this. It's the database's responsibility to contain/return data in the right shape.
The server can perform more complex validations (e.g. does this look like an email, does this look like a postcode etc.) and then re-structure the input for insertion into the database (e.g. normalise it and create the appropriate entities for insertion into the tables).
Where you put the emphasis on validation depends to some degree on your application. e.g. it's useful to validate a (say) postcode in a GUI client and immediately provide feedback, but if your database is used by other applications (e.g. an application to bulkload addresses) then your layer surrounding the database needs to validate as well. Sometimes you end up providing validation in two different implementations (e.g. in the above, perhaps a Javascript front-end and a Java DAO backend). I've never found a good strategic solution to this.
Using the common features of relational databases, like primary and foreign key constraints, datatype declarations, etc. is good sense. If you're not going to use them, then why bother with a relational db?
That said, all data should be validated for both type and business rules before it hits the db. Type validation is just defensive programming- assume users are out to hack you and then you'll get fewer unpleasant surprises. Business rules are what your application is all about. If you make them part of the structure of your db they become much more tightly bound to how your app works. If you put them in the application layer, it's easier to change them if business requirements change.
As a secondary consideration: clients often have less choice about which db they use (PostgreSQL, MySQL, Oracle, etc.) than which application language they have available. So if there is a good chance that your application will be installed on many different systems, your best bet is to make sure that your SQL is as standard as possible. This may well mean that constructing language-agnostic db features like triggers, etc. will be more trouble than putting that same logic in your application layer.
It depends on the application :)
For some applications the dumb database is the best. For example, Google's applications run on a big dumb database that can't even do joins, because they need amazing scalability to be able to serve millions of users.
On the other hand, for some internal enterprise app it can be beneficial to go with a very smart database, as those are often used by more than just one application and therefore you want a single point of control - think of an employees database.
That said if your new application is similar to the previous one, I would go with dumb database. In order to eliminate all the manual checks and database access code I would suggest using an ORM library such as Hibernate for Java. It will essentially automate your data access layer but will leave all the logic to your application.
Regarding validation it must be done on all levels. See other answers for more details.
One other item of consideration is deployment. We have an application where the deployment of database changes is actually much easier for remote installations than the actual code base is. For this reason, we've put a lot of application code in stored procedures and database functions.
Deployment is not your #1 consideration, but it can play an important role in deciding between various choices.
This is as much a people question as it is a technology question. If your application is the only application that's ever going to manipulate the data (which is rarely the case, even if you think that's the plan), and you've only got application coders to hand, then by all means keep all the logic in the application.
On the other hand, if you've got DBAs who can handle it, or you know that more than one app will need to have its access validated, then managing data actually in the database makes a lot of sense.
Remember, though, that the best things for the database to be validating are a) the types of the data and b) relational constraints, which anything calling itself an RDBMS should have a handle on anyway.
If you've got any transactions in your application code, it's also worthwhile asking yourself whether they should be pushed to the database as a stored procedure so that it's impossible for them to be incorrectly reimplemented elsewhere.
I do know of shops where the only access allowed to the database is via stored procedures, so the DBAs have full responsibility for both the data storage semantics and access restrictions, and anyone else has to go through their gateways. There are obvious advantages to this, especially if more than one application has to have access to the data. Whether you go quite that far is up to you, but it's a perfectly valid approach.
While I believe that most data should be validated from the user interface (why send known bad stuff across the network, tying up resources?), I also believe it is irresponsible not to put constraints on the database, as the user interface is unlikely to be the only way that data ever gets into the database. Data also comes in from imports, other applications, quick script fixes run at the query window, and mass updates (to update all prices by 10%, for example). I want all bad records rejected no matter what their source, and the database is the only place where you can be assured that will happen. To skip the database integrity checks because the user interface does them is to guarantee that you will most likely eventually have data integrity issues, and then all of your data becomes meaningless and useless.
e.g. When a user is going to save some data he just entered. Should the application just send the data to the database and the database decides if the data is valid? Or should the application be the smart part in the line and check if the data is OK?
It's better to have validation on the front end as well as the server side, so that if the data is invalid the user is notified immediately. Otherwise he will have to wait for the DB to respond after a postback.
Where security is concerned, it's better to validate at both ends - the front end as well as the DB. Otherwise, how can the DB trust all the data that is sent by the application? ;-)
Validation should be done on the client side and the server side, and only once the data is valid should it be stored.
The only work that the database should do is the querying logic. So updating rows, inserting rows, selects, and everything else should be handled by the server-side logic, since that's where the real meat of the application lives.
Structuring your insert properly will handle any foreign key constraints, and getting your business logic to call a sproc will insert data in the correct format. I don't really consider this validation, but some people might.
My decision is: never use stored procedures in the database. Stored procedures are not portable.

Should data validation be done at the database level?

I am writing some stored procedures to create tables and add data. One of the fields is a column that indicates percentage. The value there should be 0-100. I started thinking, "where should the data validation for this be done? Where should data validation be done in general? Is it a case by case situation?"
It occurs to me that although today I've decided that 0-100 is a valid value for percentage, tomorrow, I might decide that any positive value is valid. So this could be a business rule, couldn't it? Should a business rule be implemented at the database level?
Just looking for guidance, we don't have a dba here anymore.
Generally, I would do validations in multiple places:
Client side using validators on the aspx page
Server side validations in the code behind
I use database validations as a last resort because database trips are generally more expensive than the two validations discussed above.
I'm definitely not saying "don't put validations in the database", but I would say, don't let that be the only place you put validations.
If your data is consumed by multiple applications, then the most appropriate place would be the middle tier that is (should be) consumed by the multiple apps.
What you are asking in terms of business rules takes on a completely different dimension when you start thinking of your entire application in terms of business rules. If the question of validations is small enough, do it in individual places rather than building a centralized business rules system. If it is a rather large system, then you can look into a business rules engine for this.
If you have a good data access tier, it almost doesn't matter which approach you take.
That said, a database constraint is a lot harder to bypass (intentionally or accidentally) than an application-layer constraint.
In my work, I keep the business logic and constraints as close to the database as I can, ensuring that there are fewer potential points of failure. Different constraints are enforced at different layers, depending on the nature of the constraint, but everything that can be in the database, is in the database.
In general, I would think that the closer the validation is to the data, the better.
This way, if you ever need to rewrite a top level application or you have a second application doing data access, you don't have two copies of the (potentially different) code operating on the same data.
In a perfect world the only thing talking (updating, deleting, inserting) to your database would be your business API. In that perfect world, database-level constraints are a waste of time; your data would already have been validated and cross-checked in your business API.
In the real world we get cowboys taking shortcuts and other people writing directly to the database. In this case some constraints on the database are well worth the effort. However, if you have people not using your API to read/write, you have to consider where you went wrong in your API design.
It would depend on how you are interacting with the database, IMO. For example, if the only way to the database is through your application, then just do the validation there.
If you are going to allow other applications to update the database, then you may want to put the validation in the database, so that no matter how the data gets in there it gets validated at the lowest level.
But, validation should go on at various levels, to give the user the quickest opportunity possible to know that there is a problem.
You didn't mention which version of SQL Server, but you can look at user defined datatypes and see if that would help you out, as you can just centralize the validation.
I worked for a government agency, and we had a -ton- of business rules. I was one of the DBA's, and we implemented a large number of the business rules in the database; however, we had to keep them pretty simple to avoid Oracle's dreaded 'mutating table' error. Things get complicated very quickly if you want to use triggers to implement business rules which span several tables.
Our decision was to implement business rules in the database where we could because data was coming in through the application -and- through data migration scripts. Keeping the business rules only in the application wouldn't do much good when data needed to be migrated in to the new database.
I'd suggest implementing business rules in the application for the most part, unless you have data being modified elsewhere than in the application. It can be easier to maintain and modify your business rules that way.
One can make a case for:
In the database implement enough to ensure overall data integrity (e.g. in SO this could be every question/answer has at least one revision).
In the boundary between presentation and business logic layer ensure the data makes sense for the business logic (e.g. in SO ensuring markup doesn't contain dangerous tags)
But one can easily make a case for different places in the application layers for every case. Overall philosophy of what the database is there for can affect this (e.g. is the database part of the application as a whole, or is it a shared data repository for many clients).
The only thing I try to avoid is using Triggers in the database, while they can solve legacy problems (if you cannot change the clients...) they are a case of the Action at a Distance anti-pattern.
I think basic data validation like you described makes sure that the data entered is correct. The applications should be validating data, but it doesn't hurt to have the data validated again on the database. Especially if there is more than one way to access the database.
You can reasonably restrict the database so that the data always makes sense. A database will support multiple applications using the same data, so some restrictions make sense.
I think the only real cost in doing so would be time. I think such restrictions aren't a big deal unless you are doing something crazy. And, you can change the rules later if needed (although some changes are obviously harder than others)
First ideal: have a "gatekeeper" so that your data's consistency does not depend upon each developer applying the same rules. Simple validation, such as range validation, may reasonably be implemented in the DB. If it changes, at least you have somewhere to put it.
Trouble is, the "business rules" tend to get much more complex. It can be useful to offload that processing to the application tier, where OO languages can be better for managing complex logic.
The trick then is to structure the app tier so that the gatekeeper is clear and unduplicated.
In a small organisation (no DBA ergo, small?) I would tend to put the business rules where you have strong development expertise.
This does not exclude doing initial validation in higher levels, for example you might validate all the way up in the UI to help the user get it right, but you don't depend upon that initial validation - you still have the gatekeeper.
If your percentage is always 'part divided by whole' (and you don't save the part and whole values elsewhere), then checking its value against [0-100] is appropriate at the db level. Additional constraints should be applied at other levels.
If your percentage means some kind of growth, then it may have any kind of values and should not be checked at db level.
It is a case-by-case situation. Usually you should check at the db level only those constraints which can never change or which have natural limits (like the first example).
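For the first case, here is a sketch of a database-level range check using SQLite from Python (the table and column names are hypothetical):

```python
import sqlite3

# A 'part divided by whole' percentage has a natural limit,
# so a CHECK constraint at the database level fits.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    id  INTEGER PRIMARY KEY,
    pct REAL NOT NULL CHECK (pct BETWEEN 0 AND 100)
)""")

conn.execute("INSERT INTO results (pct) VALUES (42.5)")  # accepted
try:
    # A growth-style percentage like 150 would need a different rule,
    # which is exactly why this check is a case-by-case decision.
    conn.execute("INSERT INTO results (pct) VALUES (150)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

If the business rule later loosens to "any positive value", the CHECK clause is one schema change; it is the rules that can change like this which the answer suggests keeping out of the db.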
Richard is right: the question is subjective the way it has been asked here.
Another take is: what are the schools of thought on this? Do they vary by sector or technology?
I've been doing Ruby on Rails for a bit now, and there, even relationships between records (one-to-many, etc.) are NOT respected at the DB level, not to mention cascade deletes and all that stuff. Neither are any kinds of limits aside from basic data types, which allow the DB to do its work. Your percentage thing is not handled at the DB level but rather at the data-model level.
So I think that one of the trends that we're seeing lately is to give more power to the app level. You MUST check the data coming in to your server (so somewhere in the presentation level) and you MIGHT check it on the client and you MIGHT check again in the business layer of your app. Why would you want to check it again at the database level?
However: the darndest things do happen and sometimes the DB gets values that are "impossible" reading the business-layer's code. So if you're managing, say, financial data, I'd say to put in every single constraint possible at every level. What do people from different sectors do?

Where to perform the data validation for a desktop application? On the database or in code?

In a single-user desktop application that uses a database for storage, is it necessary to perform the data validation on the database, or is it ok to do it in code? What are the best practices, and if there are none, what are the advantages and disadvantages of each of the two possibilities?
Best practice is both. The database should be responsible for ensuring its own state is valid, and the program should ensure that it doesn't pass rubbish to the database.
The disadvantage is that you have to write more code, and you have a marginal extra runtime overhead - neither of which are usually particularly good reasons not to do it.
The advantage is that the database ensures low-level validity, but the program can help the user to enter valid data much better than by just passing back errors from the database - it can intervene earlier and provide UI hints (e.g. colouring invalid text fields red until they have been completed correctly, etc)
-- edit (more info promoted from comments) --
The smart approach in many cases is to write a data driven validator at each end and use a shared data file (e.g. XML) to drive the validations. If the spec for a validation changes, you only need to edit the description file and both ends of the validation will be updated in sync. (no code change).
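A sketch of that data-driven idea in Python: both ends load the same rules file, so a spec change is a data edit rather than a code change. The answer suggests XML; JSON is used here purely to keep the demo short, and the field names and patterns are invented:

```python
import json
import re

# Shared rules file (would normally live on disk, loaded by both the
# client-side and server-side validators).
RULES_JSON = """{
    "phone":    {"pattern": "^[0-9]{7,15}$"},
    "postcode": {"pattern": "^[A-Z0-9 ]{3,8}$"}
}"""

rules = json.loads(RULES_JSON)

def validate(field, value):
    """Apply the shared, declarative rule for a field to a value."""
    return re.match(rules[field]["pattern"], value) is not None

print(validate("phone", "5551234"))   # digits only: passes
print(validate("phone", "555-1234"))  # contains '-': fails
```

Changing the `phone` pattern in the shared file updates both ends of the validation in sync, with no code change on either side.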
You do both.
The best practice for data validation is to sanitize your program's inputs to the database. However, this does not excuse the database of having its own validations. Programming your validations in your code only accounts for deltas produced in your managed environment. It does not account for corrupted databases, administration error, and the remote/future possibility that your database will be used by more than one application, in which case the application-level data validation logic should be duplicated in this new application.
Your database should have its own validation routines. You needn't think of them as cleaning the incoming data as much as it is running sanity checks/constraints/assertions. At no time should a database have invalid data in it. That's the entire point of integrity constraints.
To summarize, you do both of:
Sanitize and validate user inputs before they reach your data store.
Equip your data store with constraints that reinforce your validations.
You should always validate in the code before the data reaches the database.
Data lasts longer than applications. It hangs around for years and years. This is true even if your application doesn't handle data of interest to regulatory authorities or law enforcement agencies, but the range of data which interests those guys keeps increasing.
Also, it is still more common for data to be shared between applications within an organisation (reporting, data warehouse, data hub, web services) or exchanged between organisations than it is for one application to use multiple databases. Such exchanges may involve other mechanisms for loading and extracting data besides the front-end application which notionally owns the schema.
So, if you only want to code your data validation rules once put them in the database. If you like belt'n'braces put them in the GUI as well.
Wouldn't it be smart to check the data before you try to store it? Database connections and resources are expensive. Try to make sure you have some sort of logic to validate the data before shipping it off to the database. I've seen some people do it on the front end, others on the back end, others even both.
It may be a good idea to create an assembly or validation tier. Validate the data and then ship it over to db.
In the application, please!
Its very difficult to translate sqlerror -12345 into a message that means anything to an enduser. In many cases your user may be long gone by the time the database gets hold of the data (e.g. I hit submit then go look to see how many down votes I got in stackoverflow today).
The first prioirity is to validate the data in the application before sending it to the database.
The second priority should be to validate/screen the data at the front end to prevent the user entering invalid data or at least warn them immediatly that the data is inccorrect.
The third priority (if the application is important enough and your budget is big enough) would be for the database itself to verify the correctness of any inserts and updates via constriants and triggers etc.

Preventing bad data input

Is it good practice to delegate data validation entirely to the database engine constraints?
Validating data from the application doesn't prevent invalid insertion from another software (possibly written in another language by another team). Using database constraints you reduce the points where you need to worry about invalid input data.
If you put validation both in database and application, maintenance becomes boring, because you have to update code for who knows how many applications, increasing the probability of human errors.
I just don't see this being done very much, looking at code from free software projects.
Validate at input time. Validate again before you put it in the database. And have database constraints to prevent bad input. And you can bet in spite of all that, bad data will still get into your database, so validate it again when you use it.
It seems like every day some web app gets hacked because they did all their validation in the form or worse, using Javascript, and people found a way to bypass it. You've got to guard against that.
Paranoid? Me? No, just experienced.
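To make the "bypassed JavaScript" point concrete, here is a sketch of server-side re-validation with hypothetical rules. The same checks may also run in the browser for responsiveness, but the server never trusts that they did:

```python
def server_side_validate(form):
    """Re-validate a submitted form on the server, never trusting
    the browser. Rules here are illustrative only."""
    errors = {}
    age = form.get("age", "")
    if not age.isdigit() or not (0 < int(age) < 150):
        errors["age"] = "age must be a whole number between 1 and 149"
    if len(form.get("username", "")) < 3:
        errors["username"] = "username must be at least 3 characters"
    return errors

# A request forged around the JavaScript layer (e.g. sent with curl)
# is still rejected:
print(server_side_validate({"age": "-5; DROP TABLE users", "username": "x"}))
```

Note that `"-5; DROP TABLE users"` fails the `isdigit()` check outright; combined with parameterized queries at the database layer, the injection attempt never gets near SQL.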
It's best to, where possible, have your validation rules specified in your database and use or write a framework that makes those rules bubble up into your front end. ASP.NET Dynamic Data helps with this and there are some commercial libraries out there that make it even easier.
This can be done both for simple input validation (like numbers or dates) and related data like that constrained by foreign keys.
In summary, the idea is to define the rules in one place (the database, most of the time) and have code in the other layers that enforces those rules.
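One way the "bubbling up" can work, sketched with SQLite's schema introspection (other engines expose the same information via `information_schema`): the front end reads the `NOT NULL` constraints out of the database instead of repeating them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        phone TEXT
    )
""")

def required_fields(conn, table):
    """Read NOT NULL constraints out of the schema so the front end
    can mark the same fields required, without duplicating the rule."""
    cursor = conn.execute(f"PRAGMA table_info({table})")
    # Each row is (cid, name, type, notnull, default_value, pk);
    # skip the primary key, which the application never fills in.
    return [row[1] for row in cursor if row[3] and not row[5]]

print(required_fields(conn, "customers"))  # ['name']
```

If `name` later becomes optional, dropping the constraint in the database updates the form automatically; nothing has to change in the front-end code.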
The disadvantage to leaving the logic to the database is then you increase the load on that particular server. Web and application servers are comparatively easy to scale outward, but a database requires special techniques. As a general rule, it's a good idea to put as much of the computational logic into the application layer and keep the interaction with the database as simple as possible.
With that said, it is possible that your application may not need to worry about such heavy scalability issues. If you are certain that database server load will not be a problem for the foreseeable future, then go ahead and put the constraints on the database. You are quite correct that this improves the organization and simplicity of your system as a whole by keeping validation logic in a central location.
There are other concerns than just SQL injection with input. You should take the most defensive stance possible whenever accepting user input. For example, a user might enter a link to an "image" in a textbox that is actually a PHP script that runs something nasty.
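One defensive move against the fake-image trick is to sniff the file's magic bytes rather than trusting its name or extension. A minimal sketch covering two common formats (a real implementation would recognise more):

```python
# Leading byte signatures for two common image formats.
IMAGE_SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
}

def sniff_image(data):
    """Return the detected image type, or None for anything else."""
    for signature, kind in IMAGE_SIGNATURES.items():
        if data.startswith(signature):
            return kind
    return None

print(sniff_image(b"\x89PNG\r\n\x1a\n...."))         # png
print(sniff_image(b"<?php system($_GET['cmd']);"))   # None
```

A file named `photo.jpg` whose content starts with `<?php` fails the check, whatever its extension claims.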
If you design your application well, you should not have to laboriously check all input. For example, you could use a Forms API which takes care of most of the work for you, and a database layer which does much the same.
This is a good resource for basic checking of vulnerabilities:
http://ha.ckers.org/xss.html
It's far too late by the time the data gets to your database to provide meaningful validation for your users and applications. You don't want your database doing all the validation, since that will slow things down considerably, and the database can't express the logic as clearly. Similarly, as you grow you'll be writing more application-level transactions to complement your database transactions.
I would say it's potentially a bad practice, depending on what happens when the query fails. For example, if your database could throw an error that was intelligently handled by an application, then you might be ok.
On the other hand, if you don't put any validation in your app, you might not have any bad data, but you may have users thinking they entered stuff that doesn't get saved.
Implement as much data validation as you can at the database end without compromising other goals. For example, if speed is an issue, you may want to consider not using foreign keys, etc. Furthermore, some data validation can only be performed on the application side, e.g., ensuring that email addresses have valid domains.
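The email example illustrates the split well: the database can enforce `NOT NULL` and `UNIQUE` on an email column, but checking the domain is application work. A deliberately simple syntactic sketch; a production system would follow it with a DNS MX lookup, which is omitted here:

```python
import re

# Minimal shape check: something@domain.tld, no whitespace.
# Intentionally loose; full address validation is far more involved.
EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def email_domain(address):
    """Return the domain part if the address is plausibly formed,
    else None. A real check would then resolve the domain's MX record."""
    match = EMAIL_RE.match(address)
    return match.group(1) if match else None

print(email_domain("alice@example.com"))  # example.com
print(email_domain("not-an-email"))       # None
```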
Another disadvantage of doing data validation in the database is that you often don't validate the same way in every case. In fact, the rules often depend on application logic (user roles), and sometimes you might want to bypass validation altogether (cron jobs and maintenance scripts).
I've found that doing validation in the application, rather than in the database, works well. Of course then, all the interaction needs to go through your application. If you have other applications that work with your data, your application will need to support some sort of API (hopefully REST).
I don't think there is one right answer, it depends on your use.
If you are going to have a very heavily used system, with the potential that the database performance might become a bottleneck, then you might want to move the responsibility for validation to the front-end where it is easier to scale with multiple servers.
If you have multiple applications interacting with the database, then you might not want to replicate and maintain the validation rules across multiple applications, so then the database might be the better place.
You might want a slicker input screen that doesn't just hit the user with validation warnings when they try to save a record; maybe you want to validate a field after data has been entered and it loses focus, or even as the user types, changing the font colour as validation fails/passes.
Also related to constraints is warning about suspect data. In my application I have hard constraints in the database (e.g. someone can't start a job before their date of birth), but the front end warns about data that is possibly correct but suspect (e.g. an eight-year-old starting a job).
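That hard/soft split can be sketched as a function returning errors and warnings separately; the hard rule would also be enforced again by a database `CHECK` constraint, while the warning lives only in the front end. The age thresholds here are illustrative:

```python
from datetime import date

def check_employment(birth_date, start_date):
    """Split problems into hard errors (also enforced by the database)
    and soft warnings (front-end only)."""
    errors, warnings = [], []
    if start_date < birth_date:
        errors.append("start date is before date of birth")
    else:
        age = (start_date - birth_date).days // 365
        if age < 14:  # legal but suspect; ask the user to confirm
            warnings.append(f"employee would be {age} at start date")
    return errors, warnings

print(check_employment(date(2000, 5, 1), date(1999, 1, 1)))  # hard error
print(check_employment(date(2015, 5, 1), date(2023, 6, 1)))  # warning only
```

The UI can then block the save on `errors` but merely highlight `warnings` and let the user proceed.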