How do you handle database exceptions in your application?
Are you trying to validate data prior to passing it to the DB, or just relying on the DB schema's validation logic?
Do you try to recover from some kind of DB errors (e.g. timeouts)?
Here are some approaches:
Validate data prior to passing it to the DB
Leave validation to the DB and handle DB exceptions properly
Validate on both sides
Validate obvious constraints in business logic and leave complex validation to the DB
What approach do you use? Why?
Updates:
I'm glad to see growing discussion.
Let’s try to sum up community answers.
Suggestions:
Validate on both sides
Check business logic constraints on the client side; let the DB do integrity checks (from hamishmcn)
Check early to avoid bothering the DB (from ajmastrean)
Check early to improve user experience (from Will)
Keep DB-interacting code in one place to simplify development (from hamishmcn)
Object-relational mapping (NHibernate, Linq, etc.) can help you deal with constraints (from ajmastrean)
Database-side validation, not client-side alone, is necessary for security reasons (from Seb Nilsson)
Do you have anything else to add? This has turned into a validation-specific question; we are still missing the core, i.e. database-related error best practices: which errors should be handled, and which should bubble up?
#aku: DRY is nice, but it's not always possible. Validation is one of those places, because you will have three completely different and unrelated places where validation is not only possible but absolutely needed: within the UI, within the business logic, and within the database.
Think of a web application. You want to reduce trips to the server, so you include javascript validation of client data entry. But you can't trust what the user enters, so you must perform validation within your business logic before touching the database. And the database must have its own validation in order to prevent data corruption.
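To make the middle layer concrete, here is a minimal sketch (the field names and rules are hypothetical) of server-side business-logic validation that runs regardless of whatever JavaScript checks the browser performed:

```python
import re

def validate_signup(form: dict) -> list[str]:
    """Server-side checks; the client's JavaScript may have run the same
    rules already, but the server cannot trust that it did."""
    errors = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", form.get("email", "")):
        errors.append("email is not well-formed")
    if not 0 < len(form.get("name", "").strip()) <= 100:
        errors.append("name must be 1-100 characters")
    return errors
```

The database layer would then repeat whatever it can express as constraints (NOT NULL, length limits), so corrupt data cannot arrive by any path.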
There's no clean way to unify these three different types of validation within a single component.
There are some attempts being made to unify cross-cutting responsibilities like validation within policy injectors like the P&P group's Policy Injection Application Block combined with their Validation Application Block, but these are still code based. If you have validation that's not in code, you still have to maintain parallel logic separately...
There is one killer-reason to validate on both the client-side and on the database-side, and that is security. Especially when you start using AJAX-stuff, hackable URLs and other things that make your site (in this case) more friendly to users and hackers.
Validate on the client to provide a smooth experience and to tell the user early to correct their input. Also validate in the database (or in the business logic, if that is considered a totally secure gateway to the database) for the security of your database.
You want to reduce unnecessary trips to the DB, so performing validation within the application is a good practice. Also, it allows you to handle data errors where it is most easy to recover from: up near the UI (whether in the controller or within the UI layer for simpler apps) where the data is entered.
There are some data errors that you can't check for programmatically, however. For instance, you can't validate the existence of related data without round-tripping to the db. Data errors like these should be validated by the database through the use of relationships, triggers, etc.
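As an illustration, a foreign key lets the database itself verify the existence of related data. This sketch uses the standard library's sqlite3; the table names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only on request
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id))""")
conn.execute("INSERT INTO customers (id) VALUES (1)")

conn.execute("INSERT INTO orders (customer_id) VALUES (1)")  # parent exists
try:
    conn.execute("INSERT INTO orders (customer_id) VALUES (99)")
    fk_rejected = False
except sqlite3.IntegrityError:  # no customer 99: the DB refuses the row
    fk_rejected = True
```

The application never had to query for customer 99; the relationship does the check at the only layer that can do it atomically.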
Where you deal with errors returned by database calls is an interesting one. You could deal with them at the data layer, the business logic layer, or the UI layer. The best practice in this instance is to let those errors bubble up to the last responsible moment before handling them.
For example, if you have an ASP.NET MVC web application, you have three layers (from bottom to top): Database, controller and UI (model, controller, and view). Any errors thrown by your data layer should be allowed to bubble up into your controller. At this level your application "knows" what the user is attempting to do, and can correctly inform the user about the error, suggesting different ways to handle it. Attempting to recover from these errors within the data layer makes it much harder to know what's going on within the controller. And, of course, placing business logic within the UI is not considered a best practice.
TL;DR: Validate everywhere, handle validation errors at the last responsible moment.
I try to validate on both sides. One rule I always follow is: never trust input from the user. Following this to its conclusion, I will usually have some front-end validation on the form/web page which will not even allow submission of improperly formed data. This is a blunt tool, meaning you can check/parse the value to make sure a date field contains a date. From there, I usually let my business logic check whether the data entry makes sense in context with how it was submitted. For example, does the date submitted fall into the expected range? Does the currency value submitted fall into the expected range? Finally, on the server side, foreign key constraints and indexes can catch any errors that slip through, which will bubble up a DB exception as a last resort that can be handled by the app code. I use this method because it filters out as many errors as possible before the DB call is invoked.
An object-relational mapping (ORM) tool, like NHibernate (or better yet, ActiveRecord), can help you avoid a lot of validation by allowing the data model to be built right into your code as a proper C# class. You may avoid trips to the database as well, thanks to great caching and validation models built into the framework.
In general, I try to validate data as soon as possible after it has been entered. This is so that I can give helpful messages to the user earlier than after they have clicked "submit" or the equivalent.
By the time it comes to making the db call, I am hopeful that the data I am passing will be fairly good.
I try to keep db calls in one file (or group of files) that share helper methods, to make it as easy as possible for the programmer (me, or whoever else adds calls) to log details about the exception, what parameters were passed in, etc.
The sorts of apps that I was writing (I've since moved jobs) were in-house fat-client apps.
I would try to keep the business logic in the client, and do more mechanical validation on the db (i.e. validation that relates only to the procedure's ability to run, as opposed to higher-level validation).
In short, validate where you can, and try to keep related types of validation together.
Related
I have an application with a database that has a lot of foreign keys.
A lot of the data being inserted comes from users.
What I was wondering is: is it better to run validation on user input before inserts, or to insert and just write error-catching code?
Are there performance/stylistic/security benefits to either?
The knee-jerk response seems to be to do both, but validating beforehand seems the safer option if only one of the two were done.
This is a brilliant question - and in my view, one of the benefits of the use of an object relational mapper.
In general, the relational tools your database provides are all about protecting data validity - "customer" must have an "account manager" relationship, "user_name" may not be null, "user_id" must be unique etc.
Those tools are necessary, but not sufficient, to validate the data users input into the database.
Your front-end/middle tier code has its own rules - which are usually not expressed in relational terms; in most modern development languages, they're about objects and the relationships between the objects, or their attributes - for instance, a phone number must contain numbers, a name must start with a capital letter.
I'm assuming your users don't interact with the database through SQL - that you've built some kind of user interface which allows them to look up associations (and thus populate the foreign key).
In this case, my preferred architecture is:
Validate as early as you can - in JavaScript for web apps, or in the GUI code for desktop apps. This reduces the number of round trips, and thus creates a more responsive user experience.
Have each layer implement validation for its key domain logic - your classes should validate their expectations, your database should validate the foreign keys and nullability.
Don't depend on "higher" levels to validate - you can't predict how your code is going to be used in the future; a class you've written for one application may get re-used by another.
Work out a way to keep the validation rules in sync across the layers - either through technology or process.
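The "don't depend on higher levels to validate" point above can be sketched as a class that validates its own expectations rather than trusting its callers (the phone-number rule itself is just an example):

```python
class PhoneNumber:
    """Refuses to construct an invalid instance, whoever the caller is."""
    def __init__(self, raw: str):
        digits = "".join(ch for ch in raw if ch.isdigit())
        if not 7 <= len(digits) <= 15:
            raise ValueError(f"not a plausible phone number: {raw!r}")
        self.digits = digits
```

An application that re-uses this class years later still gets the check, even if its own front-end validation is missing or stale.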
I don't think it should be a question of which. I always do both, as you suggested. As far as pros and cons, I will say that I think it is easier to find user input problems and handle them in a user friendly manner earlier in the life cycle, as opposed to waiting until the database throws back an error. However, if your validation does not catch everything or if there is a database error unrelated to user input, it is still important to catch the error at some point.
No matter what, you'll need to trap errors. Who knows what will happen? SQL timeout? No amount of data validation will help with those kinds of problems.
Like most problems, they are cheapest to catch early; don't let users enter invalid data/validate data before sending it to a server. Not business logic per se, but that dates don't contain alpha characters, etc.
In a mid-tier, I would still validate inputs. Another developer may call your business logic without vetting the inputs. I prefer my business logic in this area because it is easier to write and debug than creating logic in stored procedures.
On the database, I prefer constraints when rules are absolutes such as you can't have an internal user without a user id; the reason is to prevent other developers from writing scripts that would do things that should not be allowed.
OK, so let's start from the beginning. Say I'm developing a big project. It's no secret that such projects contain a lot of common logic. Say, for example, that an online shop order must have some items, and the sum of the prices of all items must be exactly 1000. So I implement this logic in an MVC3 web application and validate the order there. If there are issues, it tells the user about them and asks him to repost the form.
But the project has another important part: the DB. Here I could also wrap this validation logic in a stored procedure and add orders only through that proc, to be sure there are no inconsistencies. (Or should I use checks on the table instead?)
Here's where I get stuck. It seems necessary to duplicate the logic in two places to get data integrity. Of course, I could keep the validation logic only on the DB side, though I suspect that would hurt application performance.
I'm sure more experienced people have a complete and elegant solution to this question.
So, how can I implement the logic in only one place?
Simple answer, not to everyone's liking
Your database will outlast your client code
You will have multiple client code bases for your database
It's usual to have validation in several places: would you trust user input from a browser and skip server side validation?
Define "validation": complex business rules or data integrity? If the latter, then it should be the RDBMS's responsibility
Edit, about the latter part
Not all logic needs to be implemented in the database: only logic that will be common to several client code bases. Or you can funnel these common requests through a separate tier/service that deals with it.
Note, some business logic is based on aggregations or cross-table or whole-table checks that simply are best done in the database
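A sketch of such a cross-table check living in the database itself: a trigger (sqlite3 here; the "total must be exactly 1000" rule comes from the question above) that refuses to post an order whose items don't sum correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, posted INTEGER DEFAULT 0);
CREATE TABLE order_items (order_id INTEGER, price INTEGER);
CREATE TRIGGER order_total_check BEFORE UPDATE OF posted ON orders
WHEN NEW.posted = 1 AND (SELECT COALESCE(SUM(price), 0)
                         FROM order_items WHERE order_id = NEW.id) <> 1000
BEGIN
    SELECT RAISE(ABORT, 'order total must be exactly 1000');
END;
INSERT INTO orders (id) VALUES (1);
INSERT INTO order_items VALUES (1, 400), (1, 600);
""")
conn.execute("UPDATE orders SET posted = 1 WHERE id = 1")  # sums to 1000: allowed
```

Because the aggregate is computed inside the same transaction as the update, no client code path, cowboy or otherwise, can post an inconsistent order.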
I am writing some stored procedures to create tables and add data. One of the fields is a column that indicates percentage. The value there should be 0-100. I started thinking, "where should the data validation for this be done? Where should data validation be done in general? Is it a case by case situation?"
It occurs to me that although today I've decided that 0-100 is a valid value for percentage, tomorrow, I might decide that any positive value is valid. So this could be a business rule, couldn't it? Should a business rule be implemented at the database level?
Just looking for guidance, we don't have a dba here anymore.
Generally, I would do validations in multiple places:
Client side using validators on the aspx page
Server side validations in the code behind
I use database validations as a last resort because database trips are generally more expensive than the two validations discussed above.
I'm definitely not saying "don't put validations in the database", but I would say, don't let that be the only place you put validations.
If your data is consumed by multiple applications, then the most appropriate place would be the middle tier that is (should be) consumed by the multiple apps.
What you are asking in terms of business rules takes on a completely different dimension when you start thinking of your entire application in terms of business rules. If the question of validations is small enough, do it in individual places rather than building a centralized business rules system. If it is a rather large system, then you can look into a business rules engine for this.
If you have a good data access tier, it almost doesn't matter which approach you take.
That said, a database constraint is a lot harder to bypass (intentionally or accidentally) than an application-layer constraint.
In my work, I keep the business logic and constraints as close to the database as I can, ensuring that there are fewer potential points of failure. Different constraints are enforced at different layers, depending on the nature of the constraint, but everything that can be in the database, is in the database.
In general, I would think that the closer the validation is to the data, the better.
This way, if you ever need to rewrite a top level application or you have a second application doing data access, you don't have two copies of the (potentially different) code operating on the same data.
In a perfect world, the only thing talking (updating, deleting, inserting) to your database would be your business API. In that perfect world, database-level constraints are a waste of time; your data would already have been validated and cross-checked in your business API.
In the real world, we get cowboys taking shortcuts and other people writing directly to the database. In this case, some constraints on the database are well worth the effort. However, if you have people not using your API to read/write, you have to consider where you went wrong in your API design.
It would depend on how you are interacting with the database, IMO. For example, if the only way to the database is through your application, then just do the validation there.
If you are going to allow other applications to update the database, then you may want to put the validation in the database, so that no matter how the data gets in there it gets validated at the lowest level.
But, validation should go on at various levels, to give the user the quickest opportunity possible to know that there is a problem.
You didn't mention which version of SQL Server, but you could look at user-defined data types and see if that would help, since they let you centralize the validation.
I worked for a government agency, and we had a ton of business rules. I was one of the DBAs, and we implemented a large number of the business rules in the database; however, we had to keep them pretty simple to avoid Oracle's dreaded 'mutating table' error. Things get complicated very quickly if you want to use triggers to implement business rules which span several tables.
Our decision was to implement business rules in the database where we could because data was coming in through the application -and- through data migration scripts. Keeping the business rules only in the application wouldn't do much good when data needed to be migrated in to the new database.
I'd suggest implementing business rules in the application for the most part, unless you have data being modified elsewhere than in the application. It can be easier to maintain and modify your business rules that way.
One can make a case for:
In the database implement enough to ensure overall data integrity (e.g. in SO this could be every question/answer has at least one revision).
In the boundary between presentation and business logic layer ensure the data makes sense for the business logic (e.g. in SO ensuring markup doesn't contain dangerous tags)
But one can easily make a case for different places in the application layers for every case. Overall philosophy of what the database is there for can affect this (e.g. is the database part of the application as a whole, or is it a shared data repository for many clients).
The only thing I try to avoid is using Triggers in the database, while they can solve legacy problems (if you cannot change the clients...) they are a case of the Action at a Distance anti-pattern.
I think basic data validation like you described makes sure that the data entered is correct. The applications should be validating data, but it doesn't hurt to have the data validated again on the database. Especially if there is more than one way to access the database.
You can reasonably restrict the database so that the data always makes sense. A database will support multiple applications using the same data, so some restrictions make sense.
I think the only real cost in doing so would be time. I think such restrictions aren't a big deal unless you are doing something crazy. And, you can change the rules later if needed (although some changes are obviously harder than others)
First ideal: have a "gatekeeper" so that your data's consistency does not depend upon each developer applying the same rules. Simple validation, such as range validation, may reasonably be implemented in the DB. If it changes, at least you have one place to put it.
Trouble is the "business rules" tend to get much more complex. It can be useful to offload processing to the application tier where OO languages can be better for managing complex logic.
The trick then is to structure the app tier so that the gatekeeper is clear and unduplicated.
In a small organisation (no DBA; ergo, small?) I would tend to put the business rules where you have strong development expertise.
This does not exclude doing initial validation in higher levels, for example you might validate all the way up in the UI to help the user get it right, but you don't depend upon that initial validation - you still have the gatekeeper.
If your percentage is always 'part divided by whole' (and you don't save the part and whole values elsewhere), then checking its value against [0-100] is appropriate at the db level. Additional constraints should be applied at other levels.
If your percentage means some kind of growth, then it may have any kind of values and should not be checked at db level.
It is case by case situation. Usually you should check at db level only constraints, which can never change or have natural limits (like first example).
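Where the check does belong at the db level, a CHECK constraint keeps the rule in one declared place with the data (sqlite3 shown for illustration; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE measurements (
    id INTEGER PRIMARY KEY,
    pct REAL NOT NULL CHECK (pct BETWEEN 0 AND 100))""")
conn.execute("INSERT INTO measurements (pct) VALUES (42.5)")  # in range
try:
    conn.execute("INSERT INTO measurements (pct) VALUES (150)")
    rejected = False
except sqlite3.IntegrityError:  # out of range: the constraint fires
    rejected = True
```

If the business rule later relaxes to "any positive value", the change is one schema edit rather than a hunt through every application that writes the column.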
Richard is right: the question is subjective the way it has been asked here.
Another take is: what are the schools of thought on this? Do they vary by sector or technology?
I've been doing Ruby on Rails for a bit now, and there, even relationships between records (one-to-many etc.) are NOT respected on the DB level, not to mention cascade deleting and all that stuff. Neither are any kind of limits aside from basic data types, which allow the DB to do its work. Your percentage thing is not handled on the DB level but rather at the Data Model level.
So I think that one of the trends that we're seeing lately is to give more power to the app level. You MUST check the data coming in to your server (so somewhere in the presentation level) and you MIGHT check it on the client and you MIGHT check again in the business layer of your app. Why would you want to check it again at the database level?
However: the darndest things do happen and sometimes the DB gets values that are "impossible" reading the business-layer's code. So if you're managing, say, financial data, I'd say to put in every single constraint possible at every level. What do people from different sectors do?
In a single-user desktop application that uses a database for storage, is it necessary to perform the data validation on the database, or is it ok to do it in code? What are the best practices, and if there are none, what are the advantages and disadvantages of each of the two possibilities?
Best practice is both. The database should be responsible for ensuring its own state is valid, and the program should ensure that it doesn't pass rubbish to the database.
The disadvantage is that you have to write more code, and you have a marginal extra runtime overhead - neither of which are usually particularly good reasons not to do it.
The advantage is that the database ensures low-level validity, but the program can help the user to enter valid data much better than by just passing back errors from the database - it can intervene earlier and provide UI hints (e.g. colouring invalid text fields red until they have been completed correctly, etc)
-- edit (more info promoted from comments) --
The smart approach in many cases is to write a data driven validator at each end and use a shared data file (e.g. XML) to drive the validations. If the spec for a validation changes, you only need to edit the description file and both ends of the validation will be updated in sync. (no code change).
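A rough sketch of that data-driven idea, with JSON standing in for the XML description file purely for brevity: each end loads the same file and builds its validators from it.

```python
import json

# Shared description file (would live on disk, loaded by both ends).
RULES = '{"age": {"min": 0, "max": 130}, "percent": {"min": 0, "max": 100}}'

def build_validators(spec_text: str):
    """Turn the shared spec into a field -> predicate map, at either end."""
    spec = json.loads(spec_text)
    return {field: (lambda value, lo=rule["min"], hi=rule["max"]: lo <= value <= hi)
            for field, rule in spec.items()}

validators = build_validators(RULES)
```

A server written in another language would parse the same file with its own small loader, so when a validation spec changes, you edit the description file and both ends update in sync, with no code change.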
You do both.
The best practice for data validation is to sanitize your program's inputs to the database. However, this does not excuse the database of having its own validations. Programming your validations in your code only accounts for deltas produced in your managed environment. It does not account for corrupted databases, administration error, and the remote/future possibility that your database will be used by more than one application, in which case the application-level data validation logic should be duplicated in this new application.
Your database should have its own validation routines. You needn't think of them as cleaning the incoming data as much as it is running sanity checks/constraints/assertions. At no time should a database have invalid data in it. That's the entire point of integrity constraints.
To summarize, you do both of:
Sanitize and validate user inputs before they reach your data store.
Equip your data store with constraints that reinforce your validations.
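The two steps together, in a minimal sketch (sqlite3; the email rule is deliberately crude): the application validates first, and a constraint in the store catches anything that slips through by another path.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT NOT NULL UNIQUE)")

def add_user(email: str) -> bool:
    if "@" not in email:              # step 1: application-level validation
        return False
    try:
        conn.execute("INSERT INTO users VALUES (?)", (email,))
        return True
    except sqlite3.IntegrityError:    # step 2: the constraint backstop
        return False
```

The UNIQUE constraint reinforces the rule even for writes that bypass `add_user`, which is exactly the point of keeping integrity checks in the data store.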
You should always validate in the code before the data reaches the database.
Data lasts longer than applications. It hangs around for years and years. This is true even if your application doesn't handle data of interest to regulatory authorities or law enforcement agencies, but the range of data which interests those guys keeps increasing.
Also, it is still more common for data to be shared between applications within an organisation (reporting, data warehouse, data hub, web services), or exchanged between organisations, than it is for one application to use multiple databases. Such exchanges may involve mechanisms for loading and extracting data other than the front-end application which notionally owns the schema.
So, if you only want to code your data validation rules once put them in the database. If you like belt'n'braces put them in the GUI as well.
Wouldn't it be smart to check the data before you try to store it? Database connections and resources are expensive. Try to make sure you have some sort of logic to validate the data before shipping it off to the database. I've seen some people do it on the front end, others on the back end, others even both.
It may be a good idea to create an assembly or validation tier: validate the data and then ship it over to the db.
In the application please!
It's very difficult to translate sqlerror -12345 into a message that means anything to an end user. In many cases your user may be long gone by the time the database gets hold of the data (e.g. I hit submit, then go look to see how many downvotes I got on Stack Overflow today).
The first priority is to validate the data in the application before sending it to the database.
The second priority should be to validate/screen the data at the front end, to prevent the user from entering invalid data, or at least to warn them immediately that the data is incorrect.
The third priority (if the application is important enough and your budget is big enough) would be for the database itself to verify the correctness of any inserts and updates via constraints, triggers, etc.
Is it good practice to delegate data validation entirely to the database engine constraints?
Validating data in the application doesn't prevent invalid inserts from other software (possibly written in another language by another team). Using database constraints, you reduce the number of points where you need to worry about invalid input data.
If you put validation both in the database and in the application, maintenance becomes tedious, because you have to update code in who knows how many applications, increasing the probability of human error.
I just don't see this being done very much, looking at code from free software projects.
Validate at input time. Validate again before you put it in the database. And have database constraints to prevent bad input. And you can bet in spite of all that, bad data will still get into your database, so validate it again when you use it.
It seems like every day some web app gets hacked because they did all their validation in the form or worse, using Javascript, and people found a way to bypass it. You've got to guard against that.
Paranoid? Me? No, just experienced.
It's best to, where possible, have your validation rules specified in your database and use or write a framework that makes those rules bubble up into your front end. ASP.NET Dynamic Data helps with this and there are some commercial libraries out there that make it even easier.
This can be done both for simple input validation (like numbers or dates) and related data like that constrained by foreign keys.
In summary, the idea is to define the rules in one place (the database most the time) and have code in other layers that will enforce those rules.
The disadvantage to leaving the logic to the database is then you increase the load on that particular server. Web and application servers are comparatively easy to scale outward, but a database requires special techniques. As a general rule, it's a good idea to put as much of the computational logic into the application layer and keep the interaction with the database as simple as possible.
With that said, it is possible that your application may not need to worry about such heavy scalability issues. If you are certain that database server load will not be a problem for the foreseeable future, then go ahead and put the constraints on the database. You are quite correct that this improves the organization and simplicity of your system as a whole by keeping validation logic in a central location.
There are other concerns than just SQL injection with input. You should take the most defensive stance possible whenever accepting user input. For example, a user might be able to enter a link to an image into a textbox, which is actually a PHP script that runs something nasty.
If you design your application well, you should not have to laboriously check all input. For example, you could use a Forms API which takes care of most of the work for you, and a database layer which does much the same.
This is a good resource for basic checking of vulnerabilities:
http://ha.ckers.org/xss.html
It's far too late by the time the data gets to your database to provide meaningful validation for your users and applications. You don't want your database doing all the validation, since that'll slow things down considerably, and the database doesn't express the logic as clearly. Similarly, as you grow, you'll be writing more application-level transactions to complement your database transactions.
I would say it's potentially a bad practice, depending on what happens when the query fails. For example, if your database could throw an error that was intelligently handled by an application, then you might be ok.
On the other hand, if you don't put any validation in your app, you might not have any bad data, but you may have users thinking they entered stuff that doesn't get saved.
Implement as much data validation as you can at the database end without compromising other goals. For example, if speed is an issue, you may want to consider not using foreign keys, etc. Furthermore, some data validation can only be performed on the application side, e.g., ensuring that email addresses have valid domains.
Another disadvantage of doing data validation in the database is that often you don't validate the same way in every case. In fact, it often depends on application logic (user roles), and sometimes you might want to bypass validation altogether (cron jobs and maintenance scripts).
I've found that doing validation in the application, rather than in the database, works well. Of course then, all the interaction needs to go through your application. If you have other applications that work with your data, your application will need to support some sort of API (hopefully REST).
I don't think there is one right answer, it depends on your use.
If you are going to have a very heavily used system, with the potential that the database performance might become a bottleneck, then you might want to move the responsibility for validation to the front-end where it is easier to scale with multiple servers.
If you have multiple applications interacting with the database, then you might not want to replicate and maintain the validation rules across multiple applications, so then the database might be the better place.
You might want a slicker input screen that doesn't just hit the user with validation warnings when they try to save a record; maybe you want to validate a field after data has been entered and it loses focus, or even as the user types, changing the font colour as validation fails/passes.
Also related to constraints is warning of suspect data. In my application I have hard constraints in the database (e.g. someone can't start a job before their date of birth), but then in the front end have warnings for data that is possibly correct but suspect (e.g. an eight-year-old starting a job).