How to handle db constraint violations in the user interface? - sql-server

We implement the majority of our business rules in the database, using stored procs.
I can never decide how best to pass data constraint violation errors from the database back to the user interface. The constraints I'm talking about are tied more to business rules than data integrity.
For example, a db error such as "Cannot insert duplicate key row" is the same as the business rule "you can't have more than one Foo with the same name". But we've "implemented" it at the most common sense location: as a unique constraint that throws an exception when the rule is violated.
Other rules such as "You're only allowed 100 Foos per day" do not cause errors per se, since they're handled gracefully by custom code, such as returning an empty dataset that the application code checks for and passes back to the UI layer.
And therein lies the rub. Our ui code looks like this (this is AJAX.NET webservices code, but any ajax framework will do):
WebService.AddFoo("foo", onComplete, onError); // ajax call to web service

function onComplete(newFooId) {
    if (!newFooId) {
        alert('You reached your max number of Foos for the day');
        return;
    }
    // update ui as normal here
}

function onError(e) {
    if (e.get_message().indexOf('duplicate key') >= 0) { // indexOf returns -1 when absent
        alert('A Foo with that name already exists');
        return;
    }
    // REAL error handling code here
}
(As a side note: I notice this is what stackoverflow does when you submit comments too quickly: the server generates an HTTP 500 response and the ui catches it.)
So you see, we are handling business rule violations in two places here. One of them (the unique constraint error) is handled as a special case inside the code that is supposed to handle real errors (not business rule violations), since .NET propagates Exceptions all the way up to the onError() handler.
This feels wrong. My options I think are:
1) catch the 'duplicate key violation' exception at the app server level and convert it to whatever the UI expects as the "business rule violated" flag,
2) preempt the error (say, with a "select name from Foo where name = #Name") and return whatever the app server expects as the "business rule violated" flag,
3) in the same ballpark as 2): leverage the unique constraint built into the db layer, blindly insert into Foo, catch any exceptions, and convert them to whatever the app server expects as the "business rule violated" flag,
4) blindly insert into Foo (like 3) and let that Exception propagate to the ui, plus have the app server raise business rule violations as real Exceptions (as opposed to 1). This way ALL errors are handled in the ui layer's onError() (or similar) code.
What I like about 2) and 3) is that the business rule violations are "thrown" where they are implemented: in the stored proc. What I don't like about 1) and 3) is I think they involve stupid checks like "if error.IndexOf('duplicate key')", just like what is in the ui layer currently.
Edit: I like 4), but most people say to use Exceptions only in exceptional circumstances.
So, how do you people handle propagating business rule violations up to the ui elegantly?

We don't perform our business logic in the database but we do have all of our validation server-side, with low-level DB CRUD operations separated from higher level business logic and controller code.
What we try to do internally is pass around a validation object with functions like Validation.addError(message,[fieldname]). The various application layers append their validation results on this object and then we call Validation.toJson() to produce a result that looks like this:
{
    success: false,
    general_message: "You have reached your max number of Foos for the day",
    errors: {
        last_name: "This field is required",
        mrn: "Either SSN or MRN must be entered",
        zipcode: "996852 is not in Bernalillo county. Only Bernalillo residents are eligible"
    }
}
This can easily be processed client side to display messages related to individual fields as well as general messages.
Regarding constraint violations we use #2, i.e. we check for potential violations before insert/update and append the error to the validation object.
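For illustration, a minimal sketch of such a validation accumulator in Java (the class and method names are hypothetical reconstructions of what the answer describes; serialization to the JSON shape above is assumed to go through a library such as Jackson):

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical validation accumulator: each application layer appends errors,
// and toMap() yields the { success, general_message, errors } structure shown above.
public class Validation {
    private String generalMessage;
    private final Map<String, String> fieldErrors = new LinkedHashMap<>();

    public void addError(String message) {                   // general, non-field error
        this.generalMessage = message;
    }

    public void addError(String message, String fieldName) { // field-level error
        fieldErrors.put(fieldName, message);
    }

    public boolean isSuccess() {
        return generalMessage == null && fieldErrors.isEmpty();
    }

    public Map<String, Object> toMap() {
        Map<String, Object> out = new LinkedHashMap<>();
        out.put("success", isSuccess());
        out.put("general_message", generalMessage);
        out.put("errors", fieldErrors);
        return out;
    }
}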

The problem is really one of a limitation in the architecture of your system. By pushing all logic into the database, you need to handle it in two places (as opposed to building a layer of business logic that links the UI with the database). Then again, the minute you have a layer of business logic you lose all the benefits of having logic in stored procs. Not advocating one or the other. They both suck about equally. Or don't suck. Depending on how you look at it.
Where was I?
Right.
I think a combination of 2 and 3 is probably the way to go.
By pre-empting the error you can create a set of procedures that can be called from the UI-facing code to provide detailed implementation-specific feedback to the user. You don't necessarily need to do this with ajax on a field-by-field basis, but you could.
The unique constraints and other rules in the database then become the final sanity check for all data. They can assume the data is good by the time it arrives, and throw Exceptions as a matter of course (the premise being that these procedures should always be called with valid data, and therefore invalid data is an Exceptional circumstance).

In defense of #4, SQL Server has a pretty orderly hierarchy of predefined error severity levels. Since, as you point out, it's best to handle errors where the logic lives, I'd be inclined to handle this by convention between the SP and the UI abstraction rather than adding a bunch of extra coupling. Especially since you can raise errors with both a value and a string.

A stored procedure may use the RAISERROR statement to return error information to the caller. This can be used in a way that permits the user interface to decide how the error will appear, while permitting the stored procedure to provide the details of the error.
RAISERROR can be called with a msg_id, severity and state, and with a set of error arguments. When used this way, a message with the given msg_id must have been entered into the database using the sp_addmessage system stored procedure. This msg_id can be retrieved as the Number property of the SqlException that will be raised in the .NET code calling the stored procedure. The user interface can then decide what sort of message or other indication to display.
The error arguments are substituted into the resulting error message much like printf in C. However, if you want to just pass the arguments back to the UI so that the UI can decide how to use them, simply make the error messages have no text, just placeholders for the arguments. One message might be '"%s"|%d' to pass back a string argument (in quotes) and a numeric argument. The .NET code could split these apart and use them in the user interface however you like.
RAISERROR can also be used in a TRY CATCH block in the stored procedure. That would allow you to catch the duplicate key error and replace it with your own error number that means "duplicate key on insert" to your code, and it can include the actual key value(s). Your UI could then display something like "Order number x already exists", where x was the key value supplied.
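To make that concrete, here is a minimal T-SQL sketch (the message number 50001, the Foo table and the @Name parameter are illustrative choices, and THROW requires SQL Server 2012 or later):

-- One-time setup: register a custom message with a printf-style placeholder.
EXEC sp_addmessage @msgnum = 50001, @severity = 16,
    @msgtext = N'Duplicate key on insert: "%s"';

-- In the stored procedure: trap the engine's duplicate-key error and
-- re-raise it as our well-known business-rule error number.
BEGIN TRY
    INSERT INTO Foo (Name) VALUES (@Name);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 2627  -- violation of a UNIQUE constraint
        RAISERROR (50001, 16, 1, @Name);
    ELSE
        THROW;  -- anything else is a real error; let it propagate
END CATCH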

I've seen lots of Ajax-based applications do a real-time check on fields such as username (to see if it already exists) as soon as the user leaves the edit box. That seems a better approach than leaving it to the database to raise an exception based on a db constraint - it is more proactive, since you have a real process: get the value, check whether it is valid, show an error if not, allow the user to continue if there is no error. So option 2 seems a good one.
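A sketch of that flow in the same JavaScript style as the question (the CheckFooName web-service method and the showFieldError helper are hypothetical):

// Proactive availability check when the user leaves the name field.
nameInput.addEventListener('blur', function () {
    WebService.CheckFooName(nameInput.value, function (isAvailable) {
        if (!isAvailable) {
            showFieldError(nameInput, 'A Foo with that name already exists');
        }
    }, onError);
});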

This is how I do things, though it may not be best for you:
I generally go for the pre-emptive model, though it depends a lot on your application architecture.
For me (in my environment) it makes sense to check for most errors in the middle (business objects) tier. This is where all the other business-specific logic takes place, so I try to keep as much of the rest of my logic here too. I think of the database as somewhere to persist my objects.
When it comes to validation, the easiest errors can be trapped in javascript (formatting, field lengths, etc.), though of course you never assume that those error checks took place. Those errors also get checked in the safer, more controlled world of server-side code.
Business rules (such as "you can only have so many foos per day") get checked in the server-side code, in the business object layer.
Only data rules get checked in the database (referential integrity, unique field constraints, etc.). We pre-empt checking all of these in the middle tier too, to avoid hitting the database unnecessarily.
Thus my database only protects itself against the simple, data-centric rules that it's well equipped to handle; the more variable, business-oriented rules live in the land of objects, rather than the land of records.

Related

Are there ways to perform membership tests in Pact? (performing membership tests for Pact tables)

Hello to the Kadena Pact developer community.
I was looking at some basic code examples, and as I wanted to play around with the functionality to develop a better grasp of it, I got curious about the following:
We see that some capabilities as defined in example code test for values within a row inside a table.
Is there a way one could simply test for a key and fail the predicate if the key is not present in the table?
Thank you for your insight.
While this may not be the most efficient way, I have found a solution to the question.
The Pact syntax for testing membership is the built-in function 'contains'.
Now we want to know whether a key exists within our table. To do this we can use the built-in function 'keys'.
This returns a list of strings (i.e. our keys) and lets us query, via 'contains', whether the key in question exists as a key in our table - or: is X a member of our table?
Since this requires us to get a complete list of keys just to see whether the particular key is within our table, this is where my concern regarding performance comes in.
I wanted to share this with everyone, regardless, but in certain circumstances it may be better to just let the transaction fail instead of enforcing membership explicitly like this.
Edit: I used some code previously to show how to achieve this, but it was faulty code.
If you need a membership test, you can do it within the context of an if statement, but not with (enforce ) as enforce will only allow "pure" expressions (i.e. expressions that can be evaluated on the spot and do not involve database lookups like the 'keys' function).
Enforcing a test outcome that requires a database read will return an error like
Error from (api.testnet.chainweb.com): : Failure: Database exception:
: Failure: Illegal database access attempt (keys)
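For reference, a minimal sketch of the working approach (the accounts table name is illustrative, and the function must live in a module that defines that table; as noted above, this only works in impure contexts such as an if branch, never inside enforce):

(defun account-exists:bool (id:string)
  ;; assumes (deftable accounts ...) in the enclosing module;
  ;; (keys) reads the entire key set, so this is linear in table size
  (contains id (keys accounts)))

;; usage, e.g. inside a defun body:
;; (if (account-exists "alice") "found" "not-found")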

Need to establish Database connectivity in DRL file

I need to establish Oracle database connectivity in Drools to fetch some data as and when required while executing the rules. How do I go about that?
You shouldn't do this. Instead, you should query your data out of the database first, then pass it into the rules as facts in working memory.
I tried to write a detailed answer about all the reasons you shouldn't do this, but it turns out that StackOverflow has a character limit. So I'm going to give you the high level reasons.
Latency
Data consistency
Lack of DB access hardening
Extreme design constraints for rules
High maintenance burden
Potential security issues
Going in order ...
Latency. Database queries aren't free. Regardless of how good your connection management is, you will incur overhead every time you make a database call. If you have a solid understanding of the Drools execution lifecycle and how it executes rules, and you design your rules to explicitly only query the database in ways that will minimize the number and quantity of calls, you could consider this an OK risk. A good caching layer wouldn't be amiss. Note that having to properly design your rules this way is not trivial, and you'll incur perpetual overhead in having to make sure all of your rules remain compliant.
(Hint: this means you must never ever call the database from the 'when' clause.)
Data consistency. A database is a shared resource. If you make the same query in two different 'when' clauses, there is no guarantee that you'll get back the same result. Again, you could potentially work around this with a deep understanding of how Drools evaluates and executes rules, and designing your rules appropriately. But the same issues from 'latency' will affect you here -- namely the burden of perpetual maintenance. Further the rule design restrictions -- which are quite strict -- will likely make your other rules and use cases less efficient as well because of the contortions you need to pull to keep your database-dependent rules compatible.
Lack of hardening. The Java code you can write in a DRL function is not the same as the Java code you can write in a Java class. DRL files are parsed as strings and then interpreted and then compiled; many language features are simply not available. (Some examples: try-with-resources, annotations, etc.) This makes properly hardening your database access extremely complicated and in some cases impossible. Libraries which rely on annotations like Spring Data are not available to you for use in your DRL functions. You will need to manage your connection pooling, transaction management, connection management (close everything!), error handling, and so on manually using a subset of the Java language that is roughly equivalent to Java 5.
This is, of course, specific to writing your code to access the database as a function in your DRL. If you instead implement your database access in a service which acts like a database access layer, you can leverage the full JDK and its features and functionality in that external service which you then pass into the rules as an input. But in terms of DRL functions, this point remains a major concern.
Rule design constraints. As I mentioned previously, you need to have an in-depth understanding of how Drools evaluates and executes rules in order to write effective rules that interact with the database. If you're not aware that all left hand sides ("when" clauses) are executed first, then the "matches" ordered by salience, and then the right hand sides ("then" clauses) executed in order sequentially .... well you absolutely should not be trying to do this from the rules. Not only do you as the initial implementor need to understand the rules execution lifecycle, but everyone who comes after you who is going to be maintaining your rules needs to also understand this and continue implementing the rules based on these restrictions. This is your high maintenance burden.
As an example, here are two rules. Let's assume that "DataService" is a properly implemented data access layer with all the necessary connection and transaction management, and it is passed into working memory as a fact.
rule "Record Student Tardiness"
when
$svc: DataService() // data access layer
Tardy( $id: studentId )
$student: Student($tardy: tardyCount) from $svc.getStudentById($id)
then
$student.setTardyCount($tardy + 1)
$svc.save($student)
end
rule "Issue Demerit for Excessive Tardiness"
when
$svc: DataService() // data access layer
Tardy( $id: studentId )
$student: Student(tardyCount > 3) from $svc.getStudentById($id)
then
AdminUtils.issueDemerit($student, "excessive tardiness")
end
If you understand how Drools executes rules, you'll quickly realize the problems with these rules. Namely:
we call getStudentById twice (latency, consistency)
the changes to the student's tardy count are not visible to the second rule
So if our student, Alice, has 3 tardies recorded in the database, and we pass in a new Tardy instance for her, the first rule will hit and her tardy count will increment and be saved (Alice will have 4 tardies in the database.) But the second rule will not hit! Because at the time the matches are calculated, Alice only had 3 tardies, and the "issue demerit" rule only triggers for more than 3. So while she has 4 tardies now, she didn't then.
The solution to the second problem is, of course, to call update to let Drools know to reevaluate all matches with the new data in working memory. This of course exacerbates the first issue -- now we'll be calling getStudentById four times!
Finally, the last problem is the potential for security issues. This really depends on how you implement your queries, but you'll need to be doubly sure you're not accidentally exposing any connection configuration (URL, credentials) in your DRLs, and that you've properly sanitized all query inputs to protect yourself against SQL injection.
The right way to do this, of course, is not to do it at all. Call the database first, then pass it to your rules.
As an example, let's say we have a set of rules which is designed to determine if a customer purchase is "suspicious" by comparing it to trends from the previous 3 months' worth of purchases.
// Assume this class serves as our data access layer and does proper connection
// and transaction management. It might be something like a Spring Data JPA repository,
// or something from another library; the specifics are not relevant.
private PurchaseService purchaseService;

public boolean isSuspiciousPurchase(Purchase purchase) {
    List<Purchase> previous = purchaseService.getPurchasesForCustomerAfterDate(
        purchase.getCustomerId(),
        LocalDate.now().minusMonths(3));

    KieBase kBase = ...;
    KieSession session = kBase.newKieSession();
    session.insert(purchase);
    session.insert(previous);
    // insert other facts as needed
    session.fireAllRules();
    // ...
}
As you can see, we call the database and pass the result into working memory. Then we can write the rules such that they do work against that existing list, without needing to interact with the database at all.
If our use case requires modifying the database -- e.g. saving updates -- we can pass those commands back to the caller to be invoked after fireAllRules completes. Not only does that keep us from having to interact with the database in the rules, it gives us better control over transaction management (you can probably group the updates into a single transaction, even if they originally came from multiple rules). And since we don't need to understand anything about how Drools evaluates and executes rules, it'll be a little more robust in case a rule with a database "update" is triggered twice.
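One possible shape for that deferral, continuing the earlier example (a sketch; it assumes the DRL declares a matching global, e.g. "global java.util.List dbCommands", and the names are illustrative):

// Rules append work to the list instead of touching the database directly;
// the caller executes everything in one transaction after evaluation finishes.
List<Runnable> dbCommands = new ArrayList<>();
session.setGlobal("dbCommands", dbCommands);
session.insert(purchase);
session.fireAllRules();

// Caller-owned transaction boundary:
for (Runnable cmd : dbCommands) {
    cmd.run();
}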
You can use a function like the one below to get details from the DB. Here I have written the function in the DRL file, but it's suggested to put such code in a Java class and call the specific method from the DRL file.
function String ConnectDB(String connectionClass, String url, String user, String password) {
    java.sql.Connection con = null;
    try {
        Class.forName(connectionClass);
        con = java.sql.DriverManager.getConnection(url, user, password);
        java.sql.Statement st = con.createStatement();
        java.sql.ResultSet rs = st.executeQuery("select * from Employee where employee_id=199");
        rs.next(); // advance to the first (and only) row
        return rs.getString("employee_name");
    } catch (Exception e) {
        throw new RuntimeException(e); // surface driver/SQL failures to the caller
    } finally {
        if (con != null) try { con.close(); } catch (Exception ignore) {}
    }
}

rule "DBConnection"
when
    person: PersonPojo(name == ConnectDB("com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/root", "root", "redhat1!"))
    .. ..
then
    . . ..
end

How to handle NOT NULL SQL Server columns in Access forms elegantly?

I have an MS Access front-end linked to a SQL Server database.
If some column is required, then the natural thing to do is to include NOT NULL in that column's definition (at the database level). But that seems to create problems on the Access side. When you bind a form to that table, the field bound to that column ends up being pretty un-user-friendly. If the user erases the text from that field, they will not be able to leave the field until they enter something. Each time they try to leave the field while it's blank, they will get this error:
You tried to assign the Null value to a variable that is not a Variant data type.
That's a really terrible error message - even for a developer, let alone the poor user. Luckily, I can silence it or replace it with a better message with some code like this:
Private Sub Form_Error(DataErr As Integer, Response As Integer)
    If DataErr = 3162 Then
        Response = acDataErrContinue
        ' <check which field is blank>
        MsgBox "<some useful message>"
    End If
End Sub
But that's only a partial fix. Why shouldn't the user be able to leave the field? No decent modern UI restricts focus like that (think web sites, phone apps, desktop programs - anything, really). How can we get around this behavior of Access with regard to required fields?
I will post the two workarounds I have found as an answer, but I am hoping there are better ways that I have overlooked.
Rather than changing backend table definitions or trying to "trick" Access with out-of-sync linked table definitions, just change the control(s) for any NOT NULL column from bound to unbound (i.e. clear the ControlSource property and change the control name--by adding a prefix, for example--to avoid annoying collisions with the underlying field name).
This solution will definitely be less "brittle", but it will require you to manually add binding code to a number of other Form events. To provide a consistent experience as other Access controls and forms, I would at least implement Form_AfterInsert(), Form_AfterUpdate(), Form_BeforeInsert(), Form_BeforeUpdate(), Form_Current(), Form_Error(), Form_Undo().
P.S. Although I do not recall seeing such a poorly-worded error message before, the overall behavior described is identical for an Access table column with Required = True, which is the Access UI equivalent of NOT NULL column criteria.
I would suggest, if you can, simply changing all tables on the SQL Server side to allow nulls for those text columns. For bit and number columns, default them to 0 on the SQL Server side. Our industry tends to suggest avoiding nulls, and many a developer ALSO wants to avoid nulls, so they un-check "allow nulls" on the SQL Server side. The problem is you can never run away from and avoid tons of nulls anyway. Take a simple query of, say, customers and their last invoice number + invoice total. It would of course be VERY common to include customers that have not bought anything in that list (customers without invoices yet, or any of a gazillion possible cases where the child record(s) don't yet exist). I find about 80% or MORE of my queries in a typical application are LEFT joins. That means any parent record without child records will return ALL of those child columns as null. You are going to work with, see, and HAVE to deal with tons and tons of nulls in an application EVEN if your table designs NEVER allow nulls. You cannot avoid them - you simply cannot run away from those nasty nulls.
Since one will see lots of nulls in code and in any SQL query (those VERY common left joins), by far and away the best solution is to simply set all text columns to allow nulls. I can also state that if an application designer does not put their foot down and make a strong choice to ALWAYS use nulls, then the creeping in of both NULL and ZLS (zero-length string) data is a much worse issue to deal with.
The problem and issue become very nasty and painful if one does not have control or cannot make this choice.
At the end of the day, Access simply does not work well with SQL Server and the choice of allowing ZLS columns.
For a migration to SQL Server (and I have been doing them for 10+ years), it is without question that going with nulls for all text columns is by far and away the easiest choice here.
So I recommend that you not attempt to code around this issue, but simply change all your SQL tables to default to, and allow, nulls for empty columns.
The result of the above may require some minor modifications to the application, but the pain and effort will be far less than attempting to fix or code around Access's poor support (actually non-support) of ZLS columns when working with SQL Server.
I will also note that this is not a great suggestion; it is simply the best suggestion given the limitations of how Access works with SQL Server. Some database systems (Oracle, for example) treat a zero-length string as a null, and thus you don't have to care about, say, this:
select * from tblCustomers where (City is null) or (City = '')
As the above shows, the instant you allow both ZLS and nulls into your application is the SAME instant you have created a huge monster mess. And the scholarly debate about nulls being un-defined is simply a debate for another day.
If you are developing with Access + SQL Server, then one needs to adopt a standard approach. I recommend that approach simply be that all text and date columns are set to allow nulls, and that number and bit columns default to 0.
This comes down to which is less pain and work. Either:
Attempt some MAJOR modifications to the application, say, un-binding text columns (that can be a huge amount of work).
Or
Simply assume and set all text columns to allow nulls. It is the lesser evil in this case, and one has to conform to the bag of tools that has been handed to you.
So I don't have a workaround, only a path and course to take that will result in the least amount of work and pain. That least-pain road is to go with allowing nulls. This suggestion will only work, of course, if one can make that choice.
The two workarounds I have come up with are:
Don't make the database column NOT NULL and rely exclusively on Access forms for data integrity rather than the database. Readers of that table will be burdened with an ambiguous column that will not contain nulls in practice (as long as the form-validation code is sound) but could contain nulls in theory due to the way the column is defined within the database. Not having that 100% guarantee is bothersome but may be good enough in reality.
Verdict: easy but sloppy - proceed with caution
Abuse the fact that Access' links to external tables have to be refreshed manually. Make the column NULL in SQL Server, refresh the link in Access, and then make the column NOT NULL again in SQL Server - but this time, don't refresh the link in Access.
The result is that Access won't realize the field is NOT NULL and, therefore, will leave the user alone. They can move about the form as desired without getting cryptic error 3162 or having their focus restricted. If they try to save the form while the field is still blank, they will get an ODBC error stemming from the underlying database. Although that's not desirable, it can be avoided by checking for blank fields in Form_BeforeUpdate() and providing the user with an intelligible error message instead.
Verdict: better for data integrity, but more of a pain to maintain, sort of hacky/astonishing, and brittle: if someone refreshes the table link, the dreaded error and focus restriction return. Then again, that worst-case scenario isn't catastrophic, because the consequence is merely user annoyance, not data-integrity problems or a broken application.
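For the second workaround, the Form_BeforeUpdate() check might look like this (a sketch; the control and field names are illustrative):

Private Sub Form_BeforeUpdate(Cancel As Integer)
    ' Validate required fields before the ODBC layer ever sees the record.
    If Len(Nz(Me.txtLastName, "")) = 0 Then
        MsgBox "Last Name is required.", vbExclamation
        Me.txtLastName.SetFocus
        Cancel = True   ' abort the save; focus stays unrestricted until then
    End If
End Sub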

Best practices for handling unique constraint violation at UI level

While working on my application I came across a situation in which there is a likely chance of a unique constraint violation. I have the following options:
Catch the exception and throw it back to the UI.
At the UI, check for the exception and show an appropriate error message.
A different idea: check in advance for the existence of the given unique value before starting the whole operation.
My question is, what might be the best practice to handle such a situation? Currently we are using a combo of Struts2 + Spring 3.x + Hibernate 3.x.
Thanks in advance
Edit:
In case we decide to let the database give the final verdict, we will handle the exception and propagate it to the UI, showing a message as per the exception. What do you suggest: should we propagate the same exception (org.hibernate.exception.ConstraintViolationException) to the UI layer, or should we create a separate exception class for this? Propagating the Hibernate exception to the UI means polluting the UI classes with Hibernate-specific imports and other things.
The best way to answer this question is to split it into two ideas.
1) Where is the unique constraint ultimately enforced? In this case (from your question) the answer is the database.
2) How can we make the user experience better by checking the constraint in other places?
Because the database will ultimately make the decision within a transaction, there is no useful check you can make ahead of time. Even if you check before inserting, it is possible (though usually highly unlikely) that another user inserts that value in the time between the check and the actual insert.
So let the database decide and bubble the error back up to the UI.
Note that this is not always true for all constraints. When checking foreign keys for small tables (such as a table of US States or Countries or Provinces), the UI provides the user with a selection list, which forces the user to pick an allowed value. In this case the UI really is enforcing the constraint. Though of course even in that case the database must make the final enforcement, to protect against malicious hand-crafted requests to the web layer that are trying deliberately to put in invalid values.
So, for some constraints, yes, let the UI help. But for unique constraints, the UI really cannot help because the database is the final authority, and there is no useful check you can make that you can guarantee will still be true when you make the insert.
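Regarding the question's edit: a minimal sketch of wrapping the Hibernate exception so the UI stays persistence-agnostic (the DuplicateEntityException class and the job entity are hypothetical names):

// Application-level exception; the UI can catch this without importing Hibernate.
public class DuplicateEntityException extends RuntimeException {
    public DuplicateEntityException(String message, Throwable cause) {
        super(message, cause);
    }
}

// In the DAO/service layer:
try {
    session.save(job);
    session.flush();  // force the INSERT so a violation surfaces here
} catch (org.hibernate.exception.ConstraintViolationException e) {
    throw new DuplicateEntityException("A Job with that name already exists", e);
}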
Depends on the UI and if the user can do anything about it, as well as what else is going on in the system. I usually check before attempting an insert, especially if there's any sort of transactional logic, or other inserts happening after this one. If there's nothing like that, and the user can just pick a different number to put in, then catching the exception and displaying an error message might be just fine.

Should I check for DB constraints in code or should I catch exceptions thrown by DB

I have an application that saves data into a table called Jobs. The Jobs table has a column called Name which has a UNIQUE constraint. The Name column is not the PRIMARY KEY. I wonder if I should check for duplicate entries myself before trying to save/update a new entry, or if it's better to wait for an exception thrown by the data access layer. I'm using NHibernate for this app, if that's of any importance.
Thanks to everybody for the great input.
I have found one more reason why I should validate in code and not just wait for an exception being thrown (and caught by my code). It seems that NHibernate will only throw an NHibernate.Exceptions.GenericADOException which is not very informative regarding the cause of the exception in this case. Or am I missing an aspect of NHibernate here?
The answer is: both.
If your database has constraints it can guarantee certain invariants about the data, such as uniqueness. This helps in several ways:
- If you have a bug in your application, violating the constraint will flag something that might otherwise not be noticed.
- Other users of the database can assume more about the behaviour of the data, as the DBMS enforces invariants.
- The database protects itself from incorrect updates that violate the constraints. If you find you have some other system or interface populating the database down the track, the constraints enforced by the database mean that anything caught by the constraints won't (or at least is less likely to) break your system.
Applications and databases live in a M:M relationship in any but the most trivial cases. The application should still have the appropriate data and business rule validations but you should still not plan for your application being the only customer of the data. Work in data warehousing for a few years and you'll see the effects of applications designed by people with this mindset.
If your design is good (both database and BL), the database shouldn't have any constraints that wouldn't be dealt with in the BL - i.e. you shouldn't be presenting the database with inconsistent data. But nothing is perfect.
I've found that confining the database to data consistency constraints lets me handle all BL validation in procedural code, and the only cases where I experience database exceptions are design and coding errors which can (and should be) fixed.
In your case, checking the name for uniqueness is data content validation, properly handled in code. Which presumably catches the error nearest the point of commission, where you hopefully have friendlier UI resources to call on without introducing undesirable coupling between abstractions.
I would leave that work entirely to the database; your code should focus on catching and properly handling the exception.
Reasons:
- Performance: the database will be highly optimized to enforce constraints in a fast and efficient way. You won't have time to optimize your code as well.
- Maintainability: if the constraints change in the future, you won't have to modify your code, or perhaps you will just have to add a new catch{}. If a constraint is dropped, you won't have to touch your code at all.
If you are going to check the constraints yourself, do it in the data access layer. Nothing above that layer should know anything about your database or its constraints.
In most cases I'd say leave it to the DAL to catch DB-originated exceptions. But in your specific case, I think we're talking about basic input validation. I'd opt for a name availability check call to the database, before submitting the whole form.
You should definitely check for any exception thrown by the data access layer. The problem with checking whether a record with the same value already exists is that it requires you to lock the table against modifications from the check until you insert the new record, to prevent race conditions.
It is generally advisable to check for exceptions/errors, even if you have checked everything yourself before. There is almost always something that can go wrong or which you haven't considered in your code but is enforced by the database.
Edit: If I understand the question right, it is not about if the constraint should be enforced by the database or not, but how to deal with it in the application code. Of course you should always set up all constraints in the database to prevent bad data entering your database.
The question that you need to answer is:
"Do I need to present the user with nice messages". Example: There is already a Job with the name TestJob1.
If the answer is No, just catch the error and present a common message
If the answer is Yes, keep reading
If you catch the error after the insert, there isn't enough information to present the right message (at least in a DB-agnostic way)
On the other hand, there can be race conditions and you can have simultaneous transaction trying to insert the same data, therefore you need the DB constraint
An approach that works well (see the sketch below) is:
- check beforehand, so you can present a nice message
- catch the exception and present a common error message (assuming this won't happen very frequently)
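A sketch of that combined approach in JDBC-flavoured Java (the table, column and message text are illustrative; the question's stack is NHibernate, so treat this as a sketch of the pattern rather than a drop-in implementation):

import java.sql.*;

public class JobDao {
    /** Returns null on success, or a user-facing error message. */
    public String insertJob(Connection con, String name) throws SQLException {
        // 1) Pre-check: the common case gets a friendly, specific message.
        try (PreparedStatement check =
                 con.prepareStatement("select 1 from Jobs where Name = ?")) {
            check.setString(1, name);
            try (ResultSet rs = check.executeQuery()) {
                if (rs.next()) {
                    return "A Job named '" + name + "' already exists";
                }
            }
        }
        // 2) Still catch the violation: a concurrent insert can win the race,
        //    so the database constraint remains the final authority.
        try (PreparedStatement insert =
                 con.prepareStatement("insert into Jobs (Name) values (?)")) {
            insert.setString(1, name);
            insert.executeUpdate();
            return null;
        } catch (SQLIntegrityConstraintViolationException e) {
            return "A Job named '" + name + "' already exists";
        }
    }
}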
Personally I'd catch the exception. It's much simpler and requires much less code.
The inner exception of the GenericADOException will tell you why the database action failed. You can catch the OracleException / MSSQLException / [InsertCustomExceptionHere] and handle the error from that message. If you want to pass this back up to the front end (assuming the user is the one who entered duplicate data) you might want to wrap it in a custom exception first so you don't couple your front end to your database. You don't really want to be passing RDBMS specific exceptions around.
I disagree with checking the db for uniqueness before doing an insert; round-tripping to the database twice isn't very efficient and certainly isn't scalable if you have a high volume of user traffic.
