How to do database unit testing?

I have heard that when developing an application that uses a database, you should do database unit testing.
What are the best practices in database unit testing? What are the primary concerns when doing DB unit testing and how to do it "right"?

What are the best practices in database unit testing?
The DbUnit framework (a testing framework that lets you put a database into a known state and perform assertions against its content) has a page listing database testing best practices that, in my experience, hold true.
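As a rough illustration of the kind of harness DbUnit provides, here is a minimal setup sketch; the H2 connection URL and the users-dataset.xml dataset file are assumptions for the example, not anything prescribed by DbUnit:

import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.jupiter.api.BeforeEach;

class UserDaoDbTest {
    private IDatabaseConnection dbunit;

    @BeforeEach
    void putDatabaseInKnownState() throws Exception {
        // An in-memory H2 database keeps this sketch self-contained (and fast).
        Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
        dbunit = new DatabaseConnection(jdbc);
        // CLEAN_INSERT clears the tables named in the dataset, then inserts its
        // rows, so every test starts from exactly the same data.
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("users-dataset.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(dbunit, dataSet);
    }
}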
What are the primary concerns when doing DB unit testing?
Creating an up-to-date schema and managing schema changes
Setting up data (reference data, test data) and maintaining test data
Keeping tests independent
Allowing developers to work concurrently
Speed (tests involving database are typically slower and will make your whole build take more time)
and how to do it "right"?
As hinted, follow known good practices and use dedicated tools/frameworks:
Prefer an in-memory database if possible (for speed)
One schema per developer is a must (to allow concurrent work)
Use a "database migration" tool (à la RoR) to manage schema changes and bring a schema up to the latest version
Build or use a test harness that can put the database in a known state before each test and perform assertions against the data after execution (or run tests inside a transaction that you roll back at the end of the test; sketched below).
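A minimal JUnit 5 sketch of that transaction-per-test harness, assuming the concrete test class supplies a DataSource for the test database (all names here are illustrative):

import java.sql.Connection;
import javax.sql.DataSource;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;

abstract class TransactionalDbTest {
    protected Connection connection;

    @BeforeEach
    void beginTransaction() throws Exception {
        connection = dataSource().getConnection();
        connection.setAutoCommit(false); // everything the test does stays uncommitted
    }

    @AfterEach
    void rollbackTransaction() throws Exception {
        connection.rollback(); // discard the test's changes, whatever the outcome
        connection.close();
    }

    // Concrete test classes supply the connection to their own test database.
    protected abstract DataSource dataSource();
}

Each test then runs inside its own transaction and leaves no trace in the shared schema.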

A list of items that should be reviewed and considered when starting with database unit testing:
Each tester needs a separate database, in order to avoid interfering with the activities of other testers/developers
Have an easy way of creating a database to be tested (this is related to having the SQL Server database under version control); this is especially useful when trying to find what went wrong if some tests fail
Focus on specific areas and create tests for a single module instead of covering everything at once; adding tests granularly is a good way to be efficient
Make sure to provide as many details as possible when a test fails, to allow easier debugging
Use one and the same set of test data for all tests
If tests are implemented using the tSQLt framework, the unit testing process can become complicated when dealing with many databases from multiple SQL Server instances
In order to maintain, execute, and manage unit tests directly from SQL Server Management Studio, ApexSQL Unit Test can be used as a solution

Take a look at this link. It goes over some of the basics of creating unit tests for stored procedures in SQL Server, as well as the different types of unit tests and when you should use them. I'm not sure what DBMS you are using, but obviously this article is geared towards SQL Server.
Stolen from the article:
Feature Tests
The first and likely most prevalent class of database unit test is a feature test. In my mind, feature tests test the core features (or APIs, if you will) of your database from the database consumer's perspective. Testing a database's programmability objects is the mainline scenario here. So, testing all the stored procedures, functions, and triggers inside your database constitutes feature tests in my mind. To test a stored procedure, you would execute the stored procedure and verify that either the expected results were returned or the appropriate behavior occurred. However, you can test more than just these types of objects. You can imagine wanting to ensure that a view, for example, returns the appropriate calculation from a computed column. As you can see, the possibilities in this realm are large.
Schema Tests
One of the most critical aspects of a database is its schema, and testing to ensure that it behaves as expected is another important class of database unit tests. Here, you will often want to ensure that a view returns the expected set of columns of the appropriate data type in the appropriate order. You might want to ensure that your database does, in fact, contain the 1,000 tables that you expect.
Security Tests
In today's day and age, the security of the data that is stored within the database is critical. Thus, another important class of database unit tests consists of those that test the database's security. Here, you will want to ensure that particular users exist in your database and that they are assigned the appropriate permissions. You will often want to create negative tests that attempt to retrieve data from restricted tables or views and ensure that the access is appropriately denied.
Stock-Data Tests
Many databases contain stock data, or seed data. This data changes infrequently and is often used as lookup data for applications or end users. ZIP codes and their associated cities and states are great examples of this kind of data. Therefore, it is useful to create tests to ensure that your stock data does, in fact, exist in your database.
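To make the feature-test idea concrete, here is a hedged sketch of exercising a stored procedure through JDBC; the procedure GetCustomerOrders, the seeded customer id, and the connection details are all assumptions for the example, not taken from the article:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class GetCustomerOrdersFeatureTest {
    @Test
    void returnsOrdersForSeededCustomer() throws Exception {
        try (Connection c = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=testdb", "sa", "secret");
             CallableStatement call = c.prepareCall("{call GetCustomerOrders(?)}")) {
            call.setInt(1, 42); // customer 42 is assumed to be seeded by the fixture
            try (ResultSet rs = call.executeQuery()) {
                assertTrue(rs.next(), "expected at least one order row");
                assertEquals(42, rs.getInt("customer_id"));
            }
        }
    }
}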

I'm glad you asked about Unit Testing, and not testing in general.
Databases have many features that need to be tested. Some examples:
Data types/sizes/character sets (try inserting a Swedish name, long URLs, or numbers from the real world, and see if your column definitions are OK)
Triggers
Constraints (foreign keys, uniqueness, ...), exercised in the sketch below
Views (check that data is correctly included/excluded/transformed)
Stored Procedures
UDFs
Permissions
...
This is useful not only when you change something in your database, but also when you upgrade your DBMS or change something in your settings.
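For example, the constraints item above can be covered by a negative test that deliberately violates a foreign key; the orders/customers schema and the H2 URL are invented for the sketch:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertThrows;

class ForeignKeyConstraintTest {
    @Test
    void rejectsOrderForUnknownCustomer() throws Exception {
        // assumes the schema (customers, and orders with an FK to customers)
        // has already been created by the test setup
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "")) {
            assertThrows(SQLException.class, () -> {
                try (PreparedStatement ps = c.prepareStatement(
                        "INSERT INTO orders (id, customer_id) VALUES (1, 999999)")) {
                    ps.executeUpdate(); // customer 999999 does not exist; the FK must reject it
                }
            });
        }
    }
}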
Generally, integration testing is done instead. This means that a test suite in a programming language like PHP or Java is created, and the tests issue some queries. But if something fails, or there are exceptions, it's harder to understand the problem, for two reasons:
The problem could be in your PHP code, or in PHP configuration, or in the network, or...
The SQL statements are harder to read and modify, if they are embedded in another programming language.
So, in my opinion, for complex databases you need to use a unit testing framework that is written in SQL (using stored procedures and tables). You have to choose it carefully, because such tools are not widely used (and thus not widely tested).
For example, if you use MySQL I know these tools:
STK/Unit http://stk.wikidot.com/stk-unit
utMySQL http://utmysql.sourceforge.net/

I use JUnit/NUnit/etc. and code up database unit tests in Java or C#. These can then run on an integration server, perhaps using a schema separate from the test database.
The latest Oracle SQL Developer comes with a built-in unit testing framework. I had a look at it but would NOT use it: it uses a GUI to create and run tests and stores all the tests in the database, so it is not easy to put test cases under version control. There are probably other testing frameworks out there; I imagine they might be specific to your database.
Good practices are similar to regular unit tests:
put the tests under source control
make tests that run fast; don't test too much at once
make your tests reproducible

Take a look at the DBTestDriven framework. It works great for us. Download it from GitHub or their website.

For JVM development, unit tests can benefit from JDBC abstraction: once you know which JDBC data a DB access produces, that data can be 'replayed'.
Thus a DB access case can be 'reproduced' for testing without the target DB: no test/data isolation complexity, and easier continuous integration.
My framework Acolyte is helpful in this way (including a studio GUI tool to 'record' DB results): https://github.com/cchantep/acolyte

Unit testing lets you verify something once you have written it, and then, when it needs to change, verify that all of the previously passing tests continue to pass. This holds whether you are the only database person in your company or one of 1,000, and whether you have one database or 1,000 databases.
That will give you confidence that your changes will be more accurate and less likely to break other things that rely on your code. And after all, that will inevitably help you sleep at night, enjoy your vacations more and be more confident in what you develop.
In database testing, QA engineers verify views and triggers, and they can create a blank instance of the database to get started with minimal building blocks.
Here is how testers can perform unit testing on databases:
The first step in the process is to create a blank database instance. You can start modifying items and adding new ones until it has everything necessary for your test;
It would be best if we could automate testing procedures to ensure that the database is in a known state before every test run and verify its current condition after each one;
You can also look for problems like missing references that can happen due to accidentally removing or renaming objects, which is often the result of a failed database update;
A test should be conducted to ensure that the database is restored when finished with testing.
Now, databases differ significantly from application code: they require heightened precision and must be tested periodically to avoid breaches of data integrity.
What we should remember while performing database unit testing:
Unit tests can be automated, and you can script a set of database operations with the same ease as executing code;
Unit tests are great for testing individual triggers, views, and sprocs. You can test the behavior of each one to make sure that it works just as you want it to;
Unit tests are a fantastic way to create an executable representation of your database testing operations so that you can quickly test and validate new code before rolling it out;
Unit tests can produce consistent results; if every input is mapped as part of the test, you will have a clear understanding of what outputs to expect when everything goes according to plan;
Unit tests should be independent of one another. You will have to manage some setup and teardown, but the test should not have any relationship with other unit tests.
There are many test management and test automation tools that can be helpful when performing unit testing for SQL databases. I use Data Factory, aqua ALM, Mockup Data and DTM Data Test Generator.

Related

What's the better practice for testing code which relies on a DB? Mocks and stubs? Or seeded data?

For what seems like forever, I've read that when testing you should use a mock database object or repository. No reason to test someone else's DB code, right? No need to have your code actually mess with data in a database, right?
Now lately I see tests which set up a database (possibly in-memory) and seed it with test data, just for running tests against.
Is one approach better than the other? If tests with seeded data are worth running, should one even bother with mock database connections? If so, why?
There are a lot of ways to test code that interacts with a database.
The repository pattern is one method of creating a facade over the data access code. It makes it easy to stub/mock out the repository during a test. This is useful when a piece of business logic needs to be tested in isolation and dummy values can help exercise different branches of the code.
Fake databases (in-memory or local files) are less common because there needs to be some "middle-ware" that knows how to read data from a real database and a fake database. It usually just makes sense to have a repository over the whole thing and mock out the repository. This approach is more feasible in some older systems where there is an existing infrastructure. For instance, you use a real database and then switch over to a fake database for test performance reasons.
Another option is using an actual database, populating it with bogus data. This approach is slower and requires writing a lot of scripts. However, this approach is fairly common as part of integration testing. I used to write a lot of "transactional" tests where I used a database transaction to rollback changes after running my tests. I'd write one large test that collectively performed all of my CRUD operations on a particular table.
The last approach makes sense when you are testing the code that converts SQL results into your objects. Your SQL could be invalid (or you use the wrong stored procedure name). It is also easy to forget to check for nulls, perform an invalid cast, etc. when mapping to objects. This code should be tested at some point. An ORM can help alleviate a lot of this testing.
I am typically pretty lazy these days. I use repositories. Most of my data layer code is touched when performing actual integration tests (hitting a real database with dummy data), so I don't bother testing individual database calls (no more transactional tests). I also use ORMs for doing most of my SELECT statements. I think a lot of the industry is moving towards this more lazy approach.
You should use both.
The business services should rely on DAOs and be tested by mocking the DAOs. This allows for fast, easy-to-implement, easy-to-maintain tests.
The DAOs' sole responsibility is to contain database access code (queries, etc.), and they should also be tested. So you should use a test database, with test data, and check that their queries return/save what they're supposed to return/save.
I'm not a big fan of using an in-memory database, different from the one used in production. The behavior of some queries, constraints, etc. will be different from database to database, and you'd better be sure that the code will work on the production database, and not in an in-memory database used only by tests.
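As a minimal sketch of the first half of this approach, here is a business-service test with a mocked DAO using Mockito; the UserDao/UserService types are invented for the example:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.jupiter.api.Test;

interface UserDao {
    boolean isActive(String userName);
}

class UserService {
    private final UserDao dao;
    UserService(UserDao dao) { this.dao = dao; }
    boolean canLogIn(String userName) { return dao.isActive(userName); }
}

class UserServiceTest {
    @Test
    void inactiveUserCannotLogIn() {
        UserDao dao = mock(UserDao.class);
        when(dao.isActive("alice")).thenReturn(false); // stubbed; no database involved
        assertFalse(new UserService(dao).canLogIn("alice"));
    }
}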

MSTest unit tests and database access without touching the actual database

In my code I interact with a database (not part of my solution file). The database is owned by a separate team of DBAs, and the code we developers write is only allowed to access stored procs; however, we have full view of the database's procs, tables, and columns (its definition). For code that depends on data, I currently write unit tests that dummy up data in the tables (and tear down/remove those rows after the unit test is done), so I can run unit tests to exercise my code that interacts with the DB. All of the code to do this is in the test file (especially in the ClassInitialize() and ClassCleanup() functions). However, I've been given some grief from my new coworkers, who call my style of unit tests "destructive" because I read/write to the dev database, inserting and removing rows. At the time we write the unit tests, the database design is generally not stable, so we can often find issues in the stored proc code before we unleash the QA department on our programs (which saves resources). They all tell me there's a way to clone the database into memory at the time the MSTest unit tests are run, but they don't know how to do it. I've researched around the web and cannot find a way to do what my coworkers need me to do.
Can someone tell me for sure whether or not this can happen in the environment I shown above? If so, can you point me in the right direction?
Do you have SQL scripts that can be used to create your database? You should have, and they should be under version control. If so, then you can do the following:
In your test setup code:
create a 'temporary' database using the SQL scripts. Use a unique name, for example unitTestDatabase_[timestamp].
set up the data you require for your test in the test database, ideally using public API functions (e.g. CreateUser, AddNewCustomer); where the required API does not exist, use SQL commands. Using the API to set up test data makes the tests more robust against changes to the low-level implementation (i.e. the database schema), which is one reason we write unit tests: to ensure that changes to the implementation do not break functionality.
run your unit tests, using dependency injection to pass the test database connection string from the test code into the code under test.
and in your test teardown code, delete the database. This should ideally be done using your database uninstall scripts, which should also be under version control.
You can control how often you want to create a unit test database: e.g. per test project, test class or test method, or a combination, by creating the database in either an [AssemblyInitialize], [ClassInitialize] or [TestInitialize] method.
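A rough sketch of steps 1 and 4, shown in JUnit terms since the earlier answers use Java (the MSTest equivalents are the [ClassInitialize]/[ClassCleanup] methods mentioned in the question); the server address, credentials, and the schema-script step are illustrative assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;

class TemporaryDatabaseTest {
    // Unique name, as suggested above: unitTestDatabase_[timestamp]
    static final String DB_NAME = "unitTestDatabase_" + System.currentTimeMillis();

    @BeforeAll
    static void createTestDatabase() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:sqlserver://localhost", "sa", "secret");
             Statement s = c.createStatement()) {
            s.executeUpdate("CREATE DATABASE " + DB_NAME);
            // ...then run the version-controlled schema scripts against DB_NAME
        }
    }

    @AfterAll
    static void dropTestDatabase() throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:sqlserver://localhost", "sa", "secret");
             Statement s = c.createStatement()) {
            s.executeUpdate("DROP DATABASE " + DB_NAME);
        }
    }
}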
This is a technique we use with great success. The advantages are:
every time we run the unit tests, we are testing that our database installation scripts work together with the code.
test isolation: each test only affects its own test database, and it doesn't matter if the rollback code goes wrong; you are not touching anyone else's data.
Confidence in the code. That is, because we are using a real database, the unit tests give me more confidence that the code works than if I was mocking the database. Of course, this depends on how good your suite of higher level integration/component tests are.
Disadvantages:
the unit tests are dependent on an external system (the DBMS). You will need to find the name of a DBMS in your test setup code. This can be done by using a config file or by looking at run time for a running local DBMS.
Tests may be slowed down by the database installation scripts. In our experience, the tests still run quickly enough, and there are plenty of opportunities to optimize. We run our test suite of approx. 400 unit tests in approx. 1 minute, which includes creating 5 separate databases on a local installation of SQL Server 2008.
If you can create a 'seam' between the business logic code and your data access layer you should be ok. Use interfaces to represent the contract your DAL exposes to your business logic and then either write your own set of Fake objects or use a mocking tool such as rhino-mocks.
If you are writing tests that hit the database, then you have a huge maintenance headache, since, as you state, the database is changing, and it also makes it difficult to maintain an environment that has access to the database. What you are actually writing are integration tests, which are still valid, but true unit tests shouldn't have any dependencies on databases, the file system, etc.
I would mock out the database, rather than trying to interact with a test instance. This will make your tests faster (so you're more likely to run them).
Assuming you can't do what the others suggested because you're actually testing that the stored procedures do what you expect, I think what your colleagues are referring to is using an in-memory database.
When people talk about in-memory databases for testing they're usually referring to SQLite. They build up the database in memory at the start of the test and destroy it at the end. Unfortunately SQLite doesn't support Stored Procedures so that won't help you.
What I would suggest is that you write specific integration tests for the Stored Procedures and insert/remove data as you currently do. Note that it's easier if you wrap the test in a transaction that you then roll back. You could also use the database "unit testing" features in Visual Studio for testing the sprocs if you have that available.
For the rest of your code, mock your DAL as @Ben suggested and test your business logic as a normal unit test. However, given that your DAL is a static class, you're going to have to do some work to wrap the DAL and start using the wrapper class throughout your application, a little like how ASP.NET MVC deals with HttpContext.
Even with a bounty, I could not find out whether this exists. I would assume at this point that the people who told me this technology exists might have been mistaken.
Can we not ask the DBA to provide a backup of the DB, restore it on your local machine, and perform tests on it?
Backup and restore is the fastest way, I think.

.NET Unit Tests for Reading/Saving data to database

Most of what I read about unit tests is about testing your classes and their behaviour. But how do you test saving data to a database and reading data from a database? In our project, saving and reading data is done through services that are used by a Flex application (using WebORB as a gateway). For instance, a service reads all users that have access to a certain module. How do you test that the users being returned actually are the users with access to that module?
Sometimes testing the loading of data out of the database requires that there's already data in the database. In some of our tests we first need to save a lot of test data to the database before we can test reading anything...
The same thing holds for stored procedures. How do you test SPs if there's no data in the database? The reality is that to test certain stored procedures, we need data in ten tables...
Thanks, Lieven Cardoen
You can have tests for DB actions, but try to avoid it if possible; otherwise:
They will run slower than ordinary tests (more likely they are integration tests)
They require more setup/teardown work (db/schema/table data)
They introduce an external dependency on your test framework
It may also be a code smell that your classes are not separating DB-related work from other work, e.g. business logic. Then again, it may not: we have a framework test that verifies that the automatically generated SQL script returns the expected incremented identity value after inserting new data, and AFAIK there is no way to test that this code works other than to execute it against the DB. You could mock it out or just assume that if the SQL matches what you expect then it's OK, but I don't like that assumption, since so much other code relies on it.
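For illustration, here is a hedged sketch of that identity-value check using plain JDBC and JUnit; the users table and the H2 URL are invented for the example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class GeneratedKeyTest {
    @Test
    void insertReturnsIncrementedIdentity() throws Exception {
        // assumes a users table with an identity/auto-increment id column exists
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO users (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, "test user");
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                assertTrue(keys.next(), "expected a generated key");
                assertTrue(keys.getLong(1) > 0); // the DB handed back the new identity value
            }
        }
    }
}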
Depending on your test framework, you should mark these tests as [Database] related, allowing you to separate them from other tests.
This is more an integration test than a unit test.
What I do in such cases is build a non-persisting base test which loads the data needed for the tests into a test DB and then runs the unit tests. Afterwards it disposes of the current transaction, so no data is stored.
The biggest problem here is that if your customer has a failure, you cannot run such tests... Another problem is that the data in your test DB will be reset every time you run such tests.
I agree with @Gambrinus. In general, it's almost impossible to unit test a data layer; the best you can do is provide a strong data layer interface and mock against that in the business layer, then save data quality tests for your integration testing.
I've seen attempts at mocking ORM tools (this one for LINQ amuses me), but they do not test the correctness of a query, only that the query was written in the way the tester thought it should be written. Since the tester is usually the one writing the query against the ORM, this provides no value whatsoever.
Try using MbUnit. It's a .NET testing framework that allows you to fill the database in your setup and then roll back the changes you made to the database during your tests, restoring the database to its previous condition. There's a quick writeup on it here.
Tests for the code that saves to and reads from databases are called Integration Tests. You can use a data generator to generate test data prior to running integration tests. Integration tests don't have to be run as often as unit tests.
It's funny, I have the same issue on my project. Mocking is probably a good way to go, but I haven't tried it. Generally, we populate our tables with data. I write unit tests that exercise the CRUDL capabilities of a given class, so if I have a Person class, the unit tests include create, read, update, delete, and list. These methods tend to call stored procedures (in most cases), so that part gets tested as well.
There are tools out there that can dump boatloads of test data.
SQL Data Generator from Red Gate
Let us know what approach worked for you.
A: If your Flex application accesses your database directly, it is not easy to test. You should have a testable interface/layer in between.
B: Putting data into the database normally belongs in the test setup phase.
C: It should be possible to test an interface that actually triggers the stored procedure.
If the sprocs are not used by the GUI but only SQL-to-SQL, there are also systems out there that test sprocs; normally you have an sp_setup and an sp_teardown sproc run before and after the actual tests.

What's the best strategy for unit-testing database-driven applications?

I work with a lot of web applications that are driven by databases of varying complexity on the backend. Typically, there's an ORM layer separate from the business and presentation logic. This makes unit-testing the business logic fairly straightforward; things can be implemented in discrete modules and any data needed for the test can be faked through object mocking.
But testing the ORM and database itself has always been fraught with problems and compromises.
Over the years, I have tried a few strategies, none of which completely satisfied me.
Load a test database with known data. Run tests against the ORM and confirm that the right data comes back. The disadvantage here is that your test DB has to keep up with any schema changes in the application database, and might get out of sync. It also relies on artificial data, and may not expose bugs that occur due to stupid user input. Finally, if the test database is small, it won't reveal inefficiencies like a missing index. (OK, that last one isn't really what unit testing should be used for, but it doesn't hurt.)
Load a copy of the production database and test against that. The problem here is that you may have no idea what's in the production DB at any given time; your tests may need to be rewritten if data changes over time.
Some people have pointed out that both of these strategies rely on specific data, and a unit test should test only functionality. To that end, I've seen suggested:
Use a mock database server, and check only that the ORM is sending the correct queries in response to a given method call.
What strategies have you used for testing database-driven applications, if any? What has worked the best for you?
I've actually used your first approach with quite some success, but in slightly different ways that I think would solve some of your problems:
Keep the entire schema and scripts for creating it in source control so that anyone can create the current database schema after a check out. In addition, keep sample data in data files that get loaded by part of the build process. As you discover data that causes errors, add it to your sample data to check that errors don't re-emerge.
Use a continuous integration server to build the database schema, load the sample data, and run tests. This is how we keep our test database in sync (rebuilding it at every test run). Though this requires that the CI server have access to and ownership of its own dedicated database instance, having our DB schema built 3 times a day has dramatically helped find errors that probably would not have been found till just before delivery (if not later). I can't say that I rebuild the schema before every commit. Does anybody? With this approach you won't have to (well, maybe we should, but it's not a big deal if someone forgets).
For my group, user input is done at the application level (not db) so this is tested via standard unit tests.
Loading Production Database Copy:
This was the approach used at my last job. It was a huge pain because of a couple of issues:
The copy would get out of date from the production version
Changes would be made to the copy's schema and wouldn't get propagated to the production systems. At this point we'd have diverging schemas. Not fun.
Mocking Database Server:
We also do this at my current job. After every commit we execute unit tests against the application code that have mock db accessors injected. Then three times a day we execute the full db build described above. I definitely recommend both approaches.
I'm always running tests against an in-memory DB (HSQLDB or Derby) for these reasons:
It makes you think which data to keep in your test DB and why. Just hauling your production DB into a test system translates to "I have no idea what I'm doing or why and if something breaks, it wasn't me!!" ;)
It makes sure the database can be recreated with little effort in a new place (for example when we need to replicate a bug from production)
It helps enormously with the quality of the DDL files.
The in-memory DB is loaded with fresh data once the tests start and after most tests, I invoke ROLLBACK to keep it stable. ALWAYS keep the data in the test DB stable! If the data changes all the time, you can't test.
The data is loaded from SQL, a template DB or a dump/backup. I prefer dumps if they are in a readable format because I can put them in VCS. If that doesn't work, I use a CSV file or XML. If I have to load enormous amounts of data ... I don't. You never have to load enormous amounts of data :) Not for unit tests. Performance tests are another issue and different rules apply.
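As a minimal sketch of this in-memory approach (HSQLDB shown; Derby and H2 behave similarly), with a table, one row of non-ASCII data, and a ROLLBACK to keep the DB stable; all names and data are invented:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

class InMemoryDbSketch {
    public static void main(String[] args) throws Exception {
        // The database lives only in this JVM; nothing to install or clean up.
        try (Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "")) {
            try (Statement s = c.createStatement()) {
                s.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(100))");
            }
            c.setAutoCommit(false);
            try (Statement s = c.createStatement()) {
                s.execute("INSERT INTO users VALUES (1, 'Åsa Sjöström')"); // non-ASCII data
            }
            c.rollback(); // DDL auto-commits, so the table stays; the test data is gone
        }
    }
}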
I have been asking this question for a long time, but I think there is no silver bullet for that.
What I currently do is mock the DAO objects and keep an in-memory representation of a good collection of objects that represent interesting cases of data that could live in the database.
The main problem I see with that approach is that you're covering only the code that interacts with your DAO layer, never testing the DAO itself, and in my experience a lot of errors happen in that layer as well. I also keep a few unit tests that run against the database (for the sake of using TDD or quick testing locally), but those tests never run on my continuous integration server, since we don't keep a database for that purpose and I think tests that run on a CI server should be self-contained.
Another approach I find very interesting, but that is not always worthwhile since it is a little time-consuming, is to create the same schema you use for production on an embedded database that runs within the unit tests.
Even though there's no question this approach improves your coverage, there are a few drawbacks, since you have to stay as close as possible to ANSI SQL to make it work both with your current DBMS and the embedded replacement.
No matter what you think is more relevant for your code, there are a few projects out there that may make it easier, like DbUnit.
Even if there are tools that allow you to mock your database in one way or another (e.g. jOOQ's MockConnection, which can be seen in this answer; disclaimer: I work for jOOQ's vendor), I would advise not to mock larger databases with complex queries.
Even if you just want to integration-test your ORM, beware that an ORM issues a very complex series of queries to your database, which may vary in:
syntax
complexity
order (!)
Mocking all that to produce sensible dummy data is quite hard, unless you're actually building a little database inside your mock, which interprets the transmitted SQL statements. Having said that, use a well-known integration-test database that you can easily reset with well-known data, against which you can run your integration tests.
I use the first (running the code against a test database). The only substantive issue I see you raising with this approach is the possibility of schemas getting out of sync, which I deal with by keeping a version number in my database and making all schema changes via a script which applies the changes for each version increment.
I also make all changes (including to the database schema) against my test environment first, so it ends up being the other way around: After all tests pass, apply the schema updates to the production host. I also keep a separate pair of testing vs. application databases on my development system so that I can verify there that the db upgrade works properly before touching the real production box(es).
I'm using the first approach, but with a few differences that I think address the problems you mentioned:
Everything that is needed to run tests for DAOs is in source control. It includes the schema and scripts to create the DB (Docker is very good for this). If an embedded DB can be used, I use it for speed.
The important difference from the other described approaches is that the data required for a test is not loaded from SQL scripts or XML files. Everything (except some dictionary data that is effectively constant) is created by the application using utility functions/classes.
The main purpose is to make the data used by a test:
very close to the test
explicit (using SQL files for data makes it very hard to see what piece of data is used by what test)
isolated from unrelated changes.
It basically means that these utilities allow you to declaratively specify, in the test itself, only the things essential for the test, and omit irrelevant things.
To give some idea of what this means in practice, consider the test for a DAO that works with Comments to Posts written by Authors. In order to test CRUD operations for such a DAO, some data should be created in the DB. The test would look like:
@Test
public void savedCommentCanBeRead() {
    // The builder declaratively specifies the entity with only the attributes
    // relevant for this specific test; missing attributes are generated with
    // reasonable values.
    // The factory's responsibility is to create the entity (and all entities it
    // requires, in our example the Author) in the DB.
    Post post = factory.create(PostBuilder.post());

    Comment comment = CommentBuilder.comment().forPost(post).build();
    sut.save(comment);

    Comment savedComment = sut.get(comment.getId());

    // this checks the fields that are directly stored
    assertThat(savedComment, fieldwiseEqualTo(comment));
    // fields that are generated during save are checked separately
    assertThat(savedComment.getGeneratedField(), equalTo(expectedValue));
}
This has several advantages over SQL scripts or XML files with test data:
Maintaining the code is much easier (adding a mandatory column, for example, to some entity that is referenced in many tests, like Author, does not require changing lots of files/records, only a change in the builder and/or factory)
The data required by specific test is described in the test itself and not in some other file. This proximity is very important for test comprehensibility.
Rollback vs Commit
I find it more convenient for tests to commit when they are executed. Firstly, some effects (for example DEFERRED CONSTRAINTS) cannot be checked if a commit never happens. Secondly, when a test fails, the data can be examined in the DB, as it is not reverted by a rollback.
Of course this has the downside that a test may produce broken data, leading to failures in other tests. To deal with this, I try to isolate the tests. In the example above, every test may create a new Author, and all other entities are created related to it, so collisions are rare. To deal with the remaining invariants that can potentially be broken but cannot be expressed as DB-level constraints, I use some programmatic checks for erroneous conditions that may be run after every single test (they are run in CI but usually switched off locally for performance reasons).
For a JDBC-based project (directly or indirectly, e.g. JPA, EJB, ...), you can mock up not the entire database (in such a case it would be better to use a test DB on a real RDBMS), but only the JDBC level.
The advantage is the abstraction that comes with this approach, as JDBC data (result sets, update counts, warnings, ...) are the same whatever the backend is: your prod DB, a test DB, or just some mock data provided for each test case.
With the JDBC connection mocked up for each case, there is no need to manage a test DB (cleanup, one test at a time, reloading fixtures, ...). Every mock connection is isolated, and there is no need to clean up. Only the minimal required fixtures are provided in each test case to mock up the JDBC exchange, which helps avoid the complexity of managing a whole test DB.
Acolyte is my framework for this kind of mockup; it includes a JDBC driver and utilities: http://acolyte.eu.org

In which cases do you test against an In-Memory Database instead of a Development Database?

When do you test against an In-Memory Database vs. a Development Database?
Also, as a related side question, when you do use a Development Database, do you use an Individual Development Database, an Integration Development Database, or both?
Also, for unit testing, when do you use an in-memory database over mocking out your Repository/DAL, etc.?
In-memory is an excellent choice for unit tests when the data is easy to seed for the given test cases and a very particular operation is being tested. A real database is better for integration tests, where the data prerequisites are more complex and there is value in having the base data remain after the tests complete.
For us, the only things we allow in our 'fast' test suite of JUnit tests are those that do not have any external dependencies (database, file, network, etc.), so that the suite can be run quickly and efficiently by both developers and continuous integration on check-in. If there is a certain test that absolutely needs to go to the DB, then an in-memory one is the only way to go.
A couple points to keep in mind:
Think carefully if you need to use a database at all in a unit test. It may be indicative of a poor design in that the data access layer is coupled too tightly to the business logic you are trying to test and cannot be mocked out.
If using a real database for integration testing, ensure that the tests always restore the data to a pristine state when finished. I've seen a lot of wasted time and failed integration tests because some other test messed up the data.
As for your other question, it really depends on your need. A good rule of thumb is one development database per code branch, since schema changes may be needed that are not relevant to another branch of code. Just having a dedicated development database is important; I'm surprised at how many development teams have to share a database with the QA team, etc. It is important to be able to make changes in a sandboxed environment that does not affect other teams or prevent others from doing their work, so if you've met those requirements you're doing well.
For my team, it's in-memory on the developer machine, and the real database on the continuous integration server.
