SpecFlow Integration Testing with Database Patterns

I'm attempting to set up SpecFlow for integration/acceptance testing. Our product has a backing database (not a huge one though) in SQLite.
This is actually proving to be a slightly sticky point though; how do I model the database for the tests?
I would like to know what patterns others out there use for doing integration/acceptance testing with backing databases.
I can think of the following approaches:
Compile a database into the assembly with the tests, then shadow-copy it for each test. Seems slow though.
I could create the database in memory and populate it with pre-determined data.
I could create the database in memory and somehow have Givens populate the database. This seems like it would bloat the tests horribly, but might give them more control and make the tests less fragile (roughly sketched after this list).
I could abstract every database interaction and use mocks. Not in love with this idea since I'd like to use this to test the database interactions as well.
Compile the database into the tests and rely on clean-up code to return it to the base state (this one seems dodgy to me). I don't want to do it with transactions since some tests involve multiple interactions (e.g. write an item, then attempt to read it back with different privileges).
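To give an idea of what I have in mind for the third option, here is a minimal sketch. It assumes Microsoft.Data.Sqlite and a made-up Items table and step wording; note that an in-memory SQLite database lives only as long as its connection stays open, so the connection is parked in the scenario context.

using Microsoft.Data.Sqlite;
using TechTalk.SpecFlow;

[Binding]
public class DatabaseSteps
{
    private readonly ScenarioContext _scenarioContext;

    public DatabaseSteps(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [Given(@"an empty item store")]
    public void GivenAnEmptyItemStore()
    {
        // The in-memory database exists only while this connection is open,
        // so keep the connection in the scenario context for later steps
        // (and close it in an [AfterScenario] hook).
        var connection = new SqliteConnection("Data Source=:memory:");
        connection.Open();
        using (var create = connection.CreateCommand())
        {
            create.CommandText = "CREATE TABLE Items (Id INTEGER PRIMARY KEY, Name TEXT, Owner TEXT)";
            create.ExecuteNonQuery();
        }
        _scenarioContext["db"] = connection;
    }

    [Given(@"the following items exist")]
    public void GivenTheFollowingItemsExist(Table table)
    {
        // The Given's table keeps the test data visible in the scenario itself.
        var connection = (SqliteConnection)_scenarioContext["db"];
        foreach (var row in table.Rows)
        {
            using (var insert = connection.CreateCommand())
            {
                insert.CommandText = "INSERT INTO Items (Name, Owner) VALUES ($name, $owner)";
                insert.Parameters.AddWithValue("$name", row["Name"]);
                insert.Parameters.AddWithValue("$owner", row["Owner"]);
                insert.ExecuteNonQuery();
            }
        }
    }
}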

Before considering the How to test, I think you might find it valuable to look at What you want to test.
Starting with what data to use, I find that it really helps to take a single element, or a small number of them, and imagine a set of events around them in order to give you the right test data to run your tests with. For example:
If you were working on a healthcare system, you might define a person "Bob" and then produce his life events. Bob was born 37 years ago today, fell off his bike as a child and broke his arm, got married, and has two children.
If you are working on a financial trading system, you might look at a day between opening and closing for a couple of stocks, e.g. "MSFT" and "APPL". On this day you might see one starting low and climbing, the other starting high and falling. A piece of news comes out that reverses their fortunes.
Now that you have the what, you can evaluate which of your scenarios actually work for your data. For example, "MSFT" and "APPL" could have thousands of price changes throughout the day, so generating the Givens and mocks would be very time consuming; this data lends itself to being pre-captured. On the other hand, the "Bob" data works particularly well with generated data, because the data can always change so that it is his birthday today.
One thing your question doesn't seem to consider is updating your data. For example, you might want a set of tests that work at various stages of your entities' life cycle, e.g. some tests deal with "Baby Bob", others with "10yr old Bob" or "Married Bob", etc. If your DB is read-only then this isn't a problem, as long as you can write your tests so that they just don't see the other data, but sometimes you want to build a story through your tests. If your tests do change the data, then you will have problems ensuring either that your tests run in order (see MSTest OrderedTest or mbUnit DependsOn), or that you can separate your tests so they each deal with an isolated data entity (this is fine if your entity can be described in a single row, but gets harder when you have to read many tables to get it).
You also might want to consider what code you are testing; you can vary the approach across your different test sets. I currently work on a multi-tier application that has UI Views, View Models, Client Models, multiple communication systems, and server models, and I have different sets of tests for these. Some tests work in a single tier, mocking out the other tiers to keep the tests small. Other tests fire up a local server and a local client and wire the two up directly. Finally, I have some tests that launch a full server process, communicate via EMS and run some simple client-side operations using everything but the UI Views.
So now to actually answer your question,
Shadow copy your database - Yes, I've done this once with SQL Server Developer Edition and had an xxx.mdf that got copied in before running the tests. However, some modern testing frameworks will run tests in parallel (e.g. NCrunch), so this just breaks.
Create the database and pre-populate - Not done this one, but my concern would be what happens when a test changes the database to an unexpected state: other tests will then fail even though they have done nothing wrong.
Create the database and use Givens - I've done this with NUnit via [SetUpFixture] on top of a LINQ to SQL DB. You still have concerns about parallel test runs, you have to balance the granularity of your Givens (see the StackOverflow question "When do BDD scenarios become too specific"), and you have the data update ordering/data isolation problem, but this can work really well to allow you to work through your data stories and grow the data throughout your tests. On the other hand, should one test fail and leave the data in a bad state, you can end up with lots of failures, but at least you simply need to look at the one that fails first. This kind of testing will also not play very nicely for developers on their workstations, as they can't just run a single test, particularly with tools such as NCrunch, which can run only the tests whose code has changed.
Mock the database - This is how I choose to do things now. The trick is that if you are personally following a reasonably strict TDD process where you only test the method you are working on, then you actually end up with some tiers that test the database interaction, e.g. [Test] DALLayerTests.ShouldReadARowAndCreatePOCO(), but most others that use mocked data to test what actually happens, e.g. [Test] BusinessObjectPersonTests.ShouldGetBirthdayCongratulations() (roughly sketched after this list).
Use clean up code - Never tried it, it sounds dodgy :-)
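To make that "mostly mocked, a few real DAL tests" split concrete, a ShouldGetBirthdayCongratulations-style test might look roughly like the sketch below (NUnit + Moq; the Person, IPersonRepository and PersonService types are made up for the sketch), while the handful of DALLayerTests still talk to a real or in-memory database.

using System;
using Moq;
using NUnit.Framework;

// Hypothetical types used only for this sketch.
public class Person { public int Id; public DateTime DateOfBirth; }

public interface IPersonRepository { Person GetById(int id); }

public class PersonService
{
    private readonly IPersonRepository _repository;
    public PersonService(IPersonRepository repository) { _repository = repository; }

    public string GetGreeting(int id)
    {
        var person = _repository.GetById(id);
        var today = DateTime.Today;
        return (person.DateOfBirth.Month == today.Month && person.DateOfBirth.Day == today.Day)
            ? "Happy birthday!"
            : "Hello";
    }
}

[TestFixture]
public class BusinessObjectPersonTests
{
    [Test]
    public void ShouldGetBirthdayCongratulations()
    {
        // The repository is mocked, so only the business behaviour is under test here.
        var repository = new Mock<IPersonRepository>();
        repository.Setup(r => r.GetById(1))
                  .Returns(new Person { Id = 1, DateOfBirth = DateTime.Today.AddYears(-37) });

        var service = new PersonService(repository.Object);

        Assert.That(service.GetGreeting(1), Does.Contain("Happy birthday"));
    }
}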

Related

Using specflow for WPF in MVVM

I am trying SpecFlow for the first time for a WPF project with the MVVM pattern. So I have a repository, controller, view and view model. The repository makes calls to a web service which hits the database and gives back the data. The view model has methods to validate the input from the user and make calls to the repository methods.
In my SpecFlow tests, should I be making a complete call, including the repository methods, or should I mock those methods using Moq? Does that make sense?
The short answer:
You could make your calls hit the database, but I would recommend mocking them out unless you have logic in your DB.
The long answer:
There are two competing ways of looking at this question: firstly, what kind of testing are you expecting to use SpecFlow for, and secondly, how easy is it going to be to access your database?
If you are using SpecFlow to test your low level technical requirements, then you really are doing unit testing in a Specification by example style. So this really ought to dictate that you use mocking to isolate your units. (Personally I'd stick to using NUnit for these tests.)
If however you are using SpecFlow to test your business scenarios (i.e. acceptance testing), then your scenarios rely on more than one unit to provide the functionality. This is more like integration or system testing, so many components will need to be involved, and in theory, since we are testing the whole system, we should include the DB in that, particularly if you have even one stored procedure or view that you might want to regression test later.
However, the second way of looking at this is how much pain is required to have a database available for your testing.
If there is more than one person in your development team, what strategies would you need to avoid failing if you run your tests at the same time?
For example you could generate a new unique customer in each run and only access updates and queries against that customer, then you won't end up with the wrong number of orders.
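A sketch of that idea (the Customer type and naming convention are just placeholders):

using System;

// Placeholder type for the sketch.
public class Customer { public string Name; }

public static class TestData
{
    // Each scenario/run gets its own uniquely named customer, so tests that count
    // that customer's orders cannot be skewed by parallel or earlier runs.
    public static Customer NewUniqueCustomer()
    {
        return new Customer { Name = "Test-" + Guid.NewGuid().ToString("N") };
    }
}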
How will you reset your database after your tests run?
E.g. If you use the right kind of database you can check an empty .mdb into your source control and just rollback/revert to that.
I've personally found that for a simple DB without stored procs and views, mocking is better, but as soon as you've added business logic into the DB, well, you need to test that logic somehow.
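If you do go the mocking route, a rough sketch of a step definition that drives the view model with a Moq-mocked repository might look like this (IOrderRepository and OrdersViewModel are stand-ins for your own types):

using Moq;
using NUnit.Framework;
using TechTalk.SpecFlow;

// Hypothetical repository and view model, standing in for your own types.
public interface IOrderRepository { int CountOrders(string customerName); }

public class OrdersViewModel
{
    private readonly IOrderRepository _repository;
    public OrdersViewModel(IOrderRepository repository) { _repository = repository; }
    public int OrderCount { get; private set; }
    public void Load(string customerName) { OrderCount = _repository.CountOrders(customerName); }
}

[Binding]
public class OrdersSteps
{
    private readonly Mock<IOrderRepository> _repository = new Mock<IOrderRepository>();
    private OrdersViewModel _viewModel;

    [Given(@"the repository has (\d+) orders for ""(.*)""")]
    public void GivenTheRepositoryHasOrdersFor(int count, string customer)
    {
        // The web service/database is never hit; the mock supplies the data.
        _repository.Setup(r => r.CountOrders(customer)).Returns(count);
    }

    [When(@"I load the orders for ""(.*)""")]
    public void WhenILoadTheOrdersFor(string customer)
    {
        _viewModel = new OrdersViewModel(_repository.Object);
        _viewModel.Load(customer);
    }

    [Then(@"the view model shows (\d+) orders")]
    public void ThenTheViewModelShowsOrders(int expected)
    {
        Assert.That(_viewModel.OrderCount, Is.EqualTo(expected));
    }
}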

How to organize live data integrity tests and code unit tests?

I have several files with code testing code (which uses a "unittest" class).
Later I found it would be nice to test database integrity also. I put this into a separate directory tree. (Things like: the keys have the correct format, parent and child nodes are pointing correctly, and such. Edit: this is a NoSQL project, where I cannot rely on database-level checks like referential integrity and such.)
I use the same unittest class for the integrity tests.
Now I wonder if it really makes sense to keep this separate. To test the integrity of the data I often duplicate parts of the code that I use to test the code that handles the data.
But it is not the same. The code tests use test databases (that get deleted after each test) and the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron and send an alarm if something happens in the live database.
How would you handle that? Are there standards for such a setup? What is your experience?
My tendency is to put everything in the same file, which would result in the code tests also being executed by the cron on the production environment.
Edit: What also drives me is trying to keep the project simple and not to have too many files touched by a single task or workflow. Without all the testing I already have a class file, a subclass, a related class, some library (helper) files and the main code. Testing adds one file. It helps me keep my attention focused while coding, it is less stressful and I believe I make fewer errors, and I can remember and find a certain code part faster when fewer files are affected. Only one testing file per workflow would help here. If I keep it separate there are 2 files (data integrity testing and code testing) and maybe 3 (a common library for both). Abstraction would add complexity.
Edit2: I am now refactoring a little bit and only moving the data testing files to the same directory tree where the code testing also lives, but keeping different files with names indicating "integrity" or "testing". I will not (yet) merge the files, because 2 people recommended against it, and I believe in their experience and advice for now. I will live with code duplication for the moment.
Edit3: I forgot to mention that the selection of the tests per run is not determined by the tree structure in this case. The tests are enumerated in a master file, so I have 2 master files, "integrity" and "code testing", at present, and the tests can live in the same directory structure.
Maybe more people will answer. Thank you so far for the valuable input, which is already helping me develop the final structure!
Edit4: I did more refactoring now. It seems I should keep 2 files, but with slightly modified purposes: one targeted at scheduled monitoring on the production server, and another one for development. But both files can contain integrity tests or code tests. And in both files, operations can be performed on test databases (that are erased after the test) and on the permanent database (each server, production and development, has a permanent database). And one important thing: I find myself moving lots of common code from the testing files to the class files, so the classes also gain abilities that are for testing only. I like this so far, it feels clean. I am not (yet) creating a library for testing that is shared between the 2 testing frontends; for now this code has gone to the class file of the object that is being tested.
You should separate the database related tests from the "pure" unit tests.
The cost of having two different assemblies is very low considering the benefits - you have one suite of fast tests with no environment setup required that you can run on any machine, and a slower suite that tests the database integrity and can run only in specific places (e.g. the build server).
Another benefit is that you can have two build processes (quick and nightly) that run different test suites.
To avoid duplicating code you can create another assembly with the common methods/actions that both test suites need. Don't worry too much about duplicating the actual tests, because you're testing different things (either logic or database), so sooner or later your tests will become quite different depending on what you're trying to test.
Your approach of separating them is good.
Your concern about code duplication is 100% valid.
The solution is fairly straightforward - abstract away common functionality between the tests - e.g. "RunTest", "AnalyzeResult", "ConnectToDB" - into a common library (you did not specify which language, but I assume it has a concept of a library) which can be passed configuration details such as which database to connect to.
Then use that library independently from the unit test driver and integrity test driver - which, if you are skilled/lucky enough, might have very little code of its own other than configuration (e.g. which database to connect to, how to report results, and which tests to run).
Similarly, if needed, common inputs/datasets can be placed in a common directory.
One more answer. You have two types of tests, and what I would like to do is address the integrity tests. What you may want to do is include the integrity tests as a function of the production code; in other words, make the integrity checks part of the system.
You already mentioned that duplication is an issue and that you are refactoring to remove the duplication. The refactored code of course has development tests?
System Monitoring can be production code. So what ever code you write becomes part of the system.
The nice thing about this is that you evolve your code through your development tests.

Unit testing against large databases

I would like to ask about your suggestions concerning unit testing against large databases.
I want to write unit tests for an application which is mostly implemented in T-SQL so mocking the database is not an option.
The database is quite large (approx. 10GB) so restoring the database after a test run is also practically impossible.
The application's purpose is to manage the handling of applications for credit agreements. There are users in specific roles that change the state of agreement objects and my job is to test part of this process.
I'm considering two approaches:
First Approach
Create agreements that meet specific conditions and then test changes of agreement state (e.g. the transition from 'waiting in some office' to 'handled in this specific office'). The agreements will be created in the application itself and they will be my test cases. All the tests will run in transactions that are rolled back after performing the tests.
Advantages
The advantage of this approach is that the tests are quite straightforward. The expected data can be easily described because I know exactly what the object should look like after the transition.
Disadvantages
The disadvantage is that the database must not change in a way that would break the tests. The users and agreements used in the test cases must always look the same, and if the database ever needs to change, the preparation process will have to be repeated.
Second Approach
Create agreements in the unit tests. Programmatically create agreements that meet specific conditions. The data used for creating an agreement will be chosen randomly, and the users that change the agreement state will also be created randomly.
Advantages
The advantage of this approach is ease of making changes to objects and ability to run tests on databases with different data.
Disadvantages
Both objects (agreement and user) have lots of fields and related data, and I'm afraid it would take some time to implement the creation of these objects (I'm also afraid that the creation code may itself contain errors, because it will be quite hard to implement correctly).
What do you think about these two approaches?
Do any Stack Overflow readers think it is worth the effort to create objects as described in the second approach?
Does anyone here have any experience creating such tests?
I'm not sure I entirely agree with your assumption that you cannot restore the database after a test run. While I definitely agree that some tests should be run on a full-size, multi-TB database, I don't see why you can't run most of your tests on a much, much smaller test database. Are there constraints that need to be tested like "Cannot be more than a billion identical rows?"
My recommendation would actually be to use a smaller test database for most of your functional specs, and to create-drop all of its tables with each test, with as little sample data as is necessary to test your functionality.
For creating fixture data for tests, you have a few choices:
(a) Create a script that creates an empty database, and then adds a small number of records as the fixture data. This data can be hand-constructed, or a few records from the real database. This is the Rails approach, and pretty common in the Java world.
(b) It's also common to use a "factory" to create this data (some sort of application code). There is an initial investment in building these classes, but once they are built they can be re-used for all your tests. This is now very popular in Ruby/Rails code. (This is your Second Approach above; a rough sketch follows below.)
(c) Of course you can use a copy of the "production" data, and try to test against that. But this is probably the hardest approach, as you will always be competing against the real world changing the data. And it also tends to be orders of magnitude slower than a small set of fixture data.
There's definitely a cost of getting from state (c) to state (a) or (b)-- but it is an investment in the future. It won't take that long-- even if it takes a whole day, the speed-up in the test running will make up for it quickly.
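A factory for option (b) doesn't have to be elaborate. Here is a minimal builder-style sketch; the Agreement type and its fields are placeholders for your real schema:

using System;

// A sketch of option (b): a builder with sensible defaults, so each test only
// states what matters for that test. All names here are hypothetical.
public class Agreement
{
    public string CustomerName;
    public string State;
    public DateTime CreatedOn;
}

public class AgreementBuilder
{
    private string _customerName = "Test-" + Guid.NewGuid().ToString("N");
    private string _state = "WaitingInFrontOffice";

    public AgreementBuilder ForCustomer(string name) { _customerName = name; return this; }
    public AgreementBuilder InState(string state) { _state = state; return this; }

    public Agreement Build()
    {
        // Every field the test does not mention gets a valid default value,
        // which keeps tests short and resilient to new mandatory columns.
        return new Agreement
        {
            CustomerName = _customerName,
            State = _state,
            CreatedOn = DateTime.UtcNow
        };
    }
}

// Usage inside a test (the data-access call is hypothetical):
// var agreement = new AgreementBuilder().InState("WaitingInFrontOffice").Build();
// dal.Insert(agreement);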
There's a somewhat independent issue: after you get your data into the database and run your tests, you need to restore it. There are a few common approaches:
(1) rollback the transaction. This is a great way to go-- if practical. Sometimes, though, you actually need to confirm that transactions completed, so this doesn't work.
(2) just re-load a new set of fixture data. If your fixture data is small this is workable. A little slower than (1).
(3) manually undo what the tests have done. This is the most error prone and difficult approach, but possible.
Recommendation?
It sounds like your application is complicated. I'd recommend hand-crafting a small set of data for your tests (a). Keep it separate from your main database so that it's easier to keep track of and reload. Try to rollback transactions, but if that doesn't work for you, you can reload from a script before each test (remember-- the data is small).
The other piece of the puzzle is database migrations, if you don't already have that nailed. These are scripts that you use to evolve your database. If you have these organized and automated, you can apply them to your test/fixture data as well as your production data.
How about testing everything in a transaction and then rolling it back? E.g.:
BeginTransaction
DoThings
VerifyResult
RollbackTransaction
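In .NET that pattern maps directly onto TransactionScope, which rolls everything back automatically if Complete() is never called. A minimal sketch (the connection string, stored procedure and table names are placeholders):

using System.Data.SqlClient;
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class AgreementTransitionTests
{
    // Placeholder connection string for the sketch.
    private const string ConnectionString =
        "Server=.;Database=CreditTest;Integrated Security=true";

    [Test]
    public void HandlingAnAgreementMovesItToTheHandledState()
    {
        using (new TransactionScope())                  // BeginTransaction
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();                          // auto-enlists in the ambient transaction

            // DoThings: exercise the stored procedure / code under test.
            using (var handle = new SqlCommand("EXEC dbo.HandleAgreement @AgreementId = 42", connection))
            {
                handle.ExecuteNonQuery();
            }

            // VerifyResult: read the state back inside the same transaction.
            using (var check = new SqlCommand("SELECT State FROM dbo.Agreements WHERE Id = 42", connection))
            {
                Assert.That(check.ExecuteScalar(), Is.EqualTo("Handled"));
            }
        }   // RollbackTransaction: Complete() was never called, so disposing the scope rolls everything back.
    }
}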

How should methods updating database tables be unit tested?

I have an application that is database intensive. Most of the application's methods update data in a database. Some calls are wrappers to stored procedures while others perform database updates in code using 3rd party APIs.
What should I be testing in my unit tests? Should I...
Test that each method completes without throwing an exception -or-
Validate the data in the database after each test to make sure the state of data is as expected
My initial thought is #2 but my concern is that I would be writing a bunch of framework code to go along with my unit tests. I read that you shouldn't write a bunch of framework code for unit testing.
Thoughts?
EDIT: What I mean by framework is writing a ton of other code that serves as a library to the unit testing code...not a third party framework.
I do number 2, i.e., test the update by updating a record, and then reading it back out and verifying that the values are the same as the ones you put in. Do both the update and the read in a transaction, and then roll it back, to avoid permanent effect on the database. I don't think of this as testing Framework code, any more than I think of it as testing OS code or networking code... The framework (if you mean a non-application specific Database access layer component) should be tested and validated independently.
There's a third option, which is to use a mock database-access object that knows how to respond to an update as if it had been connected to a live database, but it doesn't really execute the query against a database.
This technique can be used to supplement testing against a live database. This is not the same as testing against a live database, and shouldn't substitute for that kind of testing. But it can be used at least to test that the invocation of the database update by your class was done with proper inputs. It also typically runs a lot faster than running tests against a real database.
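For example, that "proper inputs" check might look like this sketch with Moq (IAccountGateway and AccountService are made-up types):

using Moq;
using NUnit.Framework;

// Hypothetical gateway and service types for illustration.
public interface IAccountGateway { void UpdateBalance(int accountId, decimal newBalance); }

public class AccountService
{
    private readonly IAccountGateway _gateway;
    public AccountService(IAccountGateway gateway) { _gateway = gateway; }
    public void Deposit(int accountId, decimal currentBalance, decimal amount)
        => _gateway.UpdateBalance(accountId, currentBalance + amount);
}

[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void DepositSendsTheUpdatedBalanceToTheDatabaseLayer()
    {
        var gateway = new Mock<IAccountGateway>();
        var service = new AccountService(gateway.Object);

        service.Deposit(accountId: 7, currentBalance: 100m, amount: 25m);

        // No database involved: we only check that the update was requested with the right inputs.
        gateway.Verify(g => g.UpdateBalance(7, 125m), Times.Once());
    }
}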
You must test the actual effect of the code on the data, and its compliance with the validation rules etc., not just that no exceptions are raised - that would be a bit like just checking a procedure compiles!
It is difficult testing database code that performs inserts, updates or deletes (DML), because the test changes the environment it runs in, i.e. the database. Running the same procedure several times in a row could (and probably should) have different results each time. This is very different to unit testing "pure code", which you can run thousands of times and always get the same result - i.e. "pure code" is deterministic, database code that performs DML is not.
For this reason, you do often need to build a "framework" to support database unit tests - i.e. scripts to set up some test data in the right state, and to clean up after the test has been run.
If you are not writing to the database manually but using a framework instead (the JVM, the .NET Framework, ...), you can safely assume that the framework writes to the database correctly. What you must test is whether you are using the framework correctly.
Just mock the database methods and objects. Test whether you are calling them and retrieving data back correctly. Doing this will give you the opportunity to write your tests more easily, run them much faster and run them in parallel with no problems at all.
They shouldn't be unit tested at all! The whole point of those methods is to integrate with the outside world (i.e. the database). So, make sure your integration tests beat the you-know-what out of those methods and just forget about the unit tests.
They should be so simple that they are "obviously bug-free", anyway – and if they aren't, you should break them up in one part which has the logic and a dumb part which just takes a value and sticks it in the database.
Remember: the goal is 100% test coverage, not 100% unit test coverage; that includes all of your tests: unit, integration, functional, system, acceptance and manual.
If the update logic is complex then you should do #2.
In practice, the only way to really unit test a complex calculation and update - like, say, calculating the banking charges on a set of customer transactions - is to initialise a set of tables to known values at the start of your unit test and test for the expected values at the end.
I use DBUnit to load the database with data, execute the update-logic and finally read the updated data from the database and verify it. Basically #2.

What's the best strategy for unit-testing database-driven applications?

I work with a lot of web applications that are driven by databases of varying complexity on the backend. Typically, there's an ORM layer separate from the business and presentation logic. This makes unit-testing the business logic fairly straightforward; things can be implemented in discrete modules and any data needed for the test can be faked through object mocking.
But testing the ORM and database itself has always been fraught with problems and compromises.
Over the years, I have tried a few strategies, none of which completely satisfied me.
Load a test database with known data. Run tests against the ORM and confirm that the right data comes back. The disadvantage here is that your test DB has to keep up with any schema changes in the application database, and might get out of sync. It also relies on artificial data, and may not expose bugs that occur due to stupid user input. Finally, if the test database is small, it won't reveal inefficiencies like a missing index. (OK, that last one isn't really what unit testing should be used for, but it doesn't hurt.)
Load a copy of the production database and test against that. The problem here is that you may have no idea what's in the production DB at any given time; your tests may need to be rewritten if data changes over time.
Some people have pointed out that both of these strategies rely on specific data, and a unit test should test only functionality. To that end, I've seen suggested:
Use a mock database server, and check only that the ORM is sending the correct queries in response to a given method call.
What strategies have you used for testing database-driven applications, if any? What has worked the best for you?
I've actually used your first approach with quite some success, but in a slightly different way that I think would solve some of your problems:
Keep the entire schema and scripts for creating it in source control so that anyone can create the current database schema after a check out. In addition, keep sample data in data files that get loaded by part of the build process. As you discover data that causes errors, add it to your sample data to check that errors don't re-emerge.
Use a continuous integration server to build the database schema, load the sample data, and run tests. This is how we keep our test database in sync (rebuilding it at every test run). Though this requires that the CI server have access to and ownership of its own dedicated database instance, I'd say that having our db schema built 3 times a day has dramatically helped find errors that probably would not have been found till just before delivery (if not later). I can't say that I rebuild the schema before every commit. Does anybody? With this approach you won't have to (well, maybe we should, but it's not a big deal if someone forgets).
For my group, user input is done at the application level (not db) so this is tested via standard unit tests.
Loading Production Database Copy:
This was the approach that was used at my last job. It was a huge pain because of a couple of issues:
The copy would get out of date from the production version
Changes would be made to the copy's schema and wouldn't get propagated to the production systems. At this point we'd have diverging schemas. Not fun.
Mocking Database Server:
We also do this at my current job. After every commit we execute unit tests against the application code that have mock db accessors injected. Then three times a day we execute the full db build described above. I definitely recommend both approaches.
I'm always running tests against an in-memory DB (HSQLDB or Derby) for these reasons:
It makes you think which data to keep in your test DB and why. Just hauling your production DB into a test system translates to "I have no idea what I'm doing or why and if something breaks, it wasn't me!!" ;)
It makes sure the database can be recreated with little effort in a new place (for example when we need to replicate a bug from production)
It helps enormously with the quality of the DDL files.
The in-memory DB is loaded with fresh data once the tests start and after most tests, I invoke ROLLBACK to keep it stable. ALWAYS keep the data in the test DB stable! If the data changes all the time, you can't test.
The data is loaded from SQL, a template DB or a dump/backup. I prefer dumps if they are in a readable format because I can put them in VCS. If that doesn't work, I use a CSV file or XML. If I have to load enormous amounts of data ... I don't. You never have to load enormous amounts of data :) Not for unit tests. Performance tests are another issue and different rules apply.
I have been asking this question for a long time, but I think there is no silver bullet for that.
What I currently do is mock the DAO objects and keep an in-memory representation of a good collection of objects that represent interesting cases of data that could live in the database.
The main problem I see with that approach is that you're covering only the code that interacts with your DAO layer, but never testing the DAO itself, and in my experience I see that a lot of errors happen on that layer as well. I also keep a few unit tests that run against the database (for the sake of using TDD or quick testing locally), but those tests are never run on my continuous integration server, since we don't keep a database for that purpose and I think tests that run on the CI server should be self-contained.
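A hand-rolled fake along those lines can be as simple as the sketch below (Invoice and IInvoiceDao are made-up names), with the caveat above that the real database-backed DAO still needs its own tests:

using System.Collections.Generic;
using System.Linq;

// Hypothetical DAO interface and an in-memory fake that stands in for the
// real database-backed implementation during unit tests.
public class Invoice { public int Id; public string Customer; public decimal Total; }

public interface IInvoiceDao
{
    void Save(Invoice invoice);
    IEnumerable<Invoice> FindByCustomer(string customer);
}

public class InMemoryInvoiceDao : IInvoiceDao
{
    // The "interesting cases of data" live here instead of in a database.
    private readonly List<Invoice> _invoices = new List<Invoice>();

    public void Save(Invoice invoice) => _invoices.Add(invoice);

    public IEnumerable<Invoice> FindByCustomer(string customer)
        => _invoices.Where(i => i.Customer == customer);
}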
Another approach I find very interesting, but not always worthwhile since it is a little time consuming, is to create the same schema you use for production on an embedded database that runs only within the unit tests.
Even though there's no question this approach improves your coverage, there are a few drawbacks, since you have to be as close as possible to ANSI SQL to make it work both with your current DBMS and the embedded replacement.
No matter what you think is more relevant for your code, there are a few projects out there that may make it easier, like DbUnit.
Even if there are tools that allow you to mock your database in one way or another (e.g. jOOQ's MockConnection, which can be seen in this answer - disclaimer, I work for jOOQ's vendor), I would advise not to mock larger databases with complex queries.
Even if you just want to integration-test your ORM, beware that an ORM issues a very complex series of queries to your database, that may vary in
syntax
complexity
order (!)
Mocking all that to produce sensible dummy data is quite hard, unless you're actually building a little database inside your mock, which interprets the transmitted SQL statements. Having said so, use a well-known integration-test database that you can easily reset with well-known data, against which you can run your integration tests.
I use the first (running the code against a test database). The only substantive issue I see you raising with this approach is the possibility of schemas getting out of sync, which I deal with by keeping a version number in my database and making all schema changes via a script which applies the changes for each version increment.
I also make all changes (including to the database schema) against my test environment first, so it ends up being the other way around: After all tests pass, apply the schema updates to the production host. I also keep a separate pair of testing vs. application databases on my development system so that I can verify there that the db upgrade works properly before touching the real production box(es).
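A minimal sketch of that version-number scheme, assuming a one-row dbo.SchemaVersion table and upgrade scripts keyed by the version they upgrade to (all names are placeholders); the same upgrader is pointed at the test database first and the production database afterwards:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

// Rough sketch only: assumes a one-row dbo.SchemaVersion table and upgrade
// scripts keyed by the version number they upgrade to.
public static class SchemaUpgrader
{
    public static void Upgrade(string connectionString, IDictionary<int, string> scriptsByVersion)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            int current;
            using (var read = new SqlCommand("SELECT Version FROM dbo.SchemaVersion", connection))
            {
                current = (int)read.ExecuteScalar();
            }

            // Apply every script above the current version, lowest first,
            // recording each increment so the upgrade can be re-run safely.
            foreach (var step in scriptsByVersion.Where(s => s.Key > current).OrderBy(s => s.Key))
            {
                using (var apply = new SqlCommand(step.Value, connection))
                {
                    apply.ExecuteNonQuery();
                }
                using (var bump = new SqlCommand("UPDATE dbo.SchemaVersion SET Version = " + step.Key, connection))
                {
                    bump.ExecuteNonQuery();
                }
            }
        }
    }
}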
I'm using the first approach, but with a few differences that I think address the problems you mentioned:
Everything that is needed to run tests for DAOs is in source control. It includes the schema and the scripts to create the DB (Docker is very good for this). If an embedded DB can be used, I use it for speed.
The important difference from the other described approaches is that the data required for a test is not loaded from SQL scripts or XML files. Everything (except some dictionary data that is effectively constant) is created by the application using utility functions/classes.
The main purpose is to make the data used by a test:
very close to the test
explicit (using SQL files for data makes it very problematic to see which piece of data is used by which test)
isolated from unrelated changes.
It basically means that these utilities allow you to declaratively specify, in the test itself, only the things essential for the test and to omit irrelevant details.
To give some idea of what it means in practice, consider the test for some DAO which works with Comments to Posts written by Authors. In order to test CRUD operations for such DAO some data should be created in the DB. The test would look like:
@Test
public void savedCommentCanBeRead() {
    // The builder declaratively specifies the entity with only the attributes relevant
    // to this specific test; missing attributes are generated with reasonable values.
    // The factory's responsibility is to create the entity (and all entities it requires,
    // in our example the Author) in the DB.
    Post post = factory.create(PostBuilder.post());
    Comment comment = CommentBuilder.comment().forPost(post).build();

    sut.save(comment);

    Comment savedComment = sut.get(comment.getId());

    // This checks the fields that are stored directly.
    assertThat(savedComment, fieldwiseEqualTo(comment));
    // If some fields are generated during save, check them separately.
    assertThat(savedComment.getGeneratedField(), equalTo(expectedValue));
}
This has several advantages over SQL scripts or XML files with test data:
Maintaining the code is much easier (adding a mandatory column, for example, to some entity that is referenced in many tests, like Author, does not require changing lots of files/records, only a change in the builder and/or factory).
The data required by specific test is described in the test itself and not in some other file. This proximity is very important for test comprehensibility.
Rollback vs Commit
I find it more convenient that tests do commit when they are executed. Firstly, some effects (for example DEFERRED CONSTRAINTS) cannot be checked if commit never happens. Secondly, when a test fails the data can be examined in the DB as it is not reverted by the rollback.
Of course this has the downside that a test may produce broken data and this will lead to failures in other tests. To deal with this I try to isolate the tests. In the example above every test may create a new Author, and all other entities are created related to it, so collisions are rare. To deal with the remaining invariants that can potentially be broken but cannot be expressed as a DB-level constraint, I use some programmatic checks for erroneous conditions that may be run after every single test (they are run in CI but usually switched off locally for performance reasons).
For a JDBC based project (directly or indirectly, e.g. JPA, EJB, ...) you can mock up not the entire database (in that case it would be better to use a test DB on a real RDBMS), but only the JDBC level.
The advantage is the abstraction that comes with this approach, as JDBC data (result sets, update counts, warnings, ...) looks the same whatever the backend is: your prod DB, a test DB, or just some mock data provided for each test case.
With the JDBC connection mocked up for each case there is no need to manage a test DB (cleanup, only one test at a time, reloading fixtures, ...). Every mock connection is isolated and there is no need to clean up. Only the minimal required fixtures are provided in each test case to mock the JDBC exchange, which helps avoid the complexity of managing a whole test DB.
Acolyte is my framework which includes a JDBC driver and utilities for this kind of mockup: http://acolyte.eu.org .
