Why isn't my database cluttered with artifacts from my Spring DBUnit tests?

I am currently testing my Spring repositories and decided to use a MariaDB server instance instead of an in-memory Derby instance because of some complications in a test that involved a database view.
While the tests eventually did succeed without errors or failures, I noticed that I hadn't added a @DatabaseTeardown annotation to my test case. So I decided to check my database for unwanted rows left over from the test, and found that my database was just as empty as before the test.
Could someone here explain why this is happening?

As you said, you use @Transactional on your test cases: its default behavior in tests is to enclose the entire test in a transaction and automatically roll it back, thus making sure your database is in the same state it was in before the test ran.
Check this StackOverflow answer - https://stackoverflow.com/a/9817815/1589165
This is documented in the Spring docs - http://docs.spring.io/spring/docs/current/spring-framework-reference/html/integration-testing.html#testcontext-tx
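As a minimal sketch of this behavior (the context file and the customer table below are illustrative assumptions, not taken from the question), a transactional Spring test can insert rows and still leave the database untouched afterwards:

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import org.springframework.transaction.annotation.Transactional;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration("classpath:test-context.xml") // assumed context file
    @Transactional // each test method runs inside a transaction...
    public class CustomerRepositoryTest {

        @Autowired
        private JdbcTemplate jdbcTemplate;

        @Test
        public void insertIsRolledBackAutomatically() {
            // "customer" is an illustrative table name.
            jdbcTemplate.update("INSERT INTO customer (name) VALUES (?)", "Alice");
            // The row is visible here, inside the test's transaction, but Spring
            // rolls the transaction back afterwards, so no @DatabaseTeardown is needed.
        }
    }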

Related

Test Rest Api and verify using db access, pros and cons

I'm testing a REST API end to end, and I want to verify that the application behaved as expected.
One way is to use the existing REST API where possible, but in other cases I don't have a REST API available, so I have two options:
creating an API purely for test purposes, or checking in the database that the data has changed as expected.
Which one is better, and why? Is there any harm in doing DB calls from your tests?
Since you want to test the API itself (E2E testing), I would probably go the route of creating a test DB and working on that. If you are doing this manually you can easily, albeit cumbersomely, reset the DB after each test.
You can also, and I would probably go this route, create a Docker container with a ready-to-use DB that you would update as needed along with the development of the API itself over time. You could spawn a new container/DB every time you need to run tests, which would also ease automation and integration in CI/CD pipelines.
Either of these solutions enables you to test the real API against a real DB, providing more accurate test results.
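As a rough sketch of the container approach (the answer doesn't name a tool; this uses the Testcontainers library, and the image tag is an assumption), each run can get its own throwaway DB:

    import org.testcontainers.containers.MariaDBContainer;

    public class ThrowawayDatabase {
        public static void main(String[] args) {
            // Start a fresh MariaDB container; it is discarded when closed,
            // so every test run begins from a known, clean state.
            try (MariaDBContainer<?> db = new MariaDBContainer<>("mariadb:10.6")) {
                db.start();
                // Point the API under test at this JDBC URL.
                System.out.println("Test DB ready at " + db.getJdbcUrl()
                        + " (user=" + db.getUsername() + ")");
            } // the container stops here and its data disappears with it
        }
    }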
Hope this helps!
Cheers!
In e2e tests or acceptance tests you usually only check the observable behavior, i.e. the behavior the user would be able to see - in your case the user of the API - so in theory you should not need to check the database.
If you really want to check the results in the database, I would use a project like xmysql (https://github.com/o1lab/xmysql) to get a REST API that can be used to check the DB. Reasons:
- you don't have to maintain your own code that serves API endpoints that are not needed in production. That code might end up publicly exposed and become a security issue
- you don't need to mock any database or calls; you can test what you fly, and fly what you test
- it's easy to test when the system where you run the tests and the system under test are different machines. You can test everything remotely without needing to open ports to your DB. E.g., it's easy to use Docker to set it all up, run the tests, and then deploy; the only thing the deployment does not have is xmysql
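For illustration (the orders table, port, and query syntax below are assumptions based on xmysql's generated endpoints), a test could then verify DB state over plain HTTP:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DbCheckOverHttp {
        public static void main(String[] args) throws Exception {
            // Ask xmysql's auto-generated endpoint for a hypothetical "orders" row
            // instead of opening a direct DB connection from the test machine.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:3000/api/orders?_where=(id,eq,42)"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // JSON rows to assert on
        }
    }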

Magento and Selenium RC testing with database access

Selenium is a frontend test framework, but what if a test case touches the database, e.g. a customer registration workflow?
I suppose fixturing is necessary. Any clues on how to autoload Mage::app() in Selenium RC test cases?
Might it also be a good idea to create a separate database for unit tests (magento_unit_tests), like it is done in the EcomDev_PHPUnit unit test framework?
Any other ideas are welcome.
Usually these types of tests are run against a clone of the site with a separate database. You will want to make sure that you back up your database before running tests; this way you can always restore to a known state no matter what kind of changes the tests make.
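As a sketch of that restore step (the database name, credentials, and dump file are made-up placeholders; a shell script calling mysql directly would work just as well):

    import java.io.File;
    import java.io.IOException;

    public class DatabaseSnapshot {
        // Reload the test database from a known-good dump before a test run.
        // The dump would have been created earlier with mysqldump.
        public static void restore() throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(
                    "mysql", "-u", "magento_test", "-psecret", "magento_unit_tests");
            pb.redirectInput(new File("magento_known_state.sql"));
            pb.redirectErrorStream(true);
            pb.redirectOutput(ProcessBuilder.Redirect.INHERIT);
            Process p = pb.start();
            if (p.waitFor() != 0) {
                throw new IllegalStateException("database restore failed");
            }
        }
    }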

Integration tests in Continuous Integration environment: Database and filesystem state

I'm trying to implement automated integration tests for my application. It's a very complex monster. You could say that its database and part of the filesystem are part of its state, because it saves image files in the hard drive, and references to those in the DB. The software needs all those, in a coherent state, to work properly.
Back to writing tests: to run any relevant test, I need some image files in the filesystem and certain records filled in the database. I thought of putting all of these in a separate folder called TestEnvironmentData in the repository and retrieving them from the Continuous Integration server (TeamCity), but a colleague said the repo is quite full as it is, and that I should set up a special directory, and databases, only on the Continuous Integration server. I don't like that, because then the tests' success depends on me manually maintaining stuff on the server, and restoring the initial state before every test becomes cumbersome.
What do you guys do when you need to write integration tests for an app like this? The main goal is having an automated test harness with which to approach a large-scale refactoring. There's lots of spaghetti code, and the app's current architecture is hardly unit-testable, which is why I decided on integration tests first.
Any alternative approach is welcome.
Developer repeatability is key when setting up a Continuous Integration server. I have set one up for my last three employers, and I have found that the key to success is developers being able to run the same tests on their dev systems and get the same results as the CI server.
The easiest way to do this would be to check the test artifacts into source control, but you could also use Dropbox or a network share that you copy them from in one of the build steps.
For a .NET solution I have always used MSBuild, as it most easily replicates the build process of Visual Studio and produces the same binaries/deployables. As for keeping your database in sync so that tests are repeatable: in the past I used the MbUnit test framework and its [Rollback] attribute, as it would roll back any changes to SQL Server made during the test. I believe that NUnit now has this attribute as well.
The CI server is great for finding code that breaks existing functionality but unless developers can reproduce the error on their machine they won't trust the CI server for some time.
First of all, we use Maven to build our code. It's like Ant, but it relies on convention instead of configuration for many things, as Ruby on Rails does. One of those conventions is a standardized directory structure:
(project)----src----main----(language)
          |      |      \--resources
          |      \--test----(language)
          |             \--resources
          \--target---...
Using a directory structure like this makes it easy to keep your application resources and testing resources near each other, yet still be able to build for test or build for production, or build both and just package up the application parts after running the tests.
As far as resetting the database between tests, how you do that depends greatly on the DBMS you're using. For instance, if you're using MySQL it's very easy to get the test data the way you want and do a mysqldump to a file that you then load before each test. With other DBMSs you may have to drop and recreate the tables and reload the data, or make separate tables for the starting point and use a CREATE/SELECT SQL statement to duplicate them each time.
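A sketch of that last technique via JDBC (the table names, and the assumption that a pristine customers_baseline copy exists, are mine):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TableReset {
        // Rebuild the working table from a pristine "starting point" copy
        // so each test begins from the same data.
        public static void resetCustomers(String url, String user, String pass)
                throws Exception {
            try (Connection c = DriverManager.getConnection(url, user, pass);
                 Statement s = c.createStatement()) {
                s.executeUpdate("DROP TABLE IF EXISTS customers");
                s.executeUpdate("CREATE TABLE customers AS SELECT * FROM customers_baseline");
            }
        }
    }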
There really is no reliable way around the "reset the database between tests" step.

Spring JPA: Testing DAO layer with multiple databases in a CI environment

We are working on a project where the database requirement is not clear, so we are building a database-agnostic application.
See my previous question here: Database Agnostic Application
Now I want to test my Spring application DAOs against multiple databases. I've written a number of test cases using TestNG and DBUnit.
When I run these tests in a CI environment, I want them to run against all the configured databases. I've installed the databases on the 'test server'.
e.g. I want something like this:
for (each database configured) {
    run each DAO test
}
Not sure what the best way of doing this is. Any help is welcome.
Thanks,
Adi
If you want to be database independent, you have to test against every single database system you want to support. There are very subtle differences which leak through Hibernate.
What I did in the past was make the tests retrieve their database configuration through a system property, typically by using hibernate_xxx.property instead of the default hibernate.property. Then set up CI jobs which set the property to different values, and provide one hibernate_xxx.property for every database to test against. I did this using JUnit Rules, to have the logic in one place; I don't know the appropriate tool for TestNG.
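A minimal sketch of that lookup (the property name test.db and the exact file naming scheme are my assumptions):

    import java.io.InputStream;
    import java.util.Properties;

    public class DbConfig {
        // Pick the Hibernate configuration file based on a system property,
        // e.g. run the suite with -Dtest.db=oracle to hit Oracle.
        public static Properties load() throws Exception {
            String db = System.getProperty("test.db", "h2");
            Properties props = new Properties();
            try (InputStream in = DbConfig.class
                    .getResourceAsStream("/hibernate_" + db + ".properties")) {
                if (in == null) {
                    throw new IllegalStateException("no hibernate_" + db + ".properties");
                }
                props.load(in);
            }
            return props;
        }
    }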
I'm not too fond of the loop construct you are hinting at, because it might make it difficult to run a test suite against a single specific database.
I'm also not too fond of DBUnit, because it seems to make maintaining test data rather painful. In most cases I prefer a handcrafted DSL. Have a look at some articles I wrote about it:
http://blog.schauderhaft.de/2011/03/13/testing-databases-with-junit-and-hibernate-part-1-one-to-rule-them/
http://blog.schauderhaft.de/2011/03/20/testing-databases-with-junit-and-hibernate-part-2-the-mother-of-all-things/
http://blog.schauderhaft.de/2011/03/27/testing-databases-with-junit-and-hibernate-part-3-cleaning-up-and-further-ideas/
If you're building a database-agnostic application and not using any features inherent to a specific database vendor, then the scope of your test cases should be testing the setup, manipulation, and retrieval of data through the DAO objects, and less the actual database backend. Hibernate 3.5 has dialects available for both Oracle 11g and DB2, so if you were writing test cases that tested the integration of the database-agnostic application with a specific database vendor, then really what you would be doing is testing that the Hibernate dialects do what they say they do (which I'm sure is covered by test cases in the Hibernate project itself).
In other words, in your case I would think that the testing should focus more on the DAO retrieving the data that you expect it to retrieve after you've set that data up, and in-memory databases are fine for that.
Now all that said, both DB2 and Oracle have very good documentation related to setup. Indeed, both of them have "wizards" to do that. If you still think that it's prudent to test adding data to the database and retrieving it from the physical, non-in-memory database, then I would recommend setting up a "test database" environment and pointing your datasource to that during your continuous integration tests. If you're using Hudson or Jenkins for CI, you can set it up to run a script after the build completes that will truncate the database tables so that the next round of tests work from a blank slate.
EDIT:
I just saw the updates that you posted to your question, so let me address them. Since you already have the databases set up and configured, what you really want to do is dynamically select which database to use. One way to do this would be to set up your datasource using system properties that can be read from a properties file, and run your tests in a "DB2-test" environment and an "Oracle-test" environment. Using this method, you'll have to set up the datasource programmatically and have it read system environment variables to determine which database it connects to. This would essentially require changing your CI script to run the DB2-test environment first and the Oracle-test environment after that; your test suites will run twice.
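A sketch of that programmatic datasource (the TEST_DB_* variable names are illustrative assumptions):

    import org.springframework.jdbc.datasource.DriverManagerDataSource;

    public class TestDataSourceFactory {
        // Build the test DataSource from environment variables so the
        // "DB2-test" and "Oracle-test" CI runs can point at different databases.
        public static DriverManagerDataSource fromEnvironment() {
            DriverManagerDataSource ds = new DriverManagerDataSource();
            ds.setDriverClassName(System.getenv("TEST_DB_DRIVER")); // e.g. com.ibm.db2.jcc.DB2Driver
            ds.setUrl(System.getenv("TEST_DB_URL"));
            ds.setUsername(System.getenv("TEST_DB_USER"));
            ds.setPassword(System.getenv("TEST_DB_PASSWORD"));
            return ds;
        }
    }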
Hope this helps!
JUnit 4.9 has a new feature: TestRule.
You should be able to write a rule that repeats a test for different databases.
There is this Stack Overflow question: How to Re-run failed JUnit tests immediately?
It is a slightly different question, but the solution should be the same technique.
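A sketch of such a rule (the test.db property hand-off to the test's own setup code is an assumption):

    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    public class MultiDatabaseRule implements TestRule {
        private final String[] databases;

        public MultiDatabaseRule(String... databases) {
            this.databases = databases;
        }

        @Override
        public Statement apply(final Statement base, Description description) {
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    // Run the same test body once per configured database.
                    for (String db : databases) {
                        System.setProperty("test.db", db);
                        base.evaluate();
                    }
                }
            };
        }
    }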

testing django app with legacy database - how to avoid recreating db per test?

I'm building a web application using the Django 1.1 framework with an imposed database schema and data (in fact, the DB already exists - PostgreSQL). I have already written the models; now I want to do some unit testing.
The problem: the test runner destroys and reconstructs the database (using information from the models) after every test method, and that's undesirable. I'd like to preserve at least the schema the whole time; cleaning the data is acceptable. Is there a good way to obtain this behaviour?
(one solution is to use the pure unittest module and set up/clean everything manually, but that's unsatisfactory)
After some re-googling (my first attempt was several weeks ago, and I just couldn't find this because it only appeared a month ago) I've found this topic, which led me to django-test-utils; its persistent database test runner (e.g. python manage.py quicktest) solves my case (in addition, it seems to be a good app in general). I also had to point the TEST_DATABASE_NAME option in settings.py at my main database to fit my needs.
This doesn't give you the behavior you're asking for, just a potential alternate behavior:
While I deploy and run integration tests against my actual legacy database, I run unit tests against a SQLite database. Switching the DB engine is a small configuration change. It ends up being faster and avoids clobbering any other work I'm doing.
