How to safely unit-test write operations in Symfony 2?

I want to create tests for all my CRUD operations. But how do I set up a separate database for them? Is that the best way to go?
A related question: should I run the tests on the production server too? Sometimes things go wrong in different environments, so I guess I should. But then I need the separate database mentioned above, right?
Any advice?

Running any kind of tests on a production server is generally a bad idea (unless it's just the production hardware that hasn't been commissioned yet).
A Unit Test does not hit the database (or any other external system). So, in order to create a unit test you need to remove the dependency on the database.
What you are calling a 'unit test' is probably an integration test. Any test that utilises an external system (such as a database, file system etc.) is an integration test.
Two common solutions to your problem are:
1. At the start of your test, restore a database backup containing known data to a separate test database, then perform your tests against it.
2. Using a 'fixed' known test database, start a transaction at the start of each test, perform the test, and then roll back the transaction to leave the database in the same known state (see the sketch below).
(No. 1 is often preferable, as the database in No. 2 can become 'polluted'.)
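A minimal sketch of No. 2, shown here in C# with NUnit and ADO.NET (the same pattern applies in any stack, Symfony/Doctrine included); the connection string and class names are assumptions:

using System.Data.SqlClient;
using NUnit.Framework;

public abstract class TransactionalTestBase
{
    // Hypothetical test database; adjust to your environment.
    const string TestDb = "Server=localhost;Database=test_db;Integrated Security=true";

    protected SqlConnection Connection;
    protected SqlTransaction Transaction;

    [SetUp]
    public void BeginTransaction()
    {
        Connection = new SqlConnection(TestDb);
        Connection.Open();
        // Every statement the test runs must go through this transaction.
        Transaction = Connection.BeginTransaction();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        Transaction.Rollback();   // restores the known state, whatever the test did
        Connection.Dispose();
    }
}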

I agree with Mitch. I would add that you should decide whether you want to do an integration test or a unit test (or both, but not in the same test). If, in fact, you do want to do a unit test, realize:
Your code has a "dependency" on an external database.
When unit testing you'll have to find a way to "fake" the database. You want to test a "unit" which means a single thing, not two or more things (i.e. your CRUD code AND your connection to a database AND the database itself).
Typically you'll need to refactor your code using something like dependency injection so that when unit testing you can fake things that your code depends on.
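For example, here is a hedged C# sketch of that kind of refactoring (IUserStore, RegistrationService and the in-memory fake are hypothetical names, not from the question):

using System.Collections.Generic;

// The dependency is expressed as an interface...
public interface IUserStore
{
    bool Exists(string userName);
    void Save(string userName);
}

// ...so the CRUD logic can be constructed with either a real
// database-backed implementation or a fake.
public class RegistrationService
{
    private readonly IUserStore _store;
    public RegistrationService(IUserStore store) { _store = store; }

    public bool Register(string userName)
    {
        if (_store.Exists(userName)) return false;
        _store.Save(userName);
        return true;
    }
}

// The unit test injects an in-memory fake; no database is touched.
public class InMemoryUserStore : IUserStore
{
    private readonly HashSet<string> _users = new HashSet<string>();
    public bool Exists(string userName) { return _users.Contains(userName); }
    public void Save(string userName) { _users.Add(userName); }
}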
Unit testing isn't just testing your code. That's actually the easy part. The harder part is making your code testable.
I recommend going to http://artofunittesting.com/ and watching the free videos on the right side under the heading "Unit Testing Videos". Forget the fact that he's working in .NET as it's the principles that are important.
Then watch the GoogleTechTalks by Misko Hevery where he explains why you want to do dependency injection.
Design Tech Talk Series Presents: OO Design for Testability
The Clean Code Talks -- Unit Testing
(He has more too. There is a series of six GoogleTechTalks.)

I had a similar problem today and I think I've found a good solution.
Make a copy of your database (creating a new empty database works as well).
Edit your config_test.yml to change the database name.
A sample of my test configuration (it might differ if you have multiple databases, etc.):
doctrine:
    dbal:
        dbname: test_db
Update your database to reflect the entities in your application by calling php app/console doctrine:schema:update --force --env=test (required if you have just created a new database, and also every time you change your application model).
Your application should now use the test database during unit tests. NB! Be sure to make a backup of your database before messing around with the live database.
However, as clearly mentioned before, these are no longer unit tests but integration tests.

Related

How to run NUnit OneTimeSetUp multiple times for different databases

We have an integration test suite that contains numerous tests executing repository classes.
The objective is to have a [OneTimeSetup] method in the BaseTestFixture that will create/populate each target database (Postgres/SQL Server) only once before all tests, and tear down after all tests.
Got this error:
nunit OneTimeSetUp: SetUp and TearDown methods must not have parameters
How can we run the entire test suite against Postgres, SQL server and both without duplicating tests?
Thanks.
Interesting question. I can't really think of an 'out-of-the-box' solution myself.
One simple workaround would be to do two separate console runs and use the --params flag. That way, you could run a different setup for each database type, dependent on the TestParameters value passed in.
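A hedged sketch of that workaround (NUnit 3; the parameter name db and the console invocations are assumptions):

nunit3-console Tests.dll --params:db=SqlServer
nunit3-console Tests.dll --params:db=Postgres

using NUnit.Framework;

[SetUpFixture]
public class DatabaseSetup
{
    [OneTimeSetUp]
    public void CreateAndPopulate()
    {
        // Read the value passed via --params; default to SQL Server.
        string db = TestContext.Parameters.Get("db", "SqlServer");
        if (db == "Postgres")
        {
            // create/populate the Postgres test database here
        }
        else
        {
            // create/populate the SQL Server test database here
        }
    }

    [OneTimeTearDown]
    public void TearDown()
    {
        // drop whichever database was created above
    }
}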
A nicer alternative may be to implement a custom attribute, which would allow you to parameterize SetUpFixtures. (There's an existing discussion on adding this feature here, although it hasn't garnered much interest since 2016.) I think it would be reasonably possible to do this as a custom attribute without modifying NUnit, however.
Take a look at how SetUpFixtureAttribute is implemented. I would think you'd want to create your own IFixtureBuilder attribute which works in a similar way, except that it can be parameterized, and returns two suites with a different setup for each database. I think that would work, although it's not functionality I'm totally familiar with myself.
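Failing that, NUnit's built-in parameterized [TestFixture] is a plainer alternative worth naming: the fixture below runs all of its tests once per database. The caveat is that the setup then runs once per fixture class rather than once for the whole run:

using NUnit.Framework;

[TestFixture("SqlServer")]
[TestFixture("Postgres")]
public class RepositoryTests
{
    private readonly string _database;

    public RepositoryTests(string database)
    {
        _database = database;
    }

    [OneTimeSetUp]
    public void CreateDatabase()
    {
        // create/populate the database identified by _database
    }

    [Test]
    public void SavesAndLoads()
    {
        // runs twice, once against each database
    }
}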

Database cleanup in acceptance tests with SpecFlow

I am a newbie to TDD. I have watched Brandon Satrom's videos and am trying to implement tests like them: an outer loop for acceptance tests and an inner loop for unit tests. I thought acceptance tests ran against the database too, so I expected to find examples of the [BeginScenario/AfterScenario] events in SpecFlow being used for database cleanup, which is what they are said to be for. But none of the examples I have seen do it.
Am I misunderstanding the acceptance test concept? Doesn't it cover the database too? Should we use mock objects there, as we did in the unit tests?
I'm using a real MS SQL Server database in my integration tests (MSTest) and in acceptance tests with the BDD tool SpecFlow, in this way: I have a dump of my test database (MDF/LDF files) stored as a template. On test initialize I copy them to a temporary location and attach them to a dedicated SQL Server using the sp_attach_db stored procedure (you may use an Express edition for this); then I run whatever test code I want, and on test cleanup I detach the test database and delete the MDF/LDF files. The whole copy/attach/detach/delete cycle is pretty fast (at least much faster than I expected).
If you're interested, I could put it into some more words on my blog.
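A rough C# sketch of that cycle (paths, names and the connection string are illustrative assumptions; sp_attach_db/sp_detach_db are the procedures mentioned above):

using System.Data.SqlClient;
using System.IO;

public static class TestDatabase
{
    const string Master = @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";

    public static void Attach(string templateDir, string workDir)
    {
        // Copy the template MDF/LDF to a temporary location...
        File.Copy(Path.Combine(templateDir, "Template.mdf"), Path.Combine(workDir, "Test.mdf"), true);
        File.Copy(Path.Combine(templateDir, "Template.ldf"), Path.Combine(workDir, "Test.ldf"), true);
        // ...and attach the copy to the dedicated SQL Server instance.
        Exec("EXEC sp_attach_db 'TestDb', '" + Path.Combine(workDir, "Test.mdf")
             + "', '" + Path.Combine(workDir, "Test.ldf") + "'");
    }

    public static void Detach(string workDir)
    {
        Exec("EXEC sp_detach_db 'TestDb'");
        File.Delete(Path.Combine(workDir, "Test.mdf"));
        File.Delete(Path.Combine(workDir, "Test.ldf"));
    }

    static void Exec(string sql)
    {
        using (var conn = new SqlConnection(Master))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}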
At last I am convinced that I must use the real database in my acceptance tests. I had to see some examples and read about it in several resources before it settled in my mind.
Now I am using acceptance tests as intended: for testing the flow of my user interfaces and database.
I wrote a happy-path scenario for my registration page to test the page flow, then I wrote some tests for the logic kept in my database stored procedures. The rest of the logic lives in controllers and model classes, so for that I used unit tests. It makes more sense to me now, until my next confusion about TDD :).
As for the cleanup process, I use the [BeginScenario/AfterScenario] events. At BeginScenario I store a DateTime.Now.Ticks value in a global variable and prepend it to the values I send to the database. At AfterScenario I find and delete the records that start with that Ticks value. This gives me unique values that don't interfere with other records, and it has worked so far (a sketch follows below).
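A hedged sketch of that cleanup scheme as SpecFlow hooks (note SpecFlow spells the attributes [BeforeScenario]/[AfterScenario]; the table, column and connection string are made up):

using System;
using System.Data.SqlClient;
using TechTalk.SpecFlow;

[Binding]
public class UniquePrefixHooks
{
    const string TestDb = "Server=localhost;Database=test_db;Integrated Security=true";

    // Shared with the test code, which prepends it to every value it inserts.
    public static string Prefix;

    [BeforeScenario]
    public static void CreatePrefix()
    {
        Prefix = DateTime.Now.Ticks.ToString();
    }

    [AfterScenario]
    public static void DeletePrefixedRows()
    {
        using (var conn = new SqlConnection(TestDb))
        using (var cmd = new SqlCommand(
            "DELETE FROM Users WHERE UserName LIKE @prefix + '%'", conn))
        {
            cmd.Parameters.AddWithValue("@prefix", Prefix);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}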
Regarding this matter, this article is very helpful.
It describes the use of transactions with MSDTC, started at BeginScenario and rolled back at AfterScenario.
(SpecFlow is not used in the article, but it's the same concept.)
We are currently using this technique with success in a mid-scale development project.
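A minimal sketch of that pattern with SpecFlow hooks and System.Transactions (an assumption of how the article's idea translates; data access inside the scenario must enlist in the ambient transaction):

using System.Transactions;
using TechTalk.SpecFlow;

[Binding]
public class TransactionHooks
{
    private static TransactionScope _scope;

    [BeforeScenario]
    public static void BeginTransaction()
    {
        // Everything the scenario does against the database happens
        // inside this ambient transaction.
        _scope = new TransactionScope();
    }

    [AfterScenario]
    public static void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls everything back.
        _scope.Dispose();
    }
}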

Unit Testing TSQL

Is there anybody out there writing unit tests for their TSQL stored procedures, triggers, functions, etc.?
I've recently started making database restores and installs part of our automated Cruise Control build process. Now I'm thinking about taking it to the next level, where we do the install and then run through a list of stored procedure tests, etc.
I was going to roll my own using MSBuild Extensions to invoke the tests. However, I'm aware of http://www.tsqltest.org/ and http://tsqlunit.sourceforge.net/. I'm also aware that TFS has SQL testing.
I just wanted to see what people in the real world are doing and if they have any suggestions.
Thanks
The critical parts:
Make it automated and integrated with your build/test (so you get a green or red from your build)
Make it easy to add a new test
Keep your tests up-to-date
Advanced:
Test failure conditions in your code
Make sure your tests clean up after themselves (TSqlTest's example scripts use #beforeCount and #afterCount variables to validate the clean-up)
Stored procedures. I generally include test queries in comments in the SP header, and record correct results and query times. This still leaves it as a manual exercise, however.
Functions. Again, put SQL statements in the header with the same info.
Triggers. I avoid them for a number of reasons, one of them being that they are so hard to test and debug for so little benefit compared to putting the same logic in another tier. It's like asking how to test for Referential Integrity.
This is still a manual process, however. But since I think one should intentionally design SQL artifacts to be totally uncoupled (e.g. no SPs calling SPs, same with functions, and another strike against triggers IMHO) it's relatively less complex.
I have used the database testing that is built into Visual Studio 2008 Database Edition on a project here. It works well, but feels more like a third party bolt-on to Visual Studio than a native component. Some of the pains I felt with it are:
Because SQL code lives in the res files and a single code file can include multiple tests, it is not as easy to search for tests based on table/column names.
Because multiple tests live in the same code files, you have some annoying variable name collisions (e.g., if you have two tests in a single code file, all of the assertions for those tests have to have unique names; that means your assertion names will probably look like "testname_assertionname", which really shouldn't be necessary).
Refactoring your tests is not easy - for example, if you want to move a test from one code file to another, the easiest way is to create the test from scratch in the new file because there are bits and pieces of the test scattered about the res file and the code file.
All of that said, as I said at the start, it does work well. Unfortunately, we have not added these tests to our continuous integration server yet, so I can't comment on how easy it is to automate running them. We are using TFS for CI, and I am assuming that automating these tests would work much like automating standard unit tests; in other words, it seems like there should be an MSTest command line that would run them.
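For what it's worth, the VS2008-era command line would look roughly like this (the assembly name is an assumption):

MSTest.exe /testcontainer:DatabaseTests.dll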
Of course, this is only an option if you are licensed to run Visual Studio 2008 DB Edition (which I understand is now included in the VS 2008 Pro license).
I've done this in Java, using DbUnit.
Basically, anything you do in the database either returns a result set or alters the state of the database.
The state of the database can be described as all the values in all the rows in all the tables in all the schemas of a database; the state of any subset is the state of all the data affected by some test.
So, start with a database filled with enough test data that you can perform your tests; call this the baseline. Extract a snapshot with DbUnit or the tool of your choice.
Given that your database is at the baseline, any result set is deterministic (as long as your SP is deterministic; less so if it does a "select random();").
Get the baseline result set of all your SPs and save those as snapshots with DbUnit or whatever tool you're using.
To test operations that don't change state, just test that the result set you get is the one you initially got. To test operations that change the database, test that baseline + operation = expected change. After each test that potentially changes the db, restore it to the baseline.
Basically, the ability to restore to a baseline makes the testing possible.
Have you tried using the red-gate.com API?
They have a bunch of products for comparing things in SQL Server and the API allows virtually the same functionality programmatically.
http://help.red-gate.com/help/SQLDataCompareAPIv5/4/en/GettingStartedAPI.html

How to Test Web Code?

Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state?
Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time).
Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set?
I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience.
Are there good articles out there that discuss this issue of web-based development in general?
I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data which represents your test data, and thus each test will no longer depend on the database and some existing state. This way, each test is self contained and will not break during further database usage.
Update: A quick google search showed a DB unit extension for PHPUnit.
If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc.
I guess it depends what database you're using, but Red Gate (www.red-gate.com) make a tool called SQL Data Generator. This can be configured to fill your database with sensible looking test data. You can also tell it to always use the same seed in its random number generator so your 'random' data is the same every time.
You can then write your unit tests to make use of this reliable, repeatable data.
As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!
We use an in-memory database (hsql: http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing db, with the added bonus that they run lightning-quick.
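A rough C# analogue of the same idea (hsqldb and Hibernate are Java; Microsoft.Data.Sqlite's in-memory mode stands in here, as an assumption rather than the poster's setup):

using Microsoft.Data.Sqlite;
using NUnit.Framework;

public class InMemoryDatabaseTests
{
    [Test]
    public void RunsAgainstAnInMemoryDatabase()
    {
        using (var connection = new SqliteConnection("Data Source=:memory:"))
        {
            connection.Open();
            var create = connection.CreateCommand();
            create.CommandText = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)";
            create.ExecuteNonQuery();
            // Point the code under test at this connection; the entire
            // database vanishes when the connection is disposed.
        }
    }
}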
I have the exact same problem at work, and I find that the best approach is to have a PHP script that re-creates the database, and then a separate script that throws crazy data at it to see if it breaks. I have never used any unit testing framework or the like, so I cannot say whether that works, sorry.
If you can setup the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with.
Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database.
It's definitely worth setting up either a test version of the database - or make your test scripts populate the database with known data as part of the tests.
You could try http://selenium.openqa.org/; it is more for GUI testing than for data-layer testing, but it does record your actions, which can then be played back to automate tests across different platforms.
Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):
I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
I would propose to use three databases. One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed).
A way to test database code is:
Insert a few rows (using SQL) to initialize state
Run the function that you want to test
Compare expected with actual results. Here you could use your normal unit testing framework
Clean up the rows that were changed (so the next run won't see the previous run)
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table (see the sketch below).
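A hedged C#/NUnit sketch of those four steps (the orders table, the query and the connection string are all made-up examples):

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class OrderQueryTests
{
    const string TestDb = "Server=localhost;Database=test_db;Integrated Security=true";

    [Test]
    public void CountsOpenOrders()
    {
        Exec("INSERT INTO orders (id, status) VALUES (1, 'open')"); // 1. initialize state
        try
        {
            int count = CountOpenOrders();                          // 2. run the function under test
            Assert.AreEqual(1, count);                              // 3. compare expected with actual
        }
        finally
        {
            Exec("DELETE FROM orders");                             // 4. clean up (testing database only!)
        }
    }

    static int CountOpenOrders()
    {
        using (var conn = new SqlConnection(TestDb))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM orders WHERE status = 'open'", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }

    static void Exec(string sql)
    {
        using (var conn = new SqlConnection(TestDb))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}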
In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use some CRUD API that is used in the product, to create data as similar to production as possible...

How do I unit test persistence?

As a novice in practicing test-driven development, I often end up in a quandary as to how to unit test persistence to a database.
I know that technically this would be an integration test (not a unit test), but I want to find out the best strategies for the following:
Testing queries.
Testing inserts: how do I know that the insert has gone wrong if it fails? I can test it by inserting and then querying, but how can I know that the query wasn't wrong?
Testing updates and deletes -- same as testing inserts
What are the best practices for doing these?
Regarding testing SQL: I am aware that this could be done, but if I use an O/R Mapper like NHibernate, it attaches some naming warts in the aliases used for the output queries, and as that is somewhat unpredictable I'm not sure I could test for that.
Should I just abandon everything and simply trust NHibernate? I'm not sure that's prudent.
Look into DbUnit. It is a Java library, but there must be a C# equivalent. It lets you prepare the database with a known set of data, and then you can use DbUnit to verify what is in the database afterwards. It can run against many database systems, so you can use your actual database setup or something else, like HSQL in Java (a Java database implementation with an in-memory option).
If you want to test that your code is using the database properly (which you most likely should be doing), then this is the way to go to isolate each test and ensure the database has expected data prepared.
As Mike Stone said, DbUnit is great for getting the database into a known state before running your tests. When your tests are finished, DbUnit can put the database back into the state it was in before you ran the tests.
DbUnit (Java)
DbUnit.NET
You do the unit testing by mocking out the database connection. This way, you can build scenarios where specific queries in the flow of a method call succeed or fail. I usually build my mock expectations so that the actual query text is ignored, because I really want to test the fault tolerance of the method and how it handles itself -- the specifics of the SQL are irrelevant to that end.
Obviously this means your test won't actually verify that the method works, because the SQL may be wrong. This is where integration tests kick in. For that, I expect someone else will have a more thorough answer, as I'm just beginning to get to grips with those myself.
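A hedged sketch of that approach with Moq (IDatabase, WidgetRepository and Widget are hypothetical stand-ins; It.IsAny<string>() is what makes the mock ignore the query text):

using System;
using Moq;
using NUnit.Framework;

public interface IDatabase
{
    int Execute(string sql);
}

public class Widget { }

public class WidgetRepository
{
    private readonly IDatabase _db;
    public WidgetRepository(IDatabase db) { _db = db; }

    // Reports failure instead of propagating a database exception.
    public bool Save(Widget widget)
    {
        try
        {
            _db.Execute("INSERT INTO widgets ...");
            return true;
        }
        catch (InvalidOperationException)
        {
            return false;
        }
    }
}

public class WidgetRepositoryTests
{
    [Test]
    public void Save_ReturnsFalse_WhenTheQueryFails()
    {
        var db = new Mock<IDatabase>();
        db.Setup(d => d.Execute(It.IsAny<string>()))   // the SQL text is irrelevant here
          .Throws(new InvalidOperationException("simulated database failure"));

        var repository = new WidgetRepository(db.Object);

        Assert.IsFalse(repository.Save(new Widget()));
    }
}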
I have written a post here concerning unit testing the data layer which covers this exact problem. Apologies for the (shameful) plug, but the article is too long to post here.
I hope that helps you - it has worked very well for me over the last 6 months on 3 active projects.
Regards,
Rob G
The problem I experienced when unit testing persistence, especially without an ORM and thus mocking your database (connection), is that you don't really know whether your queries succeed. It could be that your queries are specifically designed for a particular database version and only succeed with that version. You'll never find that out if you mock your database. So in my opinion, unit testing persistence is only of limited use. You should always add tests that run against the targeted database.
For NHibernate, I'd definitely advocate just mocking out the NHibernate API for unit tests -- trust the library to do the right thing. If you want to ensure that the data actually goes to the DB, do an integration test.
For JDBC-based projects, my Acolyte framework can be used: http://acolyte.eu.org . It lets you mock up the data access you want to test, benefiting from the JDBC abstraction, without having to manage a specific test DB.
I would also mock the database and check that the queries are what you expected. There is the risk that the test checks the wrong SQL, but this would be detected in the integration tests.
I usually create a repository, use that to save my entity and retrieve a fresh one. Then I assert that the retrieved is equal to the saved.
Technically, unit tests of persistence are not unit tests; they are integration tests.
With C# using mbUnit, you simply use the SqlRestoreInfo and RollBack attributes:
[TestFixture]
[SqlRestoreInfo(<connectionString>, <name>, <backupLocation>)]
public class Tests
{
    [SetUp]
    public void Setup()
    {
    }

    [Test]
    [RollBack]
    public void TEST()
    {
        // Test an insert here; the RollBack attribute undoes it afterwards.
    }
}
The same can be done in NUnit, except the attribute names differ slightly.
As for checking whether your query is successful, you normally need to follow it with a second query to see whether the database has been changed as you expected.
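For example, a hedged fragment that could follow the insert inside TEST() above (the widgets table and connectionString are illustrative):

// Verify the insert with a second query before the RollBack attribute
// undoes it; connectionString points at the restored test database.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT COUNT(*) FROM widgets WHERE name = 'inserted-by-test'", conn))
{
    conn.Open();
    Assert.AreEqual(1, (int)cmd.ExecuteScalar());
}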
