I am trying to test some legacy database code. I have many situations where I want to:

1. Set the database to an empty state.
2. For each group of tests:
   - set the db to an initial test state
   - run those tests
   - return the db to the empty state
In NUnit, when using a TestFixture, am I guaranteed that all of the fixture's tests will be run together, and that TestFixtureTearDown runs before the next TestFixture gets processed? I've tried this using the Visual Studio test framework and it doesn't appear to be the case.
The main reason for trying to do this, is that sometimes the process of getting the db to the state in step 2 can be expensive and I don't want to have to run this for each of the test cases.
TestFixtureTearDown will be executed once the tests within the TestFixture are completed. Coupling this with TestFixtureSetUp should provide the behavior you are seeking on a per-TestFixture basis.
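For example, a minimal sketch (TestDatabase.LoadInitialState and TestDatabase.ResetToEmpty are placeholder helpers standing in for whatever your expensive setup and cleanup actually do):

    using NUnit.Framework;

    [TestFixture]
    public class CustomerRepositoryTests
    {
        [TestFixtureSetUp]        // runs once, before any test in this fixture
        public void FixtureSetUp()
        {
            // the expensive step from the question, done once per fixture
            TestDatabase.LoadInitialState("customers-baseline");
        }

        [TestFixtureTearDown]     // runs once, after every test in this fixture has finished
        public void FixtureTearDown()
        {
            TestDatabase.ResetToEmpty();
        }

        [Test]
        public void InsertCustomer_AddsRow() { /* ... */ }

        [Test]
        public void DeleteCustomer_RemovesRow() { /* ... */ }
    }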
The Unit Testing Framework in Visual Studio does not have the same syntax as NUnit. While the test approach is the same for all xUnit-based frameworks, the syntax will vary.
I know there have been a lot of questions asked about cleaning up data once a test is complete. A lot of them suggest mocking the database to avoid using the real one and then just cleaning that up once the test is done. I am not sure that will work for what I am doing, so here it goes.
I am using SpecFlow for .NET, with Selenium for the web UI and NUnit as the test runner.
The application itself is a large multi-page web app.
The SpecFlow features are separated by page functionality, and most if not all pages have a table displaying the created records. For example, I create a new category and the page displays the added category in the table. To be able to run these tests over and over, I need to remove all the records the tests added to the database so those same categories can be recreated when the tests are rerun.
We have a skeleton setup to run after each feature that will pass in a stored procedure to delete those added records from the database. There has been a lot of push back on that idea because of the risk of deleting records for a different test client in the test environment.
So, my question is, what is the best practice for cleaning up the database?
It's best to delete the test data both before and after the test runs. This way the data will be cleaned up even if a test aborts halfway through and doesn't clean up after itself.
In SpecFlow this can be achieved by using BeforeScenario/AfterScenario or BeforeFeature/AfterFeature hooks.
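A minimal sketch of those hooks, assuming a TestDataCleaner helper that wraps whatever cleanup you settle on (for example the stored procedure mentioned in the question):

    using TechTalk.SpecFlow;

    [Binding]
    public class DatabaseCleanupHooks
    {
        [BeforeScenario]
        public void CleanBeforeScenario()
        {
            // clean first as well, in case a previous run aborted halfway through
            TestDataCleaner.DeleteTestData();
        }

        [AfterScenario]
        public void CleanAfterScenario()
        {
            TestDataCleaner.DeleteTestData();
        }
    }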
If possible, the ideal solution is to have a new database for each test; then you can just delete the entire database afterwards. This will also allow the tests to be run in parallel.
If you can't do that then you want some way to identify the test data uniquely for each test.
It's worrying that your question implies test and live data in the same database.
I am currently working on writing automated tests using Selenium WebDriver. We use MTM to run our test suites. I need some ideas as to what would be a good way to write these tests.
Currently, before running these tests, we perform a basic setup that sets the username and password required to log in to the site, sets the browser the tests should use, and a few other things.
Currently, the data required for each test is set up manually and is already present in the database. The test simply performs a keyword search, finds the necessary data, and then performs its assertions. What we would like is to find data that is already present in the database and use it instead of creating it manually. That way I can run these tests across different environments (dev, QA, production).
The site I am testing is an e-commerce website. I mostly write tests for specific features that my team develops, and many of these tests require some specific data, e.g. setting up a store that has products with certain shipping rates, particular offers, etc. I would like to find a way to automate, or largely remove, this manual process of setting up the data, so that I have the flexibility to run these tests across environments. Could you please direct me to some articles/suggestions that can help me achieve this?
If I am understanding your question correctly, you want to automate the test data setup.
You can achieve this in the following ways:
If possible, write a SQL script which inserts the desired data into the db, and execute it when your tests run. If you are using the TestNG framework, there is already an annotation for this, @BeforeTest: execute the SQL script there and it will run once before your tests, so the data is ready. (A sketch of this approach follows the list.)
Prepare the data in a spreadsheet. Create an algorithm that fills the data into the spreadsheet dynamically; from there, either read it directly and feed it to your test from @BeforeTest or, if required, insert the spreadsheet data into the db as well.
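As a rough sketch of the first approach: the answer mentions TestNG's @BeforeTest, but since the question's stack is .NET, the same idea with NUnit looks roughly like this (the script path and connection string are made up for the example):

    using System.Data.SqlClient;
    using System.IO;
    using NUnit.Framework;

    [SetUpFixture]
    public class TestDataSetup
    {
        [OneTimeSetUp]   // runs once before any test in this namespace ([TestFixtureSetUp] in older NUnit)
        public void InsertTestData()
        {
            // hypothetical seed script with the store/product/shipping-rate data the tests expect
            var script = File.ReadAllText(@"TestData\seed-store-data.sql");

            using (var connection = new SqlConnection("Server=qa-sql;Database=QaStore;Integrated Security=true"))
            {
                connection.Open();
                // note: a plain SqlCommand does not understand GO batch separators
                new SqlCommand(script, connection).ExecuteNonQuery();
            }
        }
    }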
I want to create tests for all my CRUD operations. But how do I set up a separate database for them? Is that the best way to go?
This is another question, but it is related: should I run the tests on the production server too? Sometimes things can go wrong in different environments, so I guess I should. But then I need the aforementioned separate database, right?
Any advice?
Running any kind of tests on a production server is generally a bad idea (unless it's just the production hardware that hasn't been commissioned yet).
A Unit Test does not hit the database (or any other external system). So, in order to create a unit test you need to remove the dependency on the database.
What you are calling a 'unit test' is probably an integration test. Any test that utilises an external system (such as a database, file system etc.) is an integration test.
Two common solutions to your problem are:
1. At the start of your test, restore a database backup containing known data to a separate test database, then perform your tests against it.
2. Using a 'fixed' known test database, at the start of each test start a transaction, perform the test and then roll back the transaction to leave the database in the same known state.
(No. 1 is often preferable, as the database in (2) can become 'polluted').
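One common way to implement option 2 in .NET tests is a TransactionScope that is opened before each test and disposed without being completed, which rolls the work back. A minimal sketch, assuming NUnit and a database provider that enlists in ambient transactions:

    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerCrudTests
    {
        private TransactionScope _scope;

        [SetUp]
        public void OpenTransaction()
        {
            // database work done on connections opened inside the test enlists in this transaction
            _scope = new TransactionScope();
        }

        [TearDown]
        public void RollbackTransaction()
        {
            // Complete() is never called, so disposing the scope rolls everything back
            _scope.Dispose();
        }

        [Test]
        public void InsertCustomer_CanBeReadBack()
        {
            // exercise your CRUD code here; the insert disappears when the scope is disposed
        }
    }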
I agree with Mitch. I would add that you should decide whether you want to do an integration test or a unit test (or both, but not in the same test). If, in fact, you do want to do a unit test, realize:
Your code has a "dependency" on an external database.
When unit testing you'll have to find a way to "fake" the database. You want to test a "unit" which means a single thing, not two or more things (i.e. your CRUD code AND your connection to a database AND the database itself).
Typically you'll need to refactor your code using something like dependency injection so that when unit testing you can fake things that your code depends on.
Unit testing isn't just testing your code. That's actually the easy part. The harder part is making your code testable.
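To make the dependency-injection point concrete, here is a minimal sketch; the interface and class names are invented for illustration. The CRUD logic depends on an abstraction, and the unit test supplies a hand-rolled fake instead of a real database:

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    // the code under test depends on an abstraction rather than on the database itself
    public interface ICustomerRepository
    {
        void Add(string name);
    }

    public class RegistrationService
    {
        private readonly ICustomerRepository _repository;

        public RegistrationService(ICustomerRepository repository)   // the dependency is injected
        {
            _repository = repository;
        }

        public void Register(string name)
        {
            if (string.IsNullOrEmpty(name))
                throw new ArgumentException("A name is required.", "name");
            _repository.Add(name);
        }
    }

    // in the unit test a fake stands in for the real database
    public class FakeCustomerRepository : ICustomerRepository
    {
        public readonly List<string> Added = new List<string>();
        public void Add(string name) { Added.Add(name); }
    }

    [TestFixture]
    public class RegistrationServiceTests
    {
        [Test]
        public void Register_AddsCustomerToRepository()
        {
            var fake = new FakeCustomerRepository();
            var service = new RegistrationService(fake);

            service.Register("Alice");

            Assert.AreEqual(1, fake.Added.Count);
        }
    }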
I recommend going to http://artofunittesting.com/ and watching the free videos on the right side under the heading "Unit Testing Videos". Forget the fact that he's working in .NET as it's the principles that are important.
Then watch the GoogleTechTalks by Misko Hevery where he explains why you want to do dependency injection.
Design Tech Talk Series Presents: OO Design for Testability
The Clean Code Talks -- Unit Testing
(He has more too. There is a series of six GoogleTechTalks.)
I had a similar problem today and I think I've found a good solution.
Make a copy of your database (creating a new empty database works as well).
Edit your config_test.yml to change the database name.
A sample of my test configuration (it might differ if you have multiple databases, etc.):
doctrine:
    dbal:
        dbname: test_db
Update your database to reflect the entities in your application by calling php app/console doctrine:schema:update --force --env=test (required if you just created a new db, and again every time you change your application model).
Your application should now use the test database during unit tests. NB! Be sure to make a backup of your database before messing around with the live database.
However, as clearly mentioned before, these are not unit tests anymore but integration tests.
I am a newbie to TDD. I have watched Brandon Satrom's videos and I am trying to implement tests like them: an outer loop for acceptance tests and an inner loop for unit tests. I thought the acceptance tests ran against the database too, so I expected to find examples of the [BeginScenario/AfterScenario] events in SpecFlow being used for database clean-up, since that is what they are said to be for, but none of the examples I saw do it.
Am I misunderstanding the acceptance test concept? Doesn't it cover the database too? Should we use mock objects there like we do in unit tests?
I'm using a real MS SQL Server database in my integration tests (MSTest) and in acceptance testing with the BDD tool SpecFlow, in this way: I have a dump of my test database (MDF/LDF files) stored as a template. On test initialize I copy them to a temporary location and attach them to a dedicated SQL Server instance using the sp_attach_db stored procedure (you may use an Express edition for this); then I run whatever test code I want, and on test cleanup I detach the test database and delete the MDF/LDF files. The whole copy/attach/detach/delete cycle is pretty fast (at least much faster than I expected).
If you're interested, I could put it into some more words on my blog.
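For the curious, a rough sketch of that copy/attach/detach cycle; the server name, paths and database name are made up, and the helper class is illustrative rather than the author's actual code:

    using System.Data.SqlClient;
    using System.IO;

    public static class TestDatabaseLifecycle
    {
        const string MasterConnection =
            @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";

        public static void AttachFreshCopy()
        {
            // copy the template MDF/LDF files to a temporary working location
            File.Copy(@"C:\TestDbTemplate\Template.mdf", @"C:\TestDbWork\Test.mdf", true);
            File.Copy(@"C:\TestDbTemplate\Template_log.ldf", @"C:\TestDbWork\Test_log.ldf", true);

            Execute(@"EXEC sp_attach_db @dbname = 'AcceptanceTestDb',
                          @filename1 = 'C:\TestDbWork\Test.mdf',
                          @filename2 = 'C:\TestDbWork\Test_log.ldf'");
        }

        public static void DetachAndDelete()
        {
            Execute("EXEC sp_detach_db @dbname = 'AcceptanceTestDb'");
            File.Delete(@"C:\TestDbWork\Test.mdf");
            File.Delete(@"C:\TestDbWork\Test_log.ldf");
        }

        static void Execute(string sql)
        {
            using (var connection = new SqlConnection(MasterConnection))
            {
                connection.Open();
                new SqlCommand(sql, connection).ExecuteNonQuery();
            }
        }
    }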
At last I am convinced that I must use the real database in my acceptance tests. I had to see some examples and read about it in several resources before it settled in my mind.
Now I am using acceptance tests as intended, for testing the flow of my user interfaces and database.
I wrote a happy-path scenario for my registration page to design the page flow. Then I wrote some tests for the logic kept in stored procedures in the database. The other logic lives in controllers and model classes, so for that I used unit tests. Now it makes more sense to me, until my next confusion about TDD :).
As for the clean-up process, I use the [BeginScenario/AfterScenario] events. At BeginScenario I store a DateTime.Now.Ticks value in a global variable and prepend it to the values that I send to the db. Then, when cleaning up for that scenario in the AfterScenario event, I find the records that start with that Ticks value. This lets me create unique values that don't interfere with other records, and it has worked so far.
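A minimal sketch of that technique using SpecFlow hooks; the table name and the TestDatabase helper are illustrative:

    using System;
    using TechTalk.SpecFlow;

    [Binding]
    public class UniqueTestDataHooks
    {
        // step definitions read this and prepend it to every value they send to the db
        public static string RunPrefix;

        [BeforeScenario]
        public void CreateRunPrefix()
        {
            RunPrefix = DateTime.Now.Ticks.ToString();
        }

        [AfterScenario]
        public void DeletePrefixedRecords()
        {
            // only rows created by this scenario start with the prefix,
            // so the delete cannot touch another client's records;
            // TestDatabase.ExecuteNonQuery is a hypothetical wrapper around SqlCommand
            TestDatabase.ExecuteNonQuery(
                "DELETE FROM Categories WHERE Name LIKE '" + RunPrefix + "%'");
        }
    }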
Regarding this matter, this article is very helpful.
It describes the use of MSDTC transactions, started at BeginScenario and rolled back at AfterScenario.
(SpecFlow is not used in the article, but it's the same concept.)
We are currently using this technique with success in a mid scale development project.
Is there anybody out there writing unit tests for their TSQL stored procedures, triggers, functions, etc.?
I've recently started making database restores and installs part of our automated Cruise Control build process. Now I'm thinking about taking it to the next level, where we do the install and then run through a list of stored procedure tests, etc.
I was going to just roll my own using MSBuild extensions to invoke the tests. However, I'm aware of http://www.tsqltest.org/ and http://tsqlunit.sourceforge.net/. I'm also aware that TFS has SQL testing.
I just wanted to see what people in the real world are doing and if they have any suggestions.
Thanks
The critical parts:
Make it automated and integrated with your build/test (so you have a green or red from your build)
Make it easy to add a new test
Keep your tests up-to-date
Advanced:
test failure conditions in your code
make sure your tests clean up after themselves (TSqlTest's example scripts use @beforeCount and @afterCount variables to validate the clean-up)
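If you drive stored-procedure tests from .NET code (for example NUnit invoked by the build) rather than pure T-SQL scripts, the same before/after-count idea looks roughly like this; the procedure, table and connection string are invented for the sketch:

    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class AddOrderProcedureTests
    {
        const string ConnectionString = "Server=build-sql;Database=BuildTestDb;Integrated Security=true";

        [Test]
        public void AddOrder_InsertsExactlyOneRow_AndTestCleansUpAfterItself()
        {
            int before = Count("SELECT COUNT(*) FROM Orders");

            Execute("EXEC dbo.AddOrder @CustomerId = 1, @Amount = 9.99");
            Assert.AreEqual(before + 1, Count("SELECT COUNT(*) FROM Orders"));

            // clean up, then verify the clean-up worked (the before/after-count idea)
            Execute("DELETE FROM Orders WHERE CustomerId = 1 AND Amount = 9.99");
            Assert.AreEqual(before, Count("SELECT COUNT(*) FROM Orders"));
        }

        static int Count(string sql)
        {
            using (var connection = new SqlConnection(ConnectionString))
            {
                connection.Open();
                return (int)new SqlCommand(sql, connection).ExecuteScalar();
            }
        }

        static void Execute(string sql)
        {
            using (var connection = new SqlConnection(ConnectionString))
            {
                connection.Open();
                new SqlCommand(sql, connection).ExecuteNonQuery();
            }
        }
    }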
Stored procedures. I generally include test queries in comments in the SP header, and record the correct results and query times. This still leaves it as a manual exercise, however.
Functions. Again, put SQL statements in the header with the same info.
Triggers. I avoid them for a number of reasons, one of them being that they are so hard to test and debug for so little benefit compared to putting the same logic in another tier. It's like asking how to test for Referential Integrity.
This is still a manual process, however. But since I think one should intentionally design SQL artifacts to be totally uncoupled (e.g. no SPs calling SPs, same with functions, and another strike against triggers IMHO) it's relatively less complex.
I have used the database testing that is built into Visual Studio 2008 Database Edition on a project here. It works well, but feels more like a third party bolt-on to Visual Studio than a native component. Some of the pains I felt with it are:
Because SQL code lives in the res files and a single code file can include multiple tests, it is not as easy to search for tests based on table/column names.
Because multiple tests live in the same code file, you have some annoying variable name collisions (e.g., if you have two tests in a single code file, all of the assertions for those tests have to have unique names; that means your assertion names will probably look like "testname_assertionname", which really shouldn't be necessary).
Refactoring your tests is not easy - for example, if you want to move a test from one code file to another, the easiest way is to create the test from scratch in the new file because there are bits and pieces of the test scattered about the res file and the code file.
All of that said, as I started with, it does work well. Unfortunately, we have not added these tests to our continuous integration server yet, so I can't comment on how easy it is to automate running them. We are using TFS for CI, and I am assuming that automating these tests would work very similarly to automating standard unit tests; in other words, it seems like there should be an MSTest command line that would run the tests.
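For what it's worth, MSTest does have a command line; assuming the database tests compile into an ordinary test assembly, something along these lines should work from a CI build (the file names are illustrative):

    mstest /testcontainer:MyProject.DatabaseTests.dll /resultsfile:DatabaseTestResults.trx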
Of course, this is only an option if you are licensed to run Visual Studio 2008 DB Edition (which I understand is now included in the VS 2008 Pro license).
I've done this in Java, using DbUnit.
Basically, anything you do in the database either:
returns a result set
or alters the state of the database.
The state of the database can be described as all the values in all the rows in all the tables in all the schemas of a database; the state of any subset is the state of all the data affected by some test.
So, start with a database filled with enough test data that you can perform your tests; call this the baseline. Extract a snapshot with DbUnit or the tool of your choice.
Given that your database is at the baseline, any result set is deterministic (as long as your SP is deterministic; less so if it does a "select random();").
Get the baseline result set of all your SPs, and save those as snapshots with DbUnit or whatever tool you're using.
To test operations that don't change state, just test that the result set you get is the one you initially got. To test operations that change the database, test that baseline + operation = expected change. After each test that potentially changes the db, restore it to the baseline.
Basically, the ability to restore to a baseline makes the testing possible.
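The answer above uses Java and DbUnit; the same "baseline + operation = expected change" pattern in the .NET stack used elsewhere in this thread might look roughly like this (the TestDatabase helpers, procedure name and expected counts are all hypothetical):

    using NUnit.Framework;

    [TestFixture]
    public class ArchiveOldOrdersTests
    {
        // the number of old orders present in the baseline snapshot (illustrative)
        const int BaselineOldOrderCount = 42;

        [SetUp]
        public void RestoreBaseline()
        {
            // restore the known baseline before each state-changing test,
            // e.g. by restoring a backup or re-running the seed scripts
            TestDatabase.RestoreBaseline();
        }

        [Test]
        public void ArchiveOldOrders_MovesRowsToArchiveTable()
        {
            // the operation under test
            TestDatabase.Execute("EXEC dbo.ArchiveOldOrders");

            // baseline + operation = expected change
            Assert.AreEqual(0, TestDatabase.Count("SELECT COUNT(*) FROM Orders WHERE OrderDate < '2019-01-01'"));
            Assert.AreEqual(BaselineOldOrderCount, TestDatabase.Count("SELECT COUNT(*) FROM OrdersArchive"));
        }
    }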
Have you tried using the red-gate.com API?
They have a bunch of products for comparing things in SQL Server and the API allows virtually the same functionality programmatically.
http://help.red-gate.com/help/SQLDataCompareAPIv5/4/en/GettingStartedAPI.html