What is a good approach to writing automated tests that depend on data that needs to be set up before executing the test - selenium-webdriver

I am currently working on writing automated tests using Selenium WebDriver. We use MTM to run our test suites. I need some ideas as to what would be a good way to write these tests.
Currently, before running these tests, we perform a basic setup that sets the username and password required to log in to the site, sets the browser the tests should use, and a few other things.
Currently the data required for each test is set up manually and is already present in the database. The test simply performs a keyword search, finds the data it needs, and then performs its assertions. What we would like to achieve is to find such data that is already present in the database and use it instead of creating it manually. That way I can run these tests across different environments (dev, QA, production).
The site I am testing is an e-commerce website. I mostly write tests for specific features that my team develops, and many of these tests require specific data, e.g. setting up a store that has products with certain shipping rates, particular offers, etc. I would like to find a way to automate, or almost entirely remove, this manual process of setting up the data, so that I have the flexibility to run these tests across environments. Could you please direct me to some articles/suggestions that can help me achieve this?

If I am understanding your question correctly, you want to automate the test data setup.
You can achieve this in the following ways:
If possible, write a SQL script that inserts the desired data into the database, and execute it when your tests run. If you are using the TestNG framework, there are annotations available for exactly this, such as @BeforeTest. Execute the SQL script from a method carrying that annotation; it will run once before your tests and the data will be ready. A rough sketch of this is shown below.
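A minimal sketch of that idea, assuming a JDBC-accessible database and a seed script at src/test/resources/seed.sql (the URL, credentials, and path are all hypothetical):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.testng.annotations.BeforeTest;

public class SeedDataSetup {

    // Hypothetical connection details; replace with your environment's values.
    private static final String JDBC_URL = "jdbc:mysql://localhost:3306/testdb";

    @BeforeTest
    public void seedDatabase() throws Exception {
        String script = Files.readString(Paths.get("src/test/resources/seed.sql"));
        try (Connection conn = DriverManager.getConnection(JDBC_URL, "user", "password");
             Statement stmt = conn.createStatement()) {
            // Run each statement of the seed script; assumes ';'-separated statements.
            for (String sql : script.split(";")) {
                if (!sql.isBlank()) {
                    stmt.execute(sql);
                }
            }
        }
    }
}
```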
Prepare the data in a spreadsheet. Create an algorithm to fill the spreadsheet dynamically; from there you can either read the data directly and feed it to your tests (e.g. via @BeforeTest or a data provider), or, if required, insert the spreadsheet data into the database as well.
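For the read-it-directly route, a hedged sketch using a TestNG @DataProvider over a CSV file (the file name and its two-column layout are made up):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class CsvDrivenTest {

    // Hypothetical CSV: one "searchTerm,expectedResult" pair per line.
    @DataProvider(name = "csvRows")
    public Object[][] csvRows() throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("src/test/resources/testdata.csv"));
        return lines.stream()
                    .map(line -> (Object[]) line.split(","))
                    .toArray(Object[][]::new);
    }

    @Test(dataProvider = "csvRows")
    public void searchTest(String searchTerm, String expectedResult) {
        // Drive Selenium with searchTerm and assert against expectedResult here.
    }
}
```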

Related

Cleaning up added records to the database after UI acceptance tests

I know there have been a lot of questions asked about cleaning up data once a test is complete. A lot of them suggest mocking the database to avoid touching the real one, and then cleaning the mock up when the test is done. I am not sure that will work with what I am doing, so here it goes.
I am using SpecFlow for .NET, with Selenium for the web UI and NUnit as the test runner.
The application itself is a large multi-page web app.
The SpecFlow features are separated by page functionality, and most if not all pages have a table displaying the created records. E.g. I create a new category and the page displays the added category in the table. To be able to run these tests over and over, I need to remove all records that the tests created from the database, so those same categories can be recreated when the tests are rerun.
We have a skeleton setup to run after each feature that will pass in a stored procedure to delete those added records from the database. There has been a lot of push back on that idea because of the risk of deleting records for a different test client in the test environment.
So, my question is, what is the best practice for cleaning up the database?
It's best to delete the test data both before and after the test runs. This way the data gets cleaned up even if a test aborts halfway through and never reaches its own cleanup.
In SpecFlow this can be achieved using [BeforeScenario]/[AfterScenario]/[BeforeFeature]/[AfterFeature] hooks.
If possible, the ideal solution is to have a new database for each test; then you can just delete the entire database afterwards. This will also allow the tests to run in parallel.
If you can't do that, then you want some way to identify the test data uniquely for each test, as in the sketch below.
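SpecFlow hooks are C#, but the before-and-after cleanup pattern is framework-agnostic. A rough Java/JUnit sketch of it, assuming JDBC access and a made-up naming convention (every test-created row carries a fixed uitest_ prefix), so only test rows are ever deleted:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CategoryCleanupTest {

    // Fixed marker carried by every record the tests create; rows belonging
    // to other clients in the shared test environment are never touched.
    private static final String TEST_PREFIX = "uitest_";
    private Connection conn;

    @Before
    public void setUp() throws Exception {
        conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=testdb", "user", "password");
        deleteTestRecords(); // remove leftovers from a previously aborted run
    }

    @After
    public void tearDown() throws Exception {
        deleteTestRecords(); // normal cleanup
        conn.close();
    }

    private void deleteTestRecords() throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM categories WHERE name LIKE ?")) {
            ps.setString(1, TEST_PREFIX + "%");
            ps.executeUpdate();
        }
    }

    @Test
    public void createsCategory() {
        // Create a category named TEST_PREFIX + "books" through the UI,
        // then assert it shows up in the page's table.
    }
}
```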
It's worrying, though, that your question implies test and live data in the same database.

Data Driven Testing for a database web application

I have a database web application and I need to see all the possible inputs and all the possible outputs of this application (using Selenium or JMeter).
I tried to understand how the "Input Coverage Method" works in software testing tools, but it seems too tough. If I'm not wrong, the kind of testing I'm trying to do is data-driven testing (i.e. figuring out all the possible inputs and outputs of a database web application).
Would you please suggest a tool (I prefer open source) that can do this, or a method to create such a test?
Do I have to create it on my own?
First of all, you need to create equivalence classes that cover most of your input dataset.
After that you can simply run your Selenium/JMeter tests with the test data you created.
You just need to create a single test script and populate the test data in Excel or CSV sheets to perform data-driven testing.
Have a look at jBehave.
It's a BDD tool that can drive Selenium and supports sets of input test data.
I've used it and it works well. You'll need patience to get through the glue code, but once you're out the other side you'll be glad you persevered.
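To give a feel for that glue code, a minimal hypothetical jBehave steps class (the story text it matches is shown in the comment; the scenario, names, and values are all invented):

```java
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

// Steps matching a story file like (story text, not Java):
//   Given a store with a product priced 20.00 USD
//   When the customer searches for the product
//   Then the result page shows a price of 20.00 USD
public class ProductSearchSteps {

    private String priceShown;

    @Given("a store with a product priced $price USD")
    public void givenAProduct(String price) {
        // Seed the product here (db insert, service call, ...); hypothetical.
    }

    @When("the customer searches for the product")
    public void whenCustomerSearches() {
        // Drive Selenium here and capture what the page displays;
        // hard-coded so this sketch stays self-contained.
        priceShown = "20.00";
    }

    @Then("the result page shows a price of $price USD")
    public void thenPriceIsShown(String price) {
        if (!price.equals(priceShown)) {
            throw new AssertionError("expected " + price + " but saw " + priceShown);
        }
    }
}
```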

Unit tests in a database driven CodeIgniter web-application

CodeIgniter comes with a Unit Testing class built in, and I would very much like to use it. However, almost all functions I would want to test interact with the database by adding records, deleting records, etc. How would I, for example, write tests for the 'create user' function without actually creating users every time I run the test?
Upon some further research, it seems I need to be using Mock objects for external services like the database, etc. I haven't been able to find much in the way of docs on how to do that besides this one forum thread:
http://codeigniter.com/forums/viewthread/106737
Is there any actual documentation?
If your database driver allows transactions, use them. Do whatever needs to be tested, then roll back (on success or failure).
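CodeIgniter is PHP, but the transaction trick is language-agnostic. A rough JDBC sketch of the pattern (connection details and table names are invented):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CreateUserRollbackTest {

    public void testCreateUser() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/app", "user", "password")) {
            conn.setAutoCommit(false); // open a transaction
            try {
                // Exercise the code under test (inlined here as plain SQL).
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO users (name) VALUES (?)")) {
                    ps.setString(1, "testuser");
                    ps.executeUpdate();
                }
                // Assert on the still-uncommitted state.
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT COUNT(*) FROM users WHERE name = ?")) {
                    ps.setString(1, "testuser");
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        if (rs.getInt(1) != 1) {
                            throw new AssertionError("user was not created");
                        }
                    }
                }
            } finally {
                conn.rollback(); // nothing persists, on success or failure
            }
        }
    }
}
```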
I've found that it's hard to run unit tests with controller actions. If you find a good way of doing that, let us know!

Database cleanup in acceptance tests with SpecFlow

I am a newbie to TDD. I have watched Brandon Satrom's videos, and I am trying to implement tests like theirs: an outer loop for acceptance tests and an inner loop for unit tests. I thought acceptance tests ran against the database too, so I expected to find examples of [BeginScenario/AfterScenario] events being used for database cleanup in SpecFlow. They are said to be for database cleanup, but none of the examples I saw actually do it.
Am I misunderstanding the acceptance test concept? Doesn't it cover the database too? Should we use mock objects there, like we do in unit tests?
I'm using a real MS SQL Server database in my integration unit tests (MSTest) and in acceptance testing with the BDD tool SpecFlow, in this way: I have a dump of my test database (MDF/LDF files) stored as a template. On test initialize I copy them to a temporary location and attach them to a dedicated SQL Server instance using the sp_attach_db stored procedure (an Express edition will do for this); then I run whatever test code I want, and on test cleanup I detach the test database and delete the MDF/LDF files. The whole copy/attach/detach/delete cycle is pretty fast (at least much faster than I expected).
If you're interested, I could put it into some more words on my blog.
At last I am convinced that I must use the real database in my acceptance tests. I had to see some examples and read about it in several resources before it settled in my mind.
Now I am using acceptance tests as intended: for testing the flow of my user interfaces and database.
I wrote a happy-path scenario for my registration page to design the page flow. Then I wrote some tests for logic kept in stored procedures in the database. The rest of the logic lives in controllers and model classes, so for that I used unit tests. It makes more sense to me now, until my next confusion about TDD :).
As for the cleanup process, I use [BeginScenario/AfterScenario] events. At BeginScenario I keep a DateTime.Now.Ticks value in a global variable and merge it into the beginning of the values I send to the db. At AfterScenario I find and delete the records that start with that Ticks value. This gives me unique values that don't interfere with other records. It has worked so far.
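The same tagging trick in Java terms, as a hedged, self-contained sketch (System.nanoTime() stands in for DateTime.Now.Ticks; the table and column names are invented):

```java
public class ScenarioDataTag {

    // Java analogue of DateTime.Now.Ticks: a value unique to this scenario run.
    private static String runPrefix;

    public static void beginScenario() {          // [BeginScenario] equivalent
        runPrefix = Long.toString(System.nanoTime());
    }

    // Merge the prefix into every value sent to the db during the scenario.
    public static String tag(String value) {
        return runPrefix + "_" + value;
    }

    public static String cleanupSql(String table, String column) {  // [AfterScenario] equivalent
        return "DELETE FROM " + table + " WHERE " + column + " LIKE '" + runPrefix + "%'";
    }

    public static void main(String[] args) {
        beginScenario();
        System.out.println(tag("summer_sale"));               // e.g. 123456789_summer_sale
        System.out.println(cleanupSql("categories", "name")); // delete scoped to this run only
    }
}
```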
Regarding this matter, this article is very helpful.
It describes the use of transactions with MSDTC, started at BeginScenario and rolled back at AfterScenario.
(SpecFlow is not used in the article, but it's the same concept.)
We are currently using this technique with success in a mid scale development project.

How to Test Web Code?

Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state?
Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time).
Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set?
I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience.
Are there good articles out there that discuss this issue of web-based development in general?
I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data representing your test data, so that each test no longer depends on whatever state the database happens to be in. This way, each test is self-contained and will not break with further database usage.
Update: A quick google search showed a DB unit extension for PHPUnit.
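For the Java original, the core pattern looks roughly like this (a sketch; the connection details and the flat-XML dataset path are made up):

```java
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class KnownStateSetup {

    // Loads a flat-XML dataset and resets the tables it mentions to exactly
    // that content, so every test starts from the same known state.
    public static void resetDatabase() throws Exception {
        Connection jdbc = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "user", "password");
        IDatabaseConnection conn = new DatabaseConnection(jdbc);
        try (FileInputStream in = new FileInputStream("src/test/resources/dataset.xml")) {
            IDataSet dataSet = new FlatXmlDataSetBuilder().build(in);
            // CLEAN_INSERT = delete the tables' current rows, then insert the dataset.
            DatabaseOperation.CLEAN_INSERT.execute(conn, dataSet);
        } finally {
            jdbc.close();
        }
    }
}
```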
If you're mostly concerned with data layer testing, you might want to check out the book xUnit Test Patterns: Refactoring Test Code. I was always unsure about this area myself, but the book does a great job of enumerating the concerns, like performance, reproducibility, etc.
I guess it depends on what database you're using, but Red Gate (www.red-gate.com) make a tool called SQL Data Generator. It can be configured to fill your database with sensible-looking test data. You can also tell it to always use the same seed for its random number generator, so your 'random' data is the same every time.
You can then write your unit tests to make use of this reliable, repeatable data.
As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!
We use an in-memory database (HSQLDB: http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing db, with the added bonus that they run lightning fast.
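Even without Hibernate, standing up an in-memory HSQLDB for a test is just a JDBC URL away; a minimal sketch (the table and data are invented):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDbTest {

    public static void main(String[] args) throws Exception {
        // "mem:" keeps the whole database in RAM; it vanishes when the JVM
        // exits, so there is nothing to clean up between test runs.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            stmt.execute("INSERT INTO users VALUES (1, 'alice')");
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM users WHERE id = 1")) {
                rs.next();
                System.out.println(rs.getString("name")); // alice
            }
        }
    }
}
```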
I have the exact same problem in my work, and I find that the best idea is to have one PHP script that re-creates the database and then a separate script that throws crazy data at it to see if anything breaks.
I have never used unit testing or the like, so I can't say whether that works, sorry.
If you can setup the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with.
Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI flavours) and detect the presence of certain records pulled back from the database, as in the sketch below.
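A hedged Selenium WebDriver example of that presence check (the URL, table id, and the seeded 'Books' record are all invented):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RecordPresenceCheck {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/categories"); // hypothetical page
            // The known setup inserted a 'Books' category; assert the UI shows it.
            boolean present = !driver.findElements(
                    By.xpath("//table[@id='categories']//td[text()='Books']")).isEmpty();
            if (!present) {
                throw new AssertionError("Expected seeded record 'Books' not shown");
            }
        } finally {
            driver.quit();
        }
    }
}
```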
It's definitely worth setting up either a test version of the database, or making your test scripts populate the database with known data as part of the tests.
You could try http://selenium.openqa.org/. It is more of a GUI testing tool than a data-layer testing application, but it does record your actions, which can then be played back to automate tests across different platforms.
Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):
I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
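A skeleton of that strategy in JUnit 4 terms (a sketch; the connection details and users table are invented, and the helper methods inlined here would live in something like the DBTestCommon class described above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class UserDaoTest {

    private static Connection conn;
    // Ids of rows added by individual tests; removed after each test.
    private static final List<Integer> trackedIds = new ArrayList<>();

    @BeforeClass
    public static void putDbInKnownState() throws Exception {
        conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/dev", "user", "password");
        insertUser(1000, "baseline-user"); // known state the whole class relies on
    }

    @After
    public void removePerTestData() throws Exception {
        for (Integer id : trackedIds) {
            deleteUser(id);
        }
        trackedIds.clear();
    }

    @AfterClass
    public static void restoreOriginalState() throws Exception {
        deleteUser(1000); // leave the dev database as we found it
        conn.close();
    }

    @Test
    public void daoFindsUserItJustCreated() throws Exception {
        insertUser(2000, "extra-user");
        trackedIds.add(2000); // register for per-test cleanup
        // exercise the DAO and assert here
    }

    private static void insertUser(int id, String name) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO users (id, name) VALUES (?, ?)")) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }

    private static void deleteUser(int id) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM users WHERE id = ?")) {
            ps.setInt(1, id);
            ps.executeUpdate();
        }
    }
}
```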
I would propose to use three databases. One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed).
A way to test database code is:
1. Insert a few rows (using SQL) to initialize state.
2. Run the function that you want to test.
3. Compare expected with actual results. Here you can use your normal unit testing framework.
4. Clean up the rows that were changed (so the next run won't see the previous run).
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table. This cycle is sketched below.
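The four steps as a compact JDBC sketch (connection details, the orders table, and the aggregate query standing in for the function under test are all invented):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OrderTotalDbTest {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testing", "user", "password");
             Statement stmt = conn.createStatement()) {

            // 1. Insert a few rows to initialize state.
            stmt.execute("INSERT INTO orders (id, amount) VALUES (1, 10), (2, 15)");

            // 2. Run the function under test (here a plain aggregate query).
            try (ResultSet rs = stmt.executeQuery("SELECT SUM(amount) FROM orders")) {
                rs.next();

                // 3. Compare expected with actual results.
                if (rs.getInt(1) != 25) {
                    throw new AssertionError("expected 25, got " + rs.getInt(1));
                }
            }

            // 4. Clean up so the next run starts fresh (testing database only!).
            stmt.execute("DELETE FROM orders");
        }
    }
}
```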
In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use the same CRUD API that is used in the product, so the created data is as similar to production data as possible...
