Using SpecFlow for WPF in MVVM

I am trying SpecFlow for the first time for a WPF project with the MVVM pattern. So, I have a repository, controller, view and view model. The repository makes calls to a web service which hits the database and gives back the data. The view model has methods to validate the input from the user and make calls to the repository methods.
In my SpecFlow tests, should I be making a complete call, including the repository methods, or should I mock those methods using Moq? Does that make sense?

The short answer:
You could make your calls hit the database, but I would recommend mocking them out, unless you have logic in your DB.
The long answer:
There are two competing ways of looking at this question: firstly, what kind of testing are you expecting to use SpecFlow for, and secondly, how easy is it going to be to access your database.
If you are using SpecFlow to test your low level technical requirements, then you really are doing unit testing in a Specification by example style. So this really ought to dictate that you use mocking to isolate your units. (Personally I'd stick to using NUnit for these tests.)
If however you are using SpecFlow to test your business scenarios (i.e. acceptance testing), then your scenarios rely on more than one unit to provide the functionality. This is more like integration or system testing, so many components will need to be involved, and in theory, since we are testing the whole system, we should include a DB in that, particularly if you have even one stored procedure or view that you might want to regression test later.
However, the second way of looking at this is how much pain is required to have a database available for your testing.
If there is more than one person in your development team, what strategies will you need so that your tests don't fail when you both run them at the same time?
For example, you could generate a new unique customer in each run and only run updates and queries against that customer; then you won't end up with the wrong number of orders.
How will you reset your database after your tests run?
E.g. if you use the right kind of database, you can check an empty .mdb into your source control and just roll back/revert to that.
I've personally found that for a simple DB without stored procs and views, mocking is better, but as soon as you've added business logic into the DB, well, you need to test that logic somehow.
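If you do go the mocking route, here is a minimal sketch of what a Moq-isolated SpecFlow step class can look like. All of the names here (IOrderRepository, Order, OrderViewModel, the ten-order rule) are hypothetical stand-ins for your own types, not anything SpecFlow or Moq prescribes:

using Moq;
using NUnit.Framework;
using TechTalk.SpecFlow;

// Hypothetical application types, included only to keep the sketch self-contained.
public class Order { public string Customer; }

public interface IOrderRepository
{
    int GetOrderCount(string customer);
    void SaveOrder(Order order);
}

public class OrderViewModel
{
    private readonly IOrderRepository _repository;
    public OrderViewModel(IOrderRepository repository) { _repository = repository; }

    public bool SubmitOrder(string customer)
    {
        // Invented validation rule: reject a customer's eleventh order.
        if (_repository.GetOrderCount(customer) >= 10) return false;
        _repository.SaveOrder(new Order { Customer = customer });
        return true;
    }
}

[Binding]
public class OrderValidationSteps
{
    private readonly Mock<IOrderRepository> _repository = new Mock<IOrderRepository>();
    private bool _accepted;

    [Given(@"the customer ""(.*)"" already has (\d+) orders")]
    public void GivenExistingOrders(string customer, int count)
    {
        // Stub the repository so the scenario never touches the web service or DB.
        _repository.Setup(r => r.GetOrderCount(customer)).Returns(count);
    }

    [When(@"I submit an order for ""(.*)""")]
    public void WhenISubmitAnOrder(string customer)
    {
        _accepted = new OrderViewModel(_repository.Object).SubmitOrder(customer);
    }

    [Then(@"the order is saved through the repository")]
    public void ThenTheOrderIsSaved()
    {
        Assert.IsTrue(_accepted);
        _repository.Verify(r => r.SaveOrder(It.IsAny<Order>()), Times.Once());
    }
}

Because the repository is stubbed, the scenario exercises only the view model's validation logic and runs without a web service or a database.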

Related

SpecFlow Integration Testing with Database Patterns

I'm attempting to set up SpecFlow for integration/acceptance testing. Our product has a backing database (not a huge one though) in SQLite.
This is actually proving to be a slightly sticky point though; how do I model the database for the tests?
I would like to know what patterns others out there use for doing integration/acceptance testing with backing databases.
I can think of the following approaches:
Compile a database into the assembly with the tests, then shadow-copy it for each test. Seems slow though.
I could create the database in memory and populate it with pre-determined data.
I could create the database in memory and somehow have Givens populate the database. This seems like it would bloat the tests horribly, but might give them more control and make the tests less fragile.
I could abstract every database interaction and use mocks. Not in love with this idea since I'd like to use this to test the database interactions as well.
Compile the database into the tests and rely on clean-up code to return it to the base state (this one seems dodgy to me). Don't want to do it with transactions since there will be multiple interactions with some tests (so write an item then attempt to read it back with different privileges).
Before considering the How to test, I think you might find it valuable to look at What you want to test.
Starting with what data, I find that it really helps to take a single element, or a small number, and imagine a set of events around them in order to give you the right test data to run your tests with. For example:
If you were working on a healthcare system, you might define a person "Bob" and then produce his life events. Bob was born 37 years ago today, fell off his bike as a child and broke his arm, got married, and has two children.
If you are working on a financial trading system, you might look at a day between opening and closing for a couple of stocks, e.g. "MSFT" and "AAPL". On this day you might see one starting low and climbing, the other starting high and falling. A piece of news comes out that reverses their fortunes.
Now that you have the what, you can evaluate which of your scenarios actually work for your data. E.g. "MSFT" and "AAPL" could have thousands of price changes throughout the day, so generating the Givens and Mocks would be very time-consuming; that data lends itself to being pre-captured. On the other hand, the "Bob" data works particularly well when using generated data, because the data can always change so that it is his birthday today.
One thing your question doesn't seem to consider is updating your data. For example, you might want to have a set of tests that work at various stages of your entities' life cycle, e.g. some tests deal with "Baby Bob", others with "10yr old Bob" or "Married Bob", etc. If your DB is read-only then this isn't a problem, as long as you can write your tests so that they just don't see the other data, but sometimes you want to build a story through your tests. If your tests do change the data, then you will have problems ensuring either that your tests run in order (see MSTest OrderedTest or mbUnit DependsOn), or that you can separate your tests so they each deal with an isolated data entity (this is fine if your entity can be described in a single row, but gets harder when you have to read many tables to get it).
You also might want to consider what code you are testing; you can vary the approach inside your different test sets. I currently work on a multi-tier application that has UI views, view models, client models, multiple communication systems, and server models, and I have different sets of tests for these. I have some tests that work in a single tier, mocking out the other tiers to keep my tests small. Other tests fire up a local server and a local client and wire the two up directly. Finally, I have some tests that launch a full server process, communicate via EMS and run some simple client-side operations using everything but the UI views.
So now, to actually answer your question:
Shadow copy your database - Yes, I've done this once with SQL Server Developer Edition and had an xxx.mdf that got copied in before running the tests. However, some modern testing frameworks (e.g. NCrunch) will run tests in parallel, so this just breaks.
Create the database and pre-populate - Not done this one, but my concern would be what happens when a test changes the database to an unexpected state: other tests will then fail even though they have done nothing wrong.
Create the database and use Givens - I've done this with NUnit via [SetUpFixture] on top of a LINQ to SQL DB. (A sketch of this kind of setup follows this list.) You still have concerns about parallel test runs, you have to balance the granularity of your Givens (see the StackOverflow question "When do BDD scenarios become too specific"), and you have the data update ordering/data isolation problem, but this can work really well to allow you to work through your data stories and grow the data throughout your tests. On the other hand, should one test fail and leave the data in a bad state, you can end up with lots of failures, but at least you simply need to look at the one that fails first. This kind of testing will also not play very nicely for developers on their workstations, as they can't just run a single test, particularly with tools such as NCrunch, which can run just the tests whose code has changed.
Mock the database - This is how I choose to do things now. The trick is that if you are personally following a reasonably strict TDD process where you only test the method you are working on, then you actually end up with a few tests that exercise the database interaction, e.g. [Test] DALLayerTests.ShouldReadARowAndCreatePOCO(), but most others that use mocked data to test what actually happens, e.g. [Test] BusinessObjectPersonTests.ShouldGetBirthdayCongratulations().
Use clean up code - Never tried it, it sounds dodgy :-)
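To illustrate the "create the database and use Givens" option above, here is a minimal sketch of an NUnit [SetUpFixture] seeding a pre-existing LINQ to SQL test database with the "Bob" story data. The Patient and HealthcareDataContext types and the connection string are invented for the sketch; also note that NUnit 2.x uses [SetUp] inside a [SetUpFixture], which NUnit 3 renamed to [OneTimeSetUp]:

using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using NUnit.Framework;

// Hypothetical mapped entity, just enough for LINQ to SQL to work with.
[Table(Name = "Patients")]
public class Patient
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)] public int Id;
    [Column] public string Name;
    [Column] public DateTime DateOfBirth;
}

public class HealthcareDataContext : DataContext
{
    public HealthcareDataContext(string connectionString) : base(connectionString) { }
    public Table<Patient> Patients { get { return GetTable<Patient>(); } }
}

[SetUpFixture]
public class StoryDataSetup
{
    private const string ConnectionString =
        @"Server=(local);Database=AcceptanceTests;Integrated Security=true";

    [SetUp] // runs once before any test in the assembly (NUnit 2.x)
    public void SeedBob()
    {
        using (var db = new HealthcareDataContext(ConnectionString))
        {
            // Start from a known state, then insert the story data.
            db.ExecuteCommand("DELETE FROM Patients");

            // Bob was born 37 years ago today, so birthday tests always pass.
            db.Patients.InsertOnSubmit(new Patient
            {
                Name = "Bob",
                DateOfBirth = DateTime.Today.AddYears(-37)
            });
            db.SubmitChanges();
        }
    }
}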

Unit testing RESTful web services

I am wondering if anyone knows the proper way to unit test a RESTful web service. I have a set of web services built using Recess, and I would like to write test code for them. Unfortunately, since my web services are tied to a database, my tests end up populating the database, which seems like a problem.
I am mostly asking about the proper approach to dealing with this from a unit test standpoint. Do I clear the database of the values I have inserted after testing? Do I have a special test database with a whole set of special test routes? I am at a bit of a loss for the best way to approach this.
Obviously, in other cases of similar database wrapper classes you would just pass in a dummy database that you set up at the beginning of the tests. This seems like it would be much more challenging, though, when it comes to working with a RESTful framework like Recess.
I'd appreciate any thought you all might have on the right way to deal with tests saving information to the database.
Thanks in advance.
Generally when testing a web service you are testing the full stack, from the outside in. This means you request a resource and check if the results conform to your expectations.
In nearly all cases populating the database right before every request is a good approach. It might seem like overkill, but in reality with a web service you can't guarantee proper test coverage by mocking/stubbing various elements.
Coming from the Ruby world, Cucumber is the ideal approach, as it lets you test from a high level. When you combine this with RSpec to do actual unit testing (lower-level tests that query your objects directly) you get the best of both worlds. These libraries pair with a gem called database_cleaner, which will manage populating and depopulating the database for you.
You might find the following blog post by RSpec's author very helpful, as it explains brilliantly why you should avoid too much mocking and stubbing: http://blog.davidchelimsky.net/2011/09/22/avoid-stubbing-methods-invoked-by-a-framework/
Generally speaking you have two options:
1) Use a dedicated test database with known data on which you can set your expectations - replace the DB with a "pristine" copy before starting testing. This would be considered integration testing, since you are in fact dependent on the database.
2) Make your code independent of the actual data store and pass the persistence layer in as a dependency. For unit testing you can write (or mock out) a custom persistence layer/object that allows you to observe the state changes that you are unit testing.
A healthy mix of both depending on the scenario usually provides good coverage.
Also, instead of testing your RESTful web service end-to-end, consider having each service endpoint just delegate to a POCO, and then test those POCOs directly - they are much more easily testable, and all you have left to do is verify the mapping between service endpoint and POCO.
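As a sketch of that delegation idea (every type name here is invented; the endpoint class would be whatever your REST framework gives you):

// Plain objects carrying the request/response, independent of any web framework.
public class OrderRequest { public string CustomerId; public decimal Amount; }

public class OrderResult
{
    public bool Success;
    public string Error;
    public static OrderResult Accepted() { return new OrderResult { Success = true }; }
    public static OrderResult Rejected(string error)
    {
        return new OrderResult { Success = false, Error = error };
    }
}

// The POCO holds all the logic and is trivially unit-testable without HTTP.
public class OrderProcessor
{
    public OrderResult Place(OrderRequest request)
    {
        if (string.IsNullOrEmpty(request.CustomerId))
            return OrderResult.Rejected("Customer is required");
        // ...the real business logic lives here...
        return OrderResult.Accepted();
    }
}

// The endpoint is a thin shell; testing it only needs to verify the mapping.
public class OrdersEndpoint
{
    private readonly OrderProcessor _processor = new OrderProcessor();
    public OrderResult Post(OrderRequest request) { return _processor.Place(request); }
}

A unit test can then drive OrderProcessor.Place directly, with no HTTP stack or database involved.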
My understanding is that if you run your tests in this order, you can exercise all the verbs while leaving no additional data in the DB at the end (see the sketch below the list):
POST (add a new record)
GET (fetch the newly added record)
PUT/PATCH (modify the newly added record)
DELETE (delete the newly added record)
Of course somebody else using the database at the same time might see transient values during the duration of the test.
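A minimal NUnit sketch of that verb cycle, assuming a hypothetical service at http://localhost:8080/api/ that returns a Location header on create (adjust the address, payloads and route to your own service):

using System;
using System.Net.Http;
using System.Text;
using NUnit.Framework;

[TestFixture]
public class WidgetApiCrudTests
{
    // Hypothetical service address; point this at your own API.
    private static readonly HttpClient Client =
        new HttpClient { BaseAddress = new Uri("http://localhost:8080/api/") };

    private static StringContent Json(string body)
    {
        return new StringContent(body, Encoding.UTF8, "application/json");
    }

    [Test]
    public void FullVerbCycleLeavesNoDataBehind()
    {
        // POST: add a new record; assume the service returns its Location.
        var created = Client.PostAsync("widgets", Json("{\"name\":\"test\"}")).Result;
        Assert.IsTrue(created.IsSuccessStatusCode);
        var location = created.Headers.Location;

        // GET: fetch the newly added record.
        Assert.IsTrue(Client.GetAsync(location).Result.IsSuccessStatusCode);

        // PUT: modify the newly added record.
        var updated = Client.PutAsync(location, Json("{\"name\":\"renamed\"}")).Result;
        Assert.IsTrue(updated.IsSuccessStatusCode);

        // DELETE: remove it, so the database ends where it started.
        Assert.IsTrue(Client.DeleteAsync(location).Result.IsSuccessStatusCode);
    }
}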

Strategies for testing against a live system

Let me describe the scenario:
Our VB.NET web application talks to a third-party data provider via web services. It also saves data in a huge SQL Server database which has extensive business logic implemented in stored procs and triggers. The web service provider also employs convoluted business logic which is quite dynamic.
The dilemma is that both the web service provider and the SQL Server are essentially live systems, and our company has no access to them outside of normal operations (web and SQL calls). Neither offers helpful support on their end.
Additionally, the web service has no 'test mode' and all calls are treated as live transactions. It is not possible to mimic the logic in either of these systems, nor is it possible to get a copy of the SQL Server DB.
So, my question is:
How do you do any kind of testing (manual or automated) of our VB.NET application against these live systems?
Your suggestions would be appreciated.
Let's start by challenging your assumption "It is not possible to mimic the logic in either of these systems".
Of course you can mimic the behavior, if you use the right tools. Your system interacts with the database by way of a database object; your system should interact with the web service through a web service object. Either or both of these objects can be mocked through the use of an appropriate mocking framework. For any unit test call to the database object, you create a mock for that object, set its expectations and result for the call, and then hand the mock to your Code Under Test (CUT) instead of the 'real' database object. Your code calls the mock, which then compares its arguments against the pre-set expectations, and hands back the expected result (instead of actually communicating with the database). Your code then operates on the result. If the method arguments don't match the expectations, the mock object will throw an exception.
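The linked articles below demonstrate this with NMock and Typemock; the same expect-and-verify idea looks like this with Moq (IQuoteProvider and QuoteService are invented names standing in for your web service wrapper and your code under test):

using Moq;
using NUnit.Framework;

// Hypothetical wrapper interface around the third-party web service.
public interface IQuoteProvider
{
    decimal GetQuote(string symbol);
}

// Hypothetical Code Under Test that depends on the wrapper, not the live service.
public class QuoteService
{
    private readonly IQuoteProvider _provider;
    public QuoteService(IQuoteProvider provider) { _provider = provider; }
    public decimal GetLatestQuote(string symbol) { return _provider.GetQuote(symbol); }
}

[TestFixture]
public class QuoteServiceTests
{
    [Test]
    public void PassesSymbolThroughAndReturnsProviderResult()
    {
        var provider = new Mock<IQuoteProvider>();
        provider.Setup(p => p.GetQuote("MSFT")).Returns(27.42m);

        var cut = new QuoteService(provider.Object);
        Assert.AreEqual(27.42m, cut.GetLatestQuote("MSFT"));

        // Fails if the CUT called the provider with unexpected arguments.
        provider.Verify(p => p.GetQuote("MSFT"), Times.Once());
    }
}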
You can read articles about mock objects and unit testing for .Net here:
Mock Objects to the Rescue! Test Your .NET Code with NMock
How Typemock Isolator Simplifies Unit Testing
Mind you, tools like NMock and Typemock make the job easier, but it's still hard -- you need to design your code to be tested, not just write the code first and pray that you can test it later.
You might want to talk to your webservice provider -- every third-party webservice that I've ever interacted with beyond simple queries has had a test mode (you use test credentials and a test server, instead of the live server). Any transactions to the test server get cleaned up at the end of the day. If they don't offer a test service AND their service involves more than simple queries, then I'd strongly recommend finding another service provider.
There's one other strategy that you can take for working with a database, under certain circumstances: use transactions. When you open your database connection during a unit test, open a transaction. At the end of each unit test, rollback the transaction. It's a simple idea, but the devil is in the details, and there will be chaos if you screw up and accidentally commit the transaction. I don't recommend it, but I worked like this for 2 years on one project.
It sounds like you're building a line-of-business application: software that will be used in one place, by one company. In that scenario testing is not really that important. The best testers find around 20% of the issues in your code, so whether you test or not, you still have to be prepared to deal with issues in production.
So, I'd skip the testing, and instead focus on powerful scaffolding that allows you to develop the application against the live servers in a controlled way:
Create really good logging. Up to half the code can deal with exception handling and logging. If you can, log to a database.
Create lots and lots of consistency checks. The more assumptions you check, the better. When you do a write operation (like placing a web service order), double the number of checks. If a check fails, stop the application entirely; don't limp on - stop and analyze what went wrong. (A sketch of such a check follows this list.)
Develop the application one part at a time. Test each part in close cooperation with the end users. First, test a few items manually; then 10; then 100; and if you and the end users are happy, turn it on.
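A consistency check can be as simple as a guard helper that logs and refuses to continue. A minimal sketch, assuming Trace output is wired into your logging:

using System;
using System.Diagnostics;

public static class Consistency
{
    // Verifies an assumption about the live systems; logs and halts if it fails.
    public static void Ensure(bool assumption, string description)
    {
        if (assumption) return;
        Trace.TraceError("Consistency check failed: " + description);
        // Don't limp on: let this exception take the application down.
        throw new InvalidOperationException("Consistency check failed: " + description);
    }
}

// Typical usage around a write operation (names invented):
//   Consistency.Ensure(order.Total >= 0, "order total must not be negative");
//   Consistency.Ensure(response.OrderId != 0, "provider returned an order id");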
I know from experience this can work very well. The close cooperation with the end users allows you to understand their needs better, and generally add features that they really appreciate.
A word of warning: although users will think of you as someone who gets things done, Enterprise Architects will think of you as a hacker. Depending on the organization you're in, this can be either very healthy or very unhealthy :)

How should methods updating database tables be unit tested?

I have an application that is database intensive. Most of the application's methods update data in a database. Some calls are wrappers to stored procedures, while others perform database updates in code using 3rd-party APIs.
What should I be testing in my unit tests? Should I...
Test that each method completes without throwing an exception -or-
Validate the data in the database after each test to make sure the state of data is as expected
My initial thought is #2 but my concern is that I would be writing a bunch of framework code to go along with my unit tests. I read that you shouldn't write a bunch of framework code for unit testing.
Thoughts?
EDIT: What I mean by framework is writing a ton of other code that serves as a library to the unit testing code...not a third party framework.
I do number 2, i.e., test the update by updating a record, and then reading it back out and verifying that the values are the same as the ones you put in. Do both the update and the read in a transaction, and then roll it back, to avoid permanent effect on the database. I don't think of this as testing Framework code, any more than I think of it as testing OS code or networking code... The framework (if you mean a non-application specific Database access layer component) should be tested and validated independently.
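A bare-bones sketch of that update/read-back/rollback pattern with ADO.NET and TransactionScope (the table, column and connection string are invented):

using System.Data.SqlClient;
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class CustomerUpdateTests
{
    private const string Cs = @"Server=(local);Database=AppDb;Integrated Security=true";

    [Test]
    public void UpdatedEmailIsReadBackWithTheSameValue()
    {
        // Dispose without Complete() rolls back: no permanent effect on the DB.
        using (new TransactionScope())
        using (var conn = new SqlConnection(Cs))
        {
            conn.Open(); // enlists in the ambient transaction

            new SqlCommand(
                "UPDATE Customers SET Email = 'new@example.com' WHERE Id = 42", conn)
                .ExecuteNonQuery();

            var email = (string)new SqlCommand(
                "SELECT Email FROM Customers WHERE Id = 42", conn).ExecuteScalar();

            Assert.AreEqual("new@example.com", email);
        }
    }
}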
There's a third option, which is to use a mock database-access object that knows how to respond to an update as if it had been connected to a live database, but it doesn't really execute the query against a database.
This technique can be used to supplement testing against a live database. This is not the same as testing against a live database, and shouldn't substitute for that kind of testing. But it can be used at least to test that the invocation of the database update by your class was done with proper inputs. It also typically runs a lot faster than running tests against a real database.
You must test the actual effect of the code on the data, and its compliance with the validation rules etc., not just that no exceptions are raised - that would be a bit like just checking a procedure compiles!
It is difficult testing database code that performs inserts, updates or deletes (DML), because the test changes the environment it runs in, i.e. the database. Running the same procedure several times in a row could (and probably should) have different results each time. This is very different to unit testing "pure code", which you can run thousands of times and always get the same result - i.e. "pure code" is deterministic, database code that performs DML is not.
For this reason, you do often need to build a "framework" to support database unit tests - i.e. scripts to set up some test data in the right state, and to clean up after the test has been run.
If you are not writing to the database manually but using a framework instead (the JVM, the .NET Framework, ...), you can safely assume that the framework writes to the database correctly. What you must test is whether you are using the framework correctly.
Just mock the database methods and objects. Test whether you are calling them and retrieving data back correctly. Doing this will give you the opportunity to write your tests more easily, run them much faster, and run them in parallel with no problems at all.
They shouldn't be unit tested at all! The whole point of those methods is to integrate with the outside world (i.e. the database). So, make sure your integration tests beat the you-know-what out of those methods and just forget about the unit tests.
They should be so simple that they are "obviously bug-free" anyway; and if they aren't, you should break them up into one part which has the logic and a dumb part which just takes a value and sticks it in the database.
Remember: the goal is 100% test coverage, not 100% unit test coverage; that includes all of your tests: unit, integration, functional, system, acceptance and manual.
If the update logic is complex then you should do #2.
In practice, the only way to really unit test a complex calculation and update - like, say, calculating the banking charges on a set of customer transactions - is to initialise a set of tables to known values at the start of your unit test and test for the expected values at the end.
I use DBUnit to load the database with data, execute the update-logic and finally read the updated data from the database and verify it. Basically #2.

Unit-Testing Databases

This past summer I was developing a basic ASP.NET/SQL Server CRUD app, and unit testing was one of the requirements. I ran into some trouble when I tried to test against the database. To my understanding, unit tests should be:
stateless
independent from each other
repeatable with the same results i.e. no persisting changes
These requirements seem to be at odds with each other when developing for a database. For example, I can't test Insert() without making sure the rows to be inserted aren't there yet, thus I need to call the Delete() first. But, what if they aren't already there? Then I would need to call the Exists() function first.
My eventual solution involved very large setup functions (yuck!) and an empty test case which would run first and indicate that the setup ran without problems. This is sacrificing on the independence of the tests while maintaining their statelessness.
Another solution I found is to wrap the function calls in a transaction which can be easily rolled back, like Roy Osherove's XtUnit. This works, but it involves another library, another dependency, and it seems a little too heavy a solution for the problem at hand.
So, what has the SO community done when confronted with this situation?
tgmdbm said:
You typically use your favourite automated unit testing framework to perform integration tests, which is why some people get confused, but they don't follow the same rules. You are allowed to involve the concrete implementation of many of your classes (because they've been unit tested). You are testing how your concrete classes interact with each other and with the database.
So if I read this correctly, there is really no way to effectively unit-test a Data Access Layer. Or, would a "unit test" of a Data Access Layer involve testing, say, the SQL/commands generated by the classes, independent of actual interaction with the database?
There's no real way to unit test a database other than asserting that the tables exist, contain the expected columns, and have the appropriate constraints. But that's usually not really worth doing.
You don't typically unit test the database. You usually involve the database in integration tests.
You typically use your favourite automated unit testing framework to perform integration tests, which is why some people get confused, but they don't follow the same rules. You are allowed to involve the concrete implementation of many of your classes (because they've been unit tested). You are testing how your concrete classes interact with each other and with the database.
DBUnit
You can use this tool to export the state of a database at a given time, and then when you're unit testing, it can be rolled back to its previous state automatically at the beginning of the tests. We use it quite often where I work.
The usual solution to external dependencies in unit tests is to use mock objects - which is to say, libraries that mimic the behavior of the real ones against which you are testing. This is not always straightforward, and sometimes requires some ingenuity, but there are several good (freeware) mock libraries out there for .NET if you don't want to "roll your own". Two come to mind immediately:
Rhino Mocks is one that has a pretty good reputation.
NMock is another.
There are plenty of commercial mock libraries available, too. Part of writing good unit tests is actually designing your code for them - for example, by using interfaces where it makes sense, so that you can "mock" a dependent object by implementing a "fake" version of its interface that nonetheless behaves in a predictable way, for testing purposes.
For database mocks, this means "mocking" your own DB access layer with objects that return made-up table, row, or dataset objects for your unit tests to deal with.
Where I work, we typically make our own mock libs from scratch, but that doesn't mean you have to.
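A hand-rolled fake in that style is just an alternative implementation of your own data access interface. Everything below is an invented example of the pattern, not a library API:

// Your own data access abstraction (hypothetical).
public class Customer
{
    public int Id;
    public string Name;
}

public interface ICustomerReader
{
    Customer GetById(int id);
}

// The fake returns made-up rows instead of touching a database,
// behaving predictably so tests can assert against known values.
public class FakeCustomerReader : ICustomerReader
{
    public Customer GetById(int id)
    {
        return new Customer { Id = id, Name = "Fake Customer " + id };
    }
}

// In a test, hand the fake to the code under test in place of the real DAL:
//   var report = new BirthdayReport(new FakeCustomerReader());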
Yeah, you should refactor your code to access Repositories and Services which access the database and you can then mock or stub those objects so that the object under test never touches the database. This is much faster than storing the state of the database and resetting it after every test!
I highly recommend Moq as your mocking framework. I've used Rhino Mocks and NMock. Moq was so simple and solved all the problems I had with the other frameworks.
I've had the same question and have come to the same basic conclusions as the other answerers here: Don't bother unit testing the actual db communication layer, but if you want to unit test your Model functions (to ensure they're pulling data properly, formatting it properly, etc.), use some kind of dummy data source and setup tests to verify the data being retrieved.
I too find the bare-bones definition of unit testing to be a poor fit for a lot of web development activities. But this page describes some more 'advanced' unit testing models and may help to inspire some ideas for applying unit testing in various situations:
Unit Test Patterns
I explained a technique that I have been using for this very situation here.
The basic idea is to exercise each method in your DAL, assert your results, and when each test is complete, roll back so your database is clean (no junk/test data).
The only issue that you might not find "great" is that I typically do an entire CRUD test (not pure from the unit testing perspective), but this integration test allows you to see your CRUD + mapping code in action. This way, if it breaks, you will know before you fire up the application (saves me a ton of work when I'm trying to go fast).
What you should do is run your tests from a blank copy of the database that you generate from a script. You can run your tests and then analyze the data to make sure it has exactly what it should after your tests run. Then you just delete the database, since it's a throwaway. This can all be automated, and can be considered an atomic action.
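One way that can look in practice, assuming a schema.sql generation script that is a single batch (plain ExecuteNonQuery cannot process sqlcmd-style GO separators); all names here are placeholders:

using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ThrowawayDatabaseTests
{
    private const string Master =
        @"Server=(local);Database=master;Integrated Security=true";
    private const string TestDb =
        @"Server=(local);Database=ThrowawayTests;Integrated Security=true";

    [SetUp]
    public void CreateBlankDatabaseFromScript()
    {
        Execute(Master, "CREATE DATABASE ThrowawayTests");
        // schema.sql is your generation script, run against the new database.
        Execute(TestDb, File.ReadAllText("schema.sql"));
    }

    [TearDown]
    public void DeleteTheThrowawayDatabase()
    {
        SqlConnection.ClearAllPools(); // release pooled connections to the test DB
        Execute(Master,
            "ALTER DATABASE ThrowawayTests SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " +
            "DROP DATABASE ThrowawayTests");
    }

    private static void Execute(string connectionString, string sql)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}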
Testing the data layer and the database together leaves few surprises for later in the project. But testing against the database has its problems, the main one being that you're testing against state shared by many tests. If you insert a line into the database in one test, the next test can see that line as well.
What you need is a way to roll back the changes you make to the database.
The TransactionScope class is smart enough to handle very complicated transactions, as well as nested transactions where your code under test calls commits on its own local transaction.
Here's a simple piece of code that shows how easy it is to add rollback ability to your tests:
using System;
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class TransactionScopeTests
{
    private TransactionScope trans = null;

    [SetUp]
    public void SetUp()
    {
        // Open an ambient transaction before each test.
        trans = new TransactionScope(TransactionScopeOption.Required);
    }

    [TearDown]
    public void TearDown()
    {
        // Disposing without calling Complete() rolls the transaction back,
        // so nothing the test wrote to the database survives.
        trans.Dispose();
    }

    [Test]
    public void TestServicedSameTransaction()
    {
        // MySimpleClass is the example data access class under test (defined elsewhere).
        MySimpleClass c = new MySimpleClass();
        long id = c.InsertCategoryStandard("whatever");
        long id2 = c.InsertCategoryStandard("whatever");
        Console.WriteLine("Got id of " + id);
        Console.WriteLine("Got id of " + id2);
        Assert.AreNotEqual(id, id2);
    }
}
If you're using LINQ to SQL as the ORM then you can generate the database on-the-fly (provided that you have enough access from the account used for the unit testing). See http://www.aaron-powell.com/blog.aspx?id=1125
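The linked post covers the details; the core of the approach, in sketch form, is that a LINQ to SQL DataContext can create and drop its own database from the O/R mapping. NorthwindDataContext and the connection string here are placeholders for your own generated context:

using System.Data.Linq;
using NUnit.Framework;

[TestFixture]
public class OnTheFlyDatabaseTests
{
    // Hypothetical generated LINQ to SQL context for your model.
    private NorthwindDataContext _db;

    [SetUp]
    public void CreateDatabaseFromMapping()
    {
        _db = new NorthwindDataContext(
            @"Server=(local);Database=UnitTestScratch;Integrated Security=true");
        if (_db.DatabaseExists())
            _db.DeleteDatabase();
        _db.CreateDatabase(); // builds the schema from the O/R mapping
    }

    [TearDown]
    public void DropDatabase()
    {
        _db.DeleteDatabase(); // the database is a throwaway
        _db.Dispose();
    }
}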
