I want each of my unit (integration) test methods to run against a clean and consistent database, with test data specific to each test.
Could you please provide some code samples/snippets and tell me what the best practices are for the following questions, for both the EF 5 database-first and model-first scenarios?
How to create the database for each test method?
How to set up the test data for each test method?
How to delete the database for each test method?
An SSDT project is used to handle the database schema; how can the current SSDT schema be used for each test run, so that the tests are always executed against the current development version of the database?
Please consider the following assumptions for the above questions:
The unit tests shall be executed locally on dev machine and on server CI builds.
Each test may have different test data.
Manually defined .mdf test files should be avoided because several developers are working on the product and there is a potential risk that one developer overwrites the changes to the .mdf file which another developer checked in previously -> the development process should be as simple as possible.
SSDT is used, so maybe this is an option to create the database (probably not a good one, because I want the database to be created for each test), and I have no deep knowledge yet about SSDT's possibilities.
Good test execution performance would be nice to have.
VS/TFS 2012 is used.
SQL Server 2012 is used.
Application is a C# desktop application.
Mocking EF context etc. is not an option.
I hope you can guide me in the right direction on how to solve the four questions above. I don't know whether EF provides some functionality for these challenges (I think only for code first) or whether this must all be solved by executing SQL scripts or something like that.
Thanks!
Related
I have a new project that needs SQL Server unit tests and CI/CD with VSTS.
Below are the features that are required:
SQL Server unit tests against stored procedures, with initial target table setup and clean-up for each test
Unit tests in SQL
CI/CD with VSTS and Git
Easy setup and easy to use
I looked into SSDT 2017, which seems good. But it seems to lack a feature whereby a common setup script can be shared easily between tests in the Pre-Test step. It might lack other features that would be needed for daily usage, but I might be wrong.
Which tool fits better for general sql server unit testing in 2017?
SQL Server Data Tools for Visual Studio
tSQLt
One of the reasons why there aren't more unit testing solutions out there for SQL development is because proper unit testing is inherently harder with databases so people don't do it. This is because databases maintain state and also referential integrity. Imagine writing a unit test for a stored procedure (order_detail_update_status) that updates a status flag on an order_detail table. The order_detail table has a dependency on the order_header and product tables, order_header in turn has foreign keys to customer and employee whilst the product table may depend on product_category, product_type and supplier. That is at least seven tables (probably more) that need to be populated with valid data just to write one test and all but one of those tables have nothing to do with the code under test.
So what you should be looking for in a unit testing solution is exactly that - the ability to test discrete units of code - with the minimum of set-up. So ideally, you would be able to just set up the required test data in order_detail and ignore the rest of the tables - I am aware of only one testing framework that allows you to do that.
Additionally, unit tests should have minimal reasons to fail. In the above example, order_detail_update_status just updates a single row on the order_detail table. If a new NOT NULL column is added to the customer table which is not handled by the test set-up, then you have a scenario where your test could fail for a totally unrelated reason. This makes for very brittle tests and, under the pressure of tight delivery deadlines, developers will quickly give up writing and maintaining them.
A suite of unit tests should be runnable in any order, with no interdependencies and a good test framework should support this along with set-up, tear down and support for mocking objects (which may or may not be part of the same framework). In the above scenario, the ability to mock the order_detail table to test a module that only touches the order_detail table is one of the most important features if you don't want to spend huge amounts of time fixing tests that are failing for no "good" reason.
So in terms of your requirements, and the above points, there is only one framework that I am aware of that does all of this - tSQLt. This is based on real-world experience - we had over 6,000 tSQLt unit tests on my last project. It includes the following features:
Unit test stored procedures, functions and views
Mock tables, views and functions
Mock (or spy) stored procedures - either for isolation, replay or pre-defined outcomes
Suite set-up
Automatic tear-down (as every test runs in its own transaction)
Unit tests are completely isolated and can be executed in any order
It works very well with VSTS in a CI/CD pipeline and, as all the unit tests are written in T-SQL, it is very easy to use.
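To make this concrete, here is a minimal sketch of what a tSQLt test for the order_detail_update_status example above might look like (the procedure's parameters and the column names are assumed for illustration):

EXEC tSQLt.NewTestClass 'OrderDetailTests';
GO
CREATE PROCEDURE OrderDetailTests.[test order_detail_update_status marks the row as shipped]
AS
BEGIN
    -- FakeTable swaps order_detail for a constraint-free copy, so none of the
    -- seven-plus parent tables need to be populated
    EXEC tSQLt.FakeTable 'dbo.order_detail';
    INSERT INTO dbo.order_detail (order_detail_id, status)
    VALUES (1, 'pending');

    -- Act: run the code under test (parameter names are hypothetical)
    EXEC dbo.order_detail_update_status @order_detail_id = 1, @status = 'shipped';

    -- Assert: the status flag was updated
    DECLARE @actual VARCHAR(20) = (SELECT status FROM dbo.order_detail WHERE order_detail_id = 1);
    EXEC tSQLt.AssertEquals 'shipped', @actual;
END;
GO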
The best way to use tSQLt in Visual Studio is to make use of composite projects - where application database objects and modules are maintained in one project whilst the tSQLt framework and all unit tests are part of a second project. There is a good article to get you started on this here.
I wrote a more detailed article on the benefits of tSQLt for Simple-Talk a few years back which might also be helpful.
Note that Microsoft is promoting Slacker; see e.g. Channel 9: SQL Server Database Unit Testing in your DevOps pipeline. We found it to work reasonably well in an Azure SQL setup. For tSQLt on Azure SQL I remember some issues around enabling the CLR and TRUSTWORTHY options, but I have also seen that it should still work, e.g. here:
Nikolai Thomassen: Azure SQL unit testing with tSQLt using Azure DevOps
SQLShack: SQL unit testing with the tSQLt framework for beginners
You can re-use scripts; you can do a lot of things. The quick answer to your question is: just use tSQLt. There is no other unit testing framework for SQL Server that is as powerful, flexible and easy to use as tSQLt so far. Just start using it and that's it. It is quite easy and quick to set up in SSDT. #datacentricity wrote enough about the framework above; if you want to know more, read the article he provided.
I'll just add few things to make your life a bit easier if you'll go tSQLt direction:
Use synonyms for all cross database or linked objects;
Create all the tSQLt objects using the standard script in SSMS and then import the objects into SSDT
Create a separate project for the tSQLt objects and mark it as "the same database" as the database you want to test
Create a pre-deployment script in the tSQLt project and run the pre-deployment script of your original database project from there
Create a post-deployment script in the tSQLt project and run the post-deployment script of your original database project from there
In the tSQLt post-deployment script, write "EXEC tSQLt.RunAll" as the last statement (see the sketch below)
Create a publish profile in the tSQLt project and in its settings be sure that it will deploy "extended properties"
Make sure that all test classes (schemas) have extended properties statements
There might be some other nuances, but just start with something and I am pretty sure that you'll start loving tSQLt very soon.
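To illustrate the pre/post-script tips above, the post-deployment script of the tSQLt test project might look like this in SQLCMD syntax (the include path is hypothetical and depends on your solution layout):

-- Post-deployment script of the tSQLt test project
-- Re-run the post-deployment script of the original database project...
:r ..\MyDatabase\Scripts\Script.PostDeployment.sql
-- ...and execute the whole test suite as the last statement
EXEC tSQLt.RunAll;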
I have been searching for days now on several forums, blogs, MSDN etc., but I have not been able to find any guidance on this topic so far. I will try to explain this post in a more detailed manner because I think SSDT development is not well documented and there exists no best-practices document like the one for VS 2010 database projects (http://vsdatabaseguide.codeplex.com/).
I am a C# developer (not a DBA) and we are at the beginning of the development phase of a greenfield project (10-15 developers); we are currently defining our development process, including the handling of database development.
The technology and tool chain we want to use:
EF 5 (model first, maybe we change this to database first because issues like views, indexes etc. are much easier to handle)
SSDT (SQL Server Data Tools)
VS 2012 / TFS 2012
MS Test for automated unit / integration tests
The development process is based on test driven development and looks like this:
Each feature is developed by one developer on a separate feature branch
Design and implement unit tests (=feature implementation)
If a feature requires database access then the developer has to
a) create / update the EF model
b) create the LocalDB database via EF's "Generate database from model"
c) create / update the SSDT project via schema compare
d) create the unit tests with a test-initialize method that creates a fresh database and the corresponding test data for each test
Merge the feature branch back into integration branch
After checking in the merge the CI build executes the unit / integration tests
So there are some points I am not 100% sure how to solve (especially the database handling with unit tests), and I would appreciate it if you could point me in the right direction:
How to solve the database creation for the automated unit tests:
a) Execute the SQL database generation script (which can be created beforehand via the SSDT publish feature) for each executed test method (see the sketch after this list)? This is the option I would prefer, because each test gets a clean and consistent database state. Is there a performance problem with creating the LocalDB database for each test?
b) Or use the MSBuild task "SQLPublish" or "sqlPackage.exe"? I think this is not the option to go for, because it would be a one-time thing and I want to create a new test database for each unit test.
c) Or create the test database manually, save the *.mdf file to the root of the source control folder, and create a copy for each test? I would not prefer this, because developer A could overwrite the file containing changes that another developer B checked in before, and it would make the development process more complicated.
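For option a), a minimal sketch of a per-test set-up script, run in SQLCMD mode against the SQL Server 2012 LocalDB instance (localdb)\v11.0 (the database name and script path are hypothetical; the schema script is the one generated via the SSDT publish feature):

-- Drop any leftover test database so every test starts from a clean state
IF DB_ID('MyApp_Test') IS NOT NULL
BEGIN
    ALTER DATABASE MyApp_Test SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE MyApp_Test;
END
GO
CREATE DATABASE MyApp_Test;
GO
USE MyApp_Test;
GO
-- Apply the current development schema generated by SSDT publish
:r .\MyApp.Schema.sql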
How to solve test data creation for the automated unit tests:
a) Execute a test-specific SQL script that inserts the appropriate test data for each test (see the sketch after this list). I think this also means creating a new database as mentioned in point 1. Again, this is my preferred option.
b) Using EF to create test data does not seem to be a clean way, because it depends on the EF model implementation, which should itself be tested implicitly through the feature unit tests.
c) Or use manually created test database files. But this would make the development process more complicated for the developer, and the files could also be overwritten by other developers' check-ins.
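For option a), such a test-specific data script could be as simple as this (file, table and column names are hypothetical):

-- TestData\GetOpenOrders_ReturnsOnlyOpenOrders.sql
-- Seed exactly the rows this one test needs, nothing more
INSERT INTO dbo.Customer (CustomerId, Name) VALUES (1, 'Test Customer');
INSERT INTO dbo.[Order] (OrderId, CustomerId, Status) VALUES (10, 1, 'Open');
INSERT INTO dbo.[Order] (OrderId, CustomerId, Status) VALUES (11, 1, 'Closed');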
Maybe it's good to mention what we expect our unit tests to be. The goal of our unit tests is not to test the database schema (stored procedures and so on). We want to test parts of our application features using "code" unit tests that can also be seen as integration tests.
So does anyone of you have a similar development process and what are your experiences?
Any recommendations to improve our development process?
Are there any resources out there or best practices documents on SSDT development?
And the most important question for me, how did you solve the automated unit testing including proper database handling and integration tests?
When you need a database, it isn't a unit test. For unit testing in combination with Entity Framework, you should use a faked DbContext.
I'm trying to find out a proper database development process in my applications. I've tried Visual Studio Database projects with Post/Pre deployment scripts (very nice feature), Entity Framework Database First approach (with separate script for each database change placed under source control), and now I'm dealing with Entity Framework Code First approach. I have to say that I'm really impressed with the possibilities that it gives, but I'm trying to figure out how to manage the changes in the models during the development. Assuming that I have the following environments in my company:
LOCALHOST - for each single developer,
TEST - single machine with SQL Server database for testing purposes,
PRODUCTION - single machine with SQL Server database used by clients
Now, each time I'm working on the application and the code changes, it's OK for me to drop and recreate the database whenever I'm testing (so for the LOCALHOST and TEST environments). I've created proper database initializers that seed the database with test data and I'm pretty happy with them.
However, with each new build, when the model changes, I want to handle the PRODUCTION database changes in such a way that I won't lose the existing data. So, in Visual Studio 2012 there is the "SQL Schema Compare" tool, and I'm wondering whether that is not enough to manage all database changes for PRODUCTION. Can't I just compare my local database schema with the PRODUCTION schema and simply apply all changes?
Now, I want to ask what the point of Code First Migrations is here. Why should I manage all changes in the database through it? The only reason I can find is that it allows performing all sorts of INSERT and UPDATE commands. However, I think that if the database is correctly designed there shouldn't be a need to perform such commands. (That's a topic for another discussion, so I don't want to go into details.) Anyway, I want to ask: what are the real advantages of Code First Migrations over the Code First + Schema Compare pattern?
It simplifies deployment. If you didn't manage the migrations in code, then you would have to run the appropriate delta scripts manually on your production environment. With EF migrations, you can configure your application to migrate the database automatically to the latest version on start up.
Typically, before EF migrations, if you wanted to automate this you would either have to run the appropriate delta scripts during a custom installation routine, or write some infrastructure into your application which runs the delta scripts in code. This would need to know the current database version, so that it knows which of the scripts to run, which you would normally have in a DbVersion table or something similar. With EF migrations, this plumbing is already in place for you.
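For reference, EF Code First keeps this version bookkeeping in the dbo.__MigrationHistory table; a quick way to inspect which migrations have been applied (a sketch; columns as in EF 5):

-- Each applied migration leaves one row behind
SELECT MigrationId, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY MigrationId;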
Using migrations means the alignment of model and database changes is automated and therefore easier to manage.
I'm using Migrator.NET to write database migrations for the application. Marc-André Cournoyer wrote:
Like any code in your application you must test your migrations. Ups and downs code. Make it part of your continuous build process and test it on as many different databases and environments as you can.
How do I do that? Say I have an Up() method which creates a table and a Down() method which drops the same table, and I'm using SQL Server. What would a test look like? Should I be running SQL queries against the system tables, like select * from sys.columns, to check that the table was created and has the proper structure? What if we're using NHibernate?
EDIT
I mean migrations in the Rails ActiveRecord Migrations sense (creating, modifying and tearing down databases in small steps based on C# code).
EDIT 2
And here's where I read that we should test migrations. The blog post is actually linked from Migrator's wiki.
Do you test your DAL - some sort of integration test?
You need more than a migration script, you also need a baseline script. When you want to test a database upgrade, you should run all the scripts from the baseline on a testing/staging server to create the newest version of the database. Then test your DAL against the up-to-date test database. If all the DAL tests succeed then your migration should have been successful (otherwise your DAL tests are not complete enough).
It's an expensive test to run, but it's pretty much rock solid. I'll personally admit to doing a lot of this manually at the moment; we have an in-house migration tool that will apply all scripts (including the baseline), so the test database setup and DAL tests are separate steps. It works though. If you want to make sure that a table was created, there's no better method than to actually try to insert data into it!
You can try to verify the results by looking at system catalogs and INFORMATION_SCHEMA views and so on, but ultimately the only way to be sure it's actually working is to try to use the new objects. Just because the objects are there doesn't mean that they're functional.
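A minimal sketch combining both checks after running Up() (object names are hypothetical):

-- Structural check: does the table have the expected columns?
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'new_table';

-- Functional check: prove the new table actually accepts and returns data
INSERT INTO dbo.new_table (id, name) VALUES (1, N'smoke test');
DELETE FROM dbo.new_table WHERE id = 1;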
Maybe this script can help you:
http://www.benzzon.se/forum/uploads/benzzon/2006-03-27_134824_sp_CompareDB.txt
This script compares two DBs (structure and data).
Source control is for taking a snapshot of your current code base. Migration is for moving your database from one version to the next, so that at some future point you can take an old database, apply migrations and work with the latest code base.
I've never seen the actual migrations tested. I have seen the results tested, and they have caught/reminded me to run the latest migrations.
describe User do
it { should have_column :name, :type => :string }
it { should validate_presence_of :name }
end
So someone changes the model. Adds a test to reflect the model. Adds the migration. Then commits the source.
You grab the latest, run tests. Tests fail because the database doesn't correspond. You remember to run migrations, then rerun tests. Success.
Treat migrations testing as part of your overall persistence testing strategy if you are using NHibernate: if you can create and save all of your entities without any errors, your database and your mappings should be correct.
You COULD do a comparison of database system objects, but you would need a target to compare against - otherwise how would you know whether it passed or failed?
I think you may be better off creating a set of edge-case CRUD test cases that exercise the entities or operations in the data layer. If any of these fail, the database is not in sync with what is required, e.g. if the insert of a char(20) field fails because it is only char(15) in the database. Then a DB structure comparison can be done to see what is off.
You may be able to short-circuit this by focusing only on the recently changed items, and assuming prior changes have been applied.
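A sketch of such an edge-case probe for the char(20)/char(15) example above (table and column names are hypothetical):

-- If dbo.Customer.Code is still char(15) in the database, inserting a
-- 20-character value fails with a truncation error
INSERT INTO dbo.Customer (CustomerId, Code)
VALUES (1, REPLICATE('x', 20));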
I'm looking for an answer to this as well. I think this should be tested in an integration environment rather than a unit test one: For unit tests (DAL) I drop the database and re-create it.
However, ideally I'd like to have an integration environment where my DB is replicated from production and DB migration scripts run both ways:
Upwards to ensure a smooth upgrade of production and Downwards to ensure rollbacks are possible.
For several years I have been using a testing tool called qmTest that allows me to do test-driven database development for some Firebird databases. I write a test for a new feature (table, trigger, stored procedure, etc.) until it fails, then modify the database until the test passes. If necessary, I do more work on the test until it fails again, then modify the database until the test passes. Once the test for the feature is complete and passes 100% of the time, I save it in a suite of other tests for the database. Before moving on to another test or a deployment, I run all the tests as a suite to make sure nothing is broken. Tests can have dependencies on other tests, and the results are recorded and displayed in a browser.
Nothing new here, I am sure.
Our shop is aiming to standardize on MS SQL Server and I want to use the same procedure for developing our databases. Does anyone know of tools that allow or encourage this kind of development? I believe Team System does, but we do not own that at this point, and probably will not for some time.
I am not opposed to scripting, but would welcome a more graphical environment.
Any suggestions?
Team System is probably the best-known solution, but you could also try TSQLUnit (SourceForge).
I haven't used it myself, but this article does a decent job of introducing it.
Check out http://www.sqlservercentral.com/articles/Testing/66553/
and http://www.sqlservercentral.com/articles/Database+Design/66845/
These are fairly crude articles about doing everything within T-SQL.
Have you thought about using NHibernate and using TestDriven or similar just for the tests?
On projects where I didn't have access to Team System for DB pros, I have used SQL scripts combined with MSBuild and the SDC tasks library for MSBuild (http://www.codeplex.com/sdctasks). The MSBuild script calls an SDC task to run my SQL scripts in a particular order (e.g. create DB, create tables, etc.) on a particular connection string. The scripts always check whether an object exists and do teardown first before building it back up.
I place the SQL and MSBuild scripts in a regular Visual Studio database project (which does nothing special, so you could choose to use a simple empty project), so everything is source-controlled.
With such a set of scripts, you can set up a new database for each test run. You can then use insert scripts to populate it with data and run unit tests against it.
These scripts are also useful for setting up databases from scratch in different environments (DEV/TST/QUA/...)
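The idempotent teardown-then-build pattern those scripts follow looks roughly like this per object (names are hypothetical):

-- Tear down first if the object already exists...
IF OBJECT_ID('dbo.Customer', 'U') IS NOT NULL
    DROP TABLE dbo.Customer;
GO
-- ...then build it back up
CREATE TABLE dbo.Customer (
    CustomerId INT NOT NULL PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
);
GO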
I was able to adequately apply a test-driven development style against SQL Server databases using TSQLUnit. I followed the same flow as you described: writing a unit test sproc first that fails, then making the changes necessary for the test to pass. Over time, I also built up a suite of tests which, when executed, validated that nothing broke while making any new changes.
There were some tough spots (including extreme difficulties in writing tests for existing sprocs) but it worked, especially for schema changes. However, I would recommend looking at T.S.T. the T-SQL Test Tool, which unlike TSQLUnit (where I had to roll my own) has built-in support for assertions.