I am using the Dapper QueryMultipleAsync to split the result of a SQL query into a number of different objects.
However I am having trouble finding a way to unit test the method that this call is in. I have been using FakeItEasy to fake out dependencies for the unit tests. As part of the tests I wanted to fake QueryMultipleAsync to check that other calls are made, but whichever way, and at whatever level, I try this I seem to get errors.
Does anyone have any experience trying to fake out this dapper element? If so how did you do it?
I agree with @Marc Gravell's comment, in that I'd encapsulate all the data access code and test it via integration tests. One other point is that QueryMultipleAsync is a static extension method, not a virtual instance method, so it cannot be faked by FakeItEasy; adding a fakeable layer of abstraction is the only way to isolate that call from the code you want to test.
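As a rough sketch of such an abstraction layer, assuming hypothetical Product/Order types and an IMultiQueryExecutor interface (the names are mine, not from the question):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;
using Dapper;

public class Product { public int Id { get; set; } public string Name { get; set; } }
public class Order { public int Id { get; set; } public int ProductId { get; set; } }

// The interface is what the rest of the code depends on, and what
// FakeItEasy fakes in tests via A.Fake<IMultiQueryExecutor>().
public interface IMultiQueryExecutor
{
    Task<(IEnumerable<Product> Products, IEnumerable<Order> Orders)> LoadCatalogAsync();
}

// The only class that touches Dapper directly; cover it with integration tests.
public class DapperMultiQueryExecutor : IMultiQueryExecutor
{
    private readonly IDbConnection _connection;
    public DapperMultiQueryExecutor(IDbConnection connection) { _connection = connection; }

    public async Task<(IEnumerable<Product> Products, IEnumerable<Order> Orders)> LoadCatalogAsync()
    {
        using (var grid = await _connection.QueryMultipleAsync(
            "SELECT * FROM Product; SELECT * FROM [Order];"))
        {
            var products = await grid.ReadAsync<Product>();
            var orders = await grid.ReadAsync<Order>();
            return (products, orders);
        }
    }
}
```

A test can then fake the interface (`A.CallTo(() => fake.LoadCatalogAsync()).Returns(...)`) and assert that the code under test consumes the two result sets correctly, without Dapper ever being invoked.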
I have inherited a Web API 2 project written in C#/.NET that uses ADO.NET to access an SQL Server database.
The data access layer of the project contains many methods which look similar to this:
public class DataAccessLayer
{
    private SqlConnection _DBConn;

    public DataAccessLayer()
    {
        _DBConn = new SqlConnection(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString);
    }

    public string getAllProductsAsJSON()
    {
        DataTable dt = new DataTable();
        using (SqlConnection con = _DBConn)
        {
            using (SqlCommand cmd = new SqlCommand("SELECT productId, productName FROM product ORDER BY addedOn DESC", con))
            {
                cmd.CommandType = CommandType.Text;
                // add parameters to the command here, if required.
                con.Open();
                SqlDataAdapter da = new SqlDataAdapter(cmd);
                da.Fill(dt);
                return JsonConvert.SerializeObject(dt);
            }
        }
    }

    // ... more methods here, but all basically following the above style of
    // opening a new connection each time a method is called.
}
Now, I want to write some unit tests for this project. I have studied the idea of using SQL transactions to insert mock data into the database, test against that data, and then roll back the transaction. This allows testing against a "live" (development) database, so you get access to real SQL Server functionality without mocking it out completely (e.g. you can verify that your views/functions return valid data AND that the API processes that data correctly, all at once). Some of the methods in the data access layer add data to the database, so the idea is to start a transaction, call a set of DAL methods to insert mock data, call other methods and assert on the results, and then roll back the entire test so that no mock data gets committed.
The problem I am having is that, as you can see, this class has been designed to create a new database connection every single time a query is made. Thinking as the original developer probably did, I can see how this makes at least some sense: these classes are used by a web API, so a persistent database connection would be impractical, and if a web API call involves transactions you do need a separate connection per request to keep them isolated.
However, because this is happening I don't think I can use the transaction idea to write tests as I described, because uncommitted data would not be accessible across database connections. So if I wrote a test which calls DAL methods (and also business-logic layer methods which in turn call DAL methods), each method will open its own connection to the database, and thus I have no way to wrap all of the method calls in a transaction to begin with.
I could rewrite each method to accept a SqlConnection as one of its parameters, but if I do this, I not only have to refactor over 60 methods, I also have to rework every single place those methods are called in the Web API controllers. I would then be moving the burden of creating and managing DB connections into the Web API (and away from the DAL, which is where it philosophically belongs).
Short of literally rewriting/refactoring 60+ methods and the entire Web API, is there a different approach I can take to writing unit tests for this project?
EDIT: My new idea is to simply remove all calls to con.Open(). Then, in the constructor, not just create the connection but also open it. Finally, I'll add beginTransaction, commitTransaction and rollbackTransaction methods that operate directly upon the connection object. The core API never needs to call these functions, but the unit tests can call them. This means the unit test code can simply create an instance, which will create a connection that persists across the entire lifetime of the class. It can then call beginTransaction, do whatever tests it wants, and finally call rollbackTransaction. Having a commitTransaction is good for completeness, and exposing this functionality to the business-logic layer has potential uses as well.
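A sketch of what that edit could look like, under one assumption worth calling out: every SqlCommand created while a local transaction is open must be enlisted in it (via its Transaction property or constructor argument), or ADO.NET will throw. The connection-string constructor parameter here is my addition for illustration:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using Newtonsoft.Json;

public class DataAccessLayer : IDisposable
{
    private readonly SqlConnection _conn;
    private SqlTransaction _tx; // non-null only between Begin and Commit/Rollback

    public DataAccessLayer(string connectionString)
    {
        _conn = new SqlConnection(connectionString);
        _conn.Open(); // opened once; shared by every method for the object's lifetime
    }

    // Called by tests (and optionally by the business-logic layer).
    public void BeginTransaction()    { _tx = _conn.BeginTransaction(); }
    public void CommitTransaction()   { _tx?.Commit();   _tx = null; }
    public void RollbackTransaction() { _tx?.Rollback(); _tx = null; }

    public string GetAllProductsAsJson()
    {
        var dt = new DataTable();
        using (var cmd = new SqlCommand(
            "SELECT productId, productName FROM product ORDER BY addedOn DESC", _conn, _tx))
        using (var da = new SqlDataAdapter(cmd))
        {
            da.Fill(dt); // no con.Open() here any more
        }
        return JsonConvert.SerializeObject(dt);
    }

    public void Dispose()
    {
        _tx?.Rollback();
        _conn.Dispose();
    }
}
```

A test would then create the instance, call BeginTransaction, exercise the DAL methods, and finish with RollbackTransaction so nothing is committed.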
There are multiple possible answers to this question, depending on what exactly you are trying to accomplish:
Are you primarily interested in unit testing your application logic (e.g., controller methods), rather than the data access layer itself?
Are you looking to unit test the logic inside your data access layer?
Or are you trying to test everything together (i.e., integration or end-to-end testing)?
I am assuming you are interested in the first scenario, testing your application logic. In that case, I would advise against connecting to the database at all (even a development database) in your unit tests. Generally, unit tests should not be interacting with any outside system (e.g., database, filesystem, or network).
I know you mentioned you were interested in testing multiple parts of the functionality all at once:
I have studied the idea of using SQL transactions [...] so you can have access to the SQL Server functionality without mocking it out completely (e.g. you can make sure your views/functions are returning valid data AND that the API is properly processing the data all at once).
However, that rather goes against the philosophy of unit testing. The whole point of a unit test is to test a single unit in isolation. Typically, this unit ("System Under Test", or SUT, in more technical terms) is a single method inside some class (for instance, an action method in one of your controllers). Anything other than the SUT should be stubbed or mocked out.
To accomplish this, broadly speaking, you will need to refactor your code to use dependency injection, and also use a mocking framework in your tests:
Dependency Injection: If you are not using a dependency injection framework already, chances are your controller classes are instantiating your DataAccessLayer class directly. This approach will not work for unit tests - instead, you will want to refactor the controller class to accept its dependencies via the constructor, and then use a dependency injection framework to inject the real DataAccessLayer in your application code, and inject a mock/stub implementation in your tests. Some popular dependency injection frameworks include Autofac, Ninject, and Microsoft Unity. Depending on which framework you choose, this may also require that you refactor DataAccessLayer a bit so it implements an interface (e.g., IDataAccessLayer).
Mocking Framework: In your tests, rather than using the real DataAccessLayer class directly, you will instead create a mock, and set up expectations on that mock. Some popular mocking frameworks for .NET include Moq, RhinoMocks, and NSubstitute.
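To make the shape of that concrete, a minimal sketch (the IDataAccessLayer interface and ProductsController here are illustrative, not from the question):

```csharp
public interface IDataAccessLayer
{
    string GetAllProductsAsJson();
}

// The controller receives its dependency instead of newing it up.
public class ProductsController
{
    private readonly IDataAccessLayer _dal;
    public ProductsController(IDataAccessLayer dal) { _dal = dal; }

    public string Get() { return _dal.GetAllProductsAsJson(); }
}

// In a unit test, a Moq setup replaces the real DAL entirely:
//   var dal = new Mock<IDataAccessLayer>();
//   dal.Setup(d => d.GetAllProductsAsJson()).Returns("[]");
//   Assert.AreEqual("[]", new ProductsController(dal.Object).Get());
```

The test never opens a database connection; it only verifies how the controller uses whatever the DAL hands back.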
Granted, if the code was not initially written with unit testing in mind (i.e., no dependency injection), this may involve a fair amount of refactoring. This is where alltej's suggestion comes in: creating a wrapper for interacting with the legacy (i.e., untested) code.
I strongly recommend you read the book The Art of Unit Testing: With Examples in C# (by Roy Osherove). That will help you understand the ideology behind unit testing a bit better.
If you are actually interested in testing multiple parts of your functionality at once, then what you are describing (as others have pointed out) is integration, or end-to-end testing. The setup for this would be entirely different (and often more challenging), but even then, the recommended approach would be to connect to a separate database (specifically for integration testing, separate even from your development database), rather than rolling back transactions.
When working with a legacy system, what I would do is create a wrapper for these DLLs/projects, to isolate communication with the legacy code and to protect the integrity of your new subsystem/domain or bounded context. This isolation layer is known as an anticorruption layer in DDD terminology. It contains interfaces written in terms of your new bounded context; the interfaces adapt and interact with your API layer or with other services in the domain. You can then write unit/mock tests against these interfaces. You can also create integration tests from your anticorruption layer, which will eventually call the database via the legacy DLLs.
Actually, from what I see in the code, the DAL creates only one connection in the constructor and then it keeps using it to fire commands, one command per method in the DAL. It will only create new connections if you create another instance of the DAL class.
Now, what you are describing is multiple kinds of testing: integration and end-to-end. And I am not convinced that the transaction idea, while original, is actually doable.
When writing integration tests, I prefer to actually create all the data required by the test and then simply remove it at the end, that way nothing is left behind and you know for sure if your system works or not.
So imagine you're testing retrieving account data for a user, I would create a user, activate them, attach an account and then test against that real data.
The UI does not need to go all the way through, unless you really want to do end to end tests. If you don't, then you can just mock the data for each scenario you want to test and see how the UI behaves under each scenario.
What I would suggest is that you test your api separately, test each endpoint and make sure it works as expected with integration tests covering all scenarios needed.
If you have time, then write some end to end tests, possibly using a tool like Selenium or whatever else you fancy.
I would also extract an interface from that DAL in preparation of mocking the entire layer when needed. That should give you a good start in what you want to do.
Use case:
I'm writing system tests using Geb/Selenium (so outside of angular).
I want to decorate $http to log all requests/responses at run time.
and here's the catch: without touching the source code.
Before you rush to answer "use $provide#decorator", for example,
http://blog.xebia.com/2014/08/08/extending-angularjs-services-with-the-decorate-method/
That solution for this use case means adding a test hook into production code... that's normally a bad thing I want to avoid if possible.
Update: Geb allows you to run JavaScript in the browser window. So, just for the heck of it, I ran the tutorial code to decorate $http. Unfortunately, it didn't work, because apparently you can't reconfigure the app after it's been loaded. But even if it did work, this brings up another interesting point: I need to override $http before any modules have had a chance to use it.
Since decorating $http service would be the cleanest way of doing this, you can avoid polluting production code by using something like ng-constants and gulp/grunt to only add decoration code for a 'test' environment.
See related Q/A here: How do I configure different environments in Angular.js?
If you are intent on changing this at runtime (where runtime means a test environment), you may need to go 'closer to the metal' and deal with XMLHttpRequests directly: Add a "hook" to all AJAX requests on a page
I am learning to write unit tests for my angular app. My controller has several dependencies upon Resources, factories, services etc
angular.module('app').controller('Ctrl1',['$scope','Factory1','Factory2','Resource1','Resource2' ... and so on
The Resource1, Resource2, etc of course fetch data from the server. Several of these resources are used to fetch data from the server and initialize $scope.
After reading innumerable tutorials all over the net, I have a few queries on the right way to write my Jasmine tests.
In the beforeEach section of the Jasmine test, am I supposed to provide all dependencies right away, or should I provide only the ones I care about testing?
What I want to test is that Resource1 gets called, fetches some data, and initializes some part of $scope; then Resource2 gets called, fetches some data, and initializes some other part of $scope; and so on.
What is the right way to perform the above? I mean, am I actually supposed to fetch the data in the test, or should I be using some mock HTTP service? I know tutorials mention that we should use a mock HTTP service, but then how will this test my controller, since I am not actually fetching the real data?
This part is really confusing and I have yet to find a blog/article that explains this clearly (I might just write one once I figure things out.. I am sure others are confused too)
Where to Provide Dependencies
You should provide all of your dependencies in your first beforeEach statement. I mock/fake mine with SinonJs. This helps you take advantage of angular's dependency injection to isolate each piece of your application. You should never call a dependency and expect an actual instance of it to return data in a unit test, as that would increase the coupling of your code and make it far more brittle.
Mocking Resource Calls
For resource calls, I simply create a fake resource object with promises and whatnot included. You can then resolve or reject those promises and provide fake data to test your controller logic.
In the plunk below, I've essentially mocked out a whole promise chain. You simply tell your tests to either reject or resolve those promises, faking a successful or failure call to the resource. You then have to make sure your scope cycles with scope.$apply(). I actually forgot to do this which caused me quite a bit of trouble just now.
Conclusion
Here is the Plunk. Let me know if you need to see how I test the actual resource code in my repositories. In those services I have to actually mock out the HTTP calls, which Angular makes extremely easy.
I'm not sure any of this is "Best Practice" but it has worked for me. I learned the basics from looking at other people's source code and watching the Pluralsight video AngularJS Fundamentals, which has a very small section on testing.
Useful Resources
Testing AngularJS Directives. This is the hardest thing to test and understand in Angular. Or at least it was for me.
This one is on Dependency Injection in Angular. I have it marked
about where they start talking about unit testing.
This Pluralsight course got me started with testing JavaScript in general. Very helpful for learning Jasmine if you are new to it.
The AngularJS GitHub repo is very useful if you want to see Jasmine tests in action. Here is a set of tests that simulates an HTTP backend.
I'm near the beginning of a new project and (gasp!) for the first time ever I'm trying to include unit tests in a project of mine.
I'm having trouble devising some of the unit tests themselves. I have a few methods which have been easy enough to test (pass in two values and check for an expected output). I've got other parts of the code which are doing more complex things like running queries against the database and I'm not sure how to test them.
public DataTable ExecuteQuery(SqlConnection ActiveConnection, string Query, SqlParameterCollection Parameters)
{
    DataTable resultSet = new DataTable();
    SqlCommand queryCommand = new SqlCommand();
    try
    {
        queryCommand.Connection = ActiveConnection;
        queryCommand.CommandText = Query;
        if (Parameters != null)
        {
            foreach (SqlParameter param in Parameters)
            {
                queryCommand.Parameters.Add(param);
            }
        }
        SqlDataAdapter queryDA = new SqlDataAdapter(queryCommand);
        queryDA.Fill(resultSet);
    }
    catch (Exception ex)
    {
        //TODO: Improve error handling
        Console.WriteLine(ex.Message);
    }
    return resultSet;
}
This method essentially takes in all the necessary bits and pieces to extract some data from the database, and returns the data in a DataTable object.
The first question is probably the most complex: What should I even test in a situation like this?
Once that's settled comes the question of whether or not to mock out the database components or try to test against the actual DB.
What are you testing?
There are three possibilities, off the top of my head:
A. You're testing the DAO (data access object) class, making sure it's correctly marshaling the values/parameters being passed to the database, and correctly marshaling/transforming/packaging results gotten from the database.
In this case, you don't need to connect to the database at all; you just need a unit test that replaces the database (or the intermediate layer, e.g., JDBC, (N)Hibernate, iBatis) with a mock.
B. You're testing the syntactic correctness of (generated) SQL.
In this case, because SQL dialects differ, you want to run the (possibly generated) SQL against the correct version of your RDBMS, rather than attempting to mock all quirks of your RDBMS (and so that any RDBMS upgrades that change functionality are caught by your tests).
C. You're testing the semantic correctness of your SQL, i.e., that for a given baseline dataset, your operations (accesses/selects and mutations/inserts/updates) produce the expected new dataset.
For that, you want to use something like dbunit (which allows you to set up a baseline and compare a result set to an expected result set), or possibly do your testing wholly in the database, using the technique I outline here: Best way to test SQL queries.
This is why (IMHO) unit tests can sometimes create a false sense of security on the part of developers. In my experience with applications that talk to a database, errors are commonly the result of data being in an unexpected state (unusual or missing values etc.). If you routinely mock up data access in your unit tests, you will think your code is working great when it is in fact still vulnerable to this kind of error.
I think your best approach is to have a test database handy, filled with gobs of crappy data, and run your database component tests against that. All the while remembering that your users will be much much better than you are at screwing up your data.
The whole point of a unit test is to test a unit (duh) in isolation. The whole point of a database call is to integrate with another unit (the database). Ergo: it doesn't make sense to unit test database calls.
You should, however, integration test database calls (and you can use the same tools you use for unit testing if you want).
For the love of God, don't test against a live, already-populated database. But you knew that.
In general you already have an idea of what sort of data each query is going to retrieve, whether you're authenticating users, looking up phonebook/org chart entries, or whatever. You know what fields you're interested in, and you know what constraints exist on them (e.g., UNIQUE, NOT NULL, and so on). You're unit testing your code that interacts with the database, not the database itself, so think in terms of how to test those functions. If it's possible for a field to be NULL, you should have a test that makes sure that your code handles NULL values correctly. If one of your fields is a string (CHAR, VARCHAR, TEXT, &c), test to be sure you're handling escaped characters correctly.
Assume that users will attempt to put anything* into the database, and generate test cases accordingly. You'll want to use mock objects for this.
* Including undesirable, malicious or invalid input.
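For example, the NULL case above can be pinned down without any database at all, by unit testing the mapping code against DBNull directly (the RowMapper helper here is a made-up example, not from the question):

```csharp
using System;

public static class RowMapper
{
    // Defensive mapping: the database column is nullable, the domain value is not.
    public static string ReadName(object dbValue)
    {
        return dbValue == null || dbValue == DBNull.Value ? string.Empty : (string)dbValue;
    }
}

// RowMapper.ReadName(DBNull.Value) == ""  and  RowMapper.ReadName("Alice") == "Alice"
```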
Strictly speaking, a test that writes/reads from a database or a file system is not a unit test. (Although it may be an integration test and it may be written using NUnit or JUnit). Unit-tests are supposed to test operations of a single class, isolating its dependencies. So, when you write unit-test for the interface and business-logic layers, you shouldn't need a database at all.
OK, but how do you unit-test the database access layer? I like the advice from the book xUnit Test Patterns (the link points to the book's "Testing w/ DB" chapter). The keys are:
use round-trip tests
don't write too many tests in your data access test fixture, because they will run much slower than your "real" unit tests
if you can avoid testing with a real database, test without a database
You can unit test everything except: queryDA.Fill(resultSet);
As soon as you execute queryDA.Fill(resultSet), you either have to mock/fake the database, or you are doing integration testing.
I for one don't see integration testing as bad; it just catches a different sort of bug, has different odds of false negatives and false positives, and isn't likely to be run very often because it is so slow.
If I were unit testing this code, I'd be validating that the parameters are built correctly: does the command builder create the right number of parameters? Do they all have a value? Are nulls, empty strings and DBNull handled correctly?
Actually filling the dataset is testing your database, which is a flaky component outside the scope of your DAL.
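One way to make those checks possible is to pull the command construction out of ExecuteQuery into a helper that never touches the connection. This CommandBuilder is a hypothetical refactoring, not code from the question; note it takes a plain sequence of parameters, since a SqlParameter cannot belong to two SqlParameterCollections at once:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class CommandBuilder
{
    // Builds the command without any connection, so it is unit-testable.
    public static SqlCommand Build(string query, IEnumerable<SqlParameter> parameters)
    {
        var cmd = new SqlCommand(query);
        if (parameters != null)
        {
            foreach (var p in parameters)
            {
                cmd.Parameters.Add(p);
            }
        }
        return cmd;
    }
}

// var cmd = CommandBuilder.Build("SELECT * FROM product WHERE productId = @id",
//     new[] { new SqlParameter("@id", 42) });
// Asserting cmd.Parameters.Count and cmd.Parameters["@id"].Value needs no database.
```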
For unit tests I usually mock or fake the database. Then use your mock or fake implementation via dependency injection to test your method. You'd also probably have some integration tests that will test constraints, foreign key relationships, etc. in your database.
As to what you would test: you'd make sure that the method uses the connection from the parameters, that the query string is assigned to the command, and that the result set returned is the same as the one you are providing via an expectation on the Fill method. Note: it's probably easier to test a Get method that returns a value than a Fill method that modifies a parameter.
The first question is probably the most complex: What should I even test in a situation like this?
Since your code is basically a DAO/repository without any business logic, you need an integration test, not a unit test.
Unit tests should test classes without external dependencies (like a DB or calls to other remote services).
You should always try to separate the business logic (your Domain Model) code from infrastructure code; then it will be easy to use unit tests.
Be careful with mocks; they can be a signal of bad design, meaning your business logic is mixed with infrastructure.
Check these patterns: "Domain Model", "Hexagonal Architecture", "Functional Core, Imperative Shell"
To do this properly, though, you should use some form of dependency injection (DI), and for .NET there are several frameworks. I am currently using the Unity Framework, but there are others that are easier to get started with.
Here is one link from this site on this subject, but there are others:
Dependency Injection in .NET with examples?
This would enable you to more easily mock out other parts of your application, by just having a mock class implement the interface, so you can control how it will respond. But, this also means designing to an interface.
Since you asked about best practices this would be one, IMO.
Then, not going to the db unless you need to, as suggested is another.
If you need to test certain behaviors, such as foreign key relationships with cascade delete, then you may want to write database tests for that. But generally, not hitting a real database is best, especially since more than one person may run the tests at a time; if they hit the same database, tests may fail because the expected data has changed.
Edit: By a database unit test I mean the following, as it is designed to use just T-SQL to do some setup, test, and teardown.
http://msdn.microsoft.com/en-us/library/aa833233%28VS.80%29.aspx
On a JDBC-based project, the JDBC connection can be mocked, so that tests can be executed without a live RDBMS, with each test case isolated (no data conflict).
It allows verifying that the persistence code passes the proper queries/parameters (e.g. https://github.com/playframework/playframework/blob/master/framework/src/anorm/src/test/scala/anorm/ParameterSpec.scala) and handles JDBC results (parsing/mapping) as expected ("takes in all the necessary bits and pieces to extract some data from the database, and returns the data in a DataTable object").
A framework like jOOQ or my framework Acolyte can be used for this: https://github.com/cchantep/acolyte .
Perhaps, a good approach would be to test the behaviour of your domain logic that communicates with the DB under the hood.
Without diving into DDD and CQRS, you could check out DbSample on GitHub, a sample EF Core based project with fully automated tests against services that work with MS SQL Server. It also has a GitHub Actions pipeline to run the tests in cloud builds.
An example of the test would be
[Fact]
public async Task Update_Client_Works()
{
    // GIVEN a DB with a client
    var existingClient = (await DataContext.AddAsync(new Client { Name = "Name" })).Entity;
    await DataContext.SaveChangesAsync();
    var clientId = existingClient.Id;

    // WHEN updating the name of the client (this domain command executes an SQL query)
    await _clientCommandService.Update(clientId, new CreateUpdateClientRequest("XYZ"));

    // THEN the name is updated
    var client = await DataContext.Clients.FindAsync(clientId);
    Assert.Equal("XYZ", client!.Name);
}
For a deeper dive into orchestrating tests, see "Pain & Gain of automated tests against SQL (MS SQL, PostgreSQL)" article. It goes into the woods of "how?" and "why?". Spoiler alert – it relies on Docker a lot.