Data mocking for UI tests - AngularJS

So we have a web app, and a bunch of E2E tests.
It's all great, except that it's a major pain to keep the data in a valid state. We're trying to write the tests so that they leave the data valid, but that is an overhead, and whenever a test fails it affects a lot of other tests.
So far:
We've been trying to do a database restore after every test run (we run local DBs for testing) - it's a pain.
We've been looking at putting the DB on a virtual machine and taking snapshots - the licensing costs are high.
I was experimenting with interceptors (it is an AngularJS app) that would intercept certain calls to services and return a predefined piece of data - it's hard to get working properly and it creates too much overhead.
This has got to be a very common pain point, yet I can't seem to find much about ways to approach it. So how do you solve this?

Related

Testing e2e in Angular, best practice

There are probably already some answers to this question, but I haven't found one that fits my specific scenario. Here is my situation: I'm working on a web app made in Angular where all the unit tests use mock data. Then we have some end-to-end tests written in Protractor. I'm not very excited about them because we are testing the user interface with data we get from a live API. I think we're using this approach because we have no control over the back end, but the side effect is that the database could change and mess up our tests. Also, the API we're using for the e2e tests is running on an internal network, meaning we cannot run the tests outside the office. I was thinking about mocking the HTTP responses in order to mock the database and be able to run all the tests from anywhere. The problem is that the backend logic could behave differently from what we simulate in our tests, meaning that as soon as we deploy the application it could work in an unexpected way.
What is the best practice and workflow to follow in a similar situation?
Best practice is subjective, but there are known solutions, each with pros and cons.
Using a shared environment
If manual testing happens in the same environment as your automated tests, you risk someone interfering with your tests. Copying data from production into this environment will also halt your tests, and is not good. There is extra effort in making your tests idempotent: the setup must put the data in the state each test expects, and that data must not conflict with what manual testers are doing. It is recommended that when you create an entity during test setup, you tag it with a unique token related to the test so that it is unique to that test. This is just hard and costly.
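For example, a tiny sketch of that unique-token idea (the helper and entity names are invented for illustration):

import java.util.UUID;

public class TestDataNames {
    // Tag every entity created by a test with a token unique to that run,
    // so it cannot collide with manual testing or with other automated tests.
    public static String uniqueName(String testName) {
        return "e2e-" + testName + "-" + UUID.randomUUID();
    }
}

// In the test setup (illustrative only):
// Customer customer = api.createCustomer(TestDataNames.uniqueName("checkout-flow"));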
Using separate e2e environment
This is clearly easier on your test idempotence as you have more control of the data and there is no manual intervention. You can empty the database or reseed it using several solutions (see below) before every test or group of tests. Still, you must be careful to ensure tests do not depend on or interfere with each other.
Mock the APIs
You can mock the APIs, but then it is not a true e2e test. Consumer-driven contracts work well here: if you know the outputs the APIs are tested against, you can use those outputs as mocks for the inputs of your e2e tests. These tests are blazing fast. If you don't have control over your environment and its data, or it is a third-party system, mocking the API is the recommended route. The risk is that you are not testing the real integration, which can hide a lot of failures.
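As one possible illustration (WireMock is used here purely as an example of an HTTP stub server and is not mentioned above; the endpoint and payload are made up):

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class StubbedBackend {
    public static void main(String[] args) {
        // Stub server that stands in for the real backend while the UI tests run
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Canned response for the endpoint the UI is expected to call;
        // ideally this payload comes straight from the provider's contract tests
        server.stubFor(get(urlEqualTo("/api/posts"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("[{\"id\": 1, \"title\": \"Hello\"}]")));

        // Point the app under test at http://localhost:8089, run the e2e suite,
        // then stop the stub server when done.
    }
}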
Use APIs to set up test data
This is a pretty good solution: not only does it catch issues with the APIs, it keeps your e2e tests focused on the area being tested, and you do not have to set up data through the GUI. Test setup and clean-up can be managed this way. It is usually quicker than setting data up through the GUI, though certainly not as fast as mocking the API responses.
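A minimal sketch of seeding data through the API before a UI test, using the JDK's HttpClient (the endpoint and payload are assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TestDataSeeder {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Create a post via the public API so the e2e test only exercises the UI flow under test
    public static String createPost(String baseUrl, String title) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/api/posts"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"title\": \"" + title + "\"}"))
                .build();
        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // e.g. the created entity with its id, for use in the test
    }
}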
Use the GUI to set up the test data
This can work, but you must be smart about it. Since you are sharing the environment with manual testing, you must ensure the data is in the correct state. It is wise to create separate entities related to your tests and not to share any data that someone might touch while testing manually. This is slower, and it complicates your tests because you spend most of your time navigating around and setting things up in the GUI.
Use scripts to load the data directly to the database
Avoid this: there is probably business logic you would be bypassing, which will lead to incorrect states. It is better to go through the API to load data, as it validates the input and runs the business logic.
Here are some relevant resources to follow up on:
Martin Fowler's write-up on testing microservices
https://medium.com/how-we-build-fedora/e2e-testing-with-angular-protractor-and-rails-725fbefb8149#.9rziv2gtp
How about getting a test version of the backend deployed that has a limited amount of data in it?
That way, after each round of testing has completed the database can then be reset with the original datasets loaded in.
This would ensure consistency in your results across tests, and means that if the backend guys make changes to their master branch, it won't affect your tests.

SpecFlow Integration Testing with Database Patterns

I'm attempting to set up SpecFlow for integration/acceptance testing. Our product has a backing database (not a huge one, though) in SQLite.
This is actually proving to be a slightly sticky point though; how do I model the database for the tests?
I would like to know what patterns others out there use for doing integration/acceptance testing with backing databases.
I can think of the following approaches:
Compile a database into the assembly with the tests, then shadow-copy it for each test. Seems slow though.
I could create the database in memory and populate it with pre-determined data.
I could create the database in memory and somehow have Givens populate the database. This seems like it would bloat the tests horribly, but might give them more control and make the tests less fragile.
I could abstract every database interaction and use mocks. Not in love with this idea since I'd like to use this to test the database interactions as well.
Compile the database into the tests and rely on clean-up code to return it to the base state (this one seems dodgy to me). I don't want to do it with transactions, since some tests involve multiple interactions (e.g. write an item, then attempt to read it back with different privileges).
Before considering how to test, I think you might find it valuable to look at what you want to test.
Starting with the data: I find it really helps to take a single element, or a small number of them, and imagine a set of events around them in order to give you the right test data to run your tests with. For example:
If you were working on a healthcare system, you might define a person "Bob" and then produce his life events. Bob was born 37 years ago today, fell off his bike as a child and broke his arm, got married, and has two children.
If you are working on a financial trading system, you might look at a day between opening and closing for a couple of stocks, e.g. "MSFT" and "APPL". On this day you might see one starting low and climbing, the other starting high and falling. A piece of news comes out that reverses their fortunes.
Now that you have the what, you can evaluate which of your scenarios actually work for your data. For example, “MSFT” and “APPL” could have thousands of price changes throughout the day, so generating the Givens and mocks would be very time-consuming; this data lends itself to being pre-captured. On the other hand, the “Bob” data works particularly well as generated data, because it can always be adjusted so that, for instance, today is his birthday.
One thing your question doesn't seem to consider is updating your data. For example, you might want a set of tests that work at various stages of your entities' life cycle, e.g. some tests deal with “Baby Bob”, others with “10yr old Bob” or “Married Bob”, etc. If your DB is read-only this isn't a problem, as long as your tests simply don't see the other data, but sometimes you want to build a story through your tests. If your tests do change the data, you will have to ensure either that your tests run in order (see MSTest OrderedTest or mbUnit DependsOn), or that each test deals with an isolated data entity (fine if your entity can be described in a single row, harder when you have to read many tables to get at it).
You might also want to consider what code you are testing; you can vary the approach across your different test sets. I currently work on a multi-tier application that has UI views, view models, client models, multiple communication systems, and server models, and I have different sets of tests for these. Some tests work within a single tier, mocking out the other tiers to keep the tests small. Other tests fire up a local server and a local client and wire the two up directly. Finally, some tests launch a full server process, communicate via EMS, and run simple client-side operations using everything but the UI views.
So now to actually answer your question,
Shadow copy your database - Yes, I've done this once with SQL Server Developer and had an xxx.mdb that got copied in before running the tests. However, some modern testing frameworks (e.g. NCrunch) will run tests in parallel, so this just breaks.
Create the database and pre-populate - Not done this one, but my concern would be what happens when a test changes the database to an unexpected state: other tests will fail even though they have done nothing wrong.
Create the database and use Givens - I've done this with NUnit via [SetUpFixture] on top of a LINQ to SQL DB. You still have concerns about parallel test runs, you have to balance the granularity of your Givens (see the Stack Overflow question “When do BDD scenarios become too specific”), and you have the data-update ordering/data-isolation problem, but this can work really well, allowing you to work through your data stories and grow the data throughout your tests. On the other hand, should one test fail and leave the data in a bad state, you can end up with lots of failures, but at least you simply need to look at the one that fails first. This kind of testing also does not play very nicely for developers on their workstations, as they can't just run a single test, particularly with tools such as NCrunch, which can run just the tests whose code has changed.
Mock the database - This is how I choose to do things now. The trick is that if you are following a reasonably strict TDD process where you only test the method you are working on, you end up with a few tests that exercise the database interaction, e.g. [Test] DALLayerTests.ShouldReadARowAndCreatePOCO(), but most others use mocked data to test what actually happens, e.g. [Test] BusinessObjectPersonTests.ShouldGetBirthdayCongratulations() (a rough Java analogue of the latter is sketched after this list).
Use clean up code - Never tried it, it sounds dodgy :-)
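For illustration, here is a rough Java/Mockito analogue of that split (the NUnit examples above are C#; all types and names here are invented): the DAL is mocked, so the test exercises only the business rule, while a separate, much smaller set of tests covers the real database access.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.LocalDate;
import java.time.MonthDay;
import org.junit.jupiter.api.Test;

class BusinessObjectPersonTests {

    // Hypothetical DAL interface and business object, for illustration only
    interface PersonDao {
        LocalDate birthDateOf(String name);
    }

    static class GreetingService {
        private final PersonDao dao;
        GreetingService(PersonDao dao) { this.dao = dao; }

        String greetingFor(String name) {
            // Business rule under test: congratulate people on their birthday
            LocalDate birthDate = dao.birthDateOf(name);
            return MonthDay.from(birthDate).equals(MonthDay.from(LocalDate.now()))
                    ? "Happy birthday, " + name + "!"
                    : "Hello, " + name;
        }
    }

    @Test
    void shouldGetBirthdayCongratulations() {
        PersonDao dao = mock(PersonDao.class);
        when(dao.birthDateOf("Bob")).thenReturn(LocalDate.now().minusYears(37));

        assertEquals("Happy birthday, Bob!", new GreetingService(dao).greetingFor("Bob"));
    }
}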

Unit testing RESTful web services

I am wondering if anyone knows the proper way to unit test a RESTful web service. I have a set of web services built using Recess, and I would like to write test code for them. Unfortunately, since my web services are tied to a database, my tests end up populating the database, which seems like a problem.
I am mostly asking about the proper approach to dealing with this from a unit test standpoint. Do I clear the database of the values I have inserted after testing? Do I have a special test database with a whole set of special test routes? I am at a bit of a loss for the best way to approach this.
Obviously, in other cases of similar database wrapper classes you would just pass in a dummy database that you set up at the beginning of the tests. This seems like it would be much more challenging, though, when working with a RESTful framework like Recess.
I'd appreciate any thoughts you all might have on the right way to deal with tests saving information to the database.
Thanks in advance.
Generally when testing a web service you are testing the full stack, from the outside in. This means you request a resource and check if the results conform to your expectations.
In nearly all cases populating the database right before every request is a good approach. It might seem like overkill, but in reality with a web service you can't guarantee proper test coverage by mocking/stubbing various elements.
Coming from the Ruby world, Cucumber is the ideal approach as it lets you test from a high level. When you combine this with RSpec for actual unit testing (lower-level tests that query your objects directly), you get the best of both worlds. These libraries even work with something called Database Cleaner, which will manage populating and depopulating the database for you.
You might find the following blog post by RSpec's author very helpful, as it explains brilliantly why you should avoid too much mocking and stubbing: http://blog.davidchelimsky.net/2011/09/22/avoid-stubbing-methods-invoked-by-a-framework/
Generally speaking you have two options:
1) Use a dedicated test database with known data on which you can set your expectations - replace the DB with a "pristine DB" before starting testing. This would be considered integration testing, since you are in fact dependent on the database.
2) Make your code independent of the actual data store and pass the dependency in to the persistence layer. For unit testing you can write (or mock out) a custom persistence layer/object that allows you to observe the state changes you are unit testing.
A healthy mix of both depending on the scenario usually provides good coverage.
Also, instead of testing your RESTful web service itself, consider just delegating to a POCO within each service endpoint and testing those POCOs directly - they are much easier to test, and all you have left to do is verify the mapping between the service endpoint and the POCO (a sketch of this split is shown below).
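A hedged sketch of that split, in Java with JAX-RS annotations for illustration (all names are invented): the endpoint is a thin shell, and the logic lives in a plain object that can be unit-tested without the web layer or the database.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// The endpoint does nothing but translate HTTP into a plain object call...
@Path("/greetings")
public class GreetingResource {
    private final GreetingLogic logic = new GreetingLogic();

    @GET
    @Path("/{name}")
    public String greet(@PathParam("name") String name) {
        return logic.greet(name);
    }

    // ...so the logic lives in a plain class that is unit-tested directly,
    // leaving only the endpoint-to-POCO mapping to verify against the running service.
    public static class GreetingLogic {
        public String greet(String name) {
            return name == null || name.isBlank() ? "Hello, stranger" : "Hello, " + name;
        }
    }
}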
My understanding is that if you run your tests in this order, you can exercise all the verbs and yet leave no additional data in the DB at the end (see the sketch at the end of this answer):
POST (add a new record)
GET (fetch the newly added record)
PUT/PATCH (modify the newly added record)
DELETE (delete the newly added record)
Of course, somebody else using the database at the same time might see transient values while the test runs.
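For illustration, a sketch of that sequence as a single test using the JDK's HttpClient; the endpoint, payloads, and status codes are assumptions about a hypothetical API:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CrudRoundTripTest {
    private static final String BASE = "http://localhost:8080/api/items";
    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void allVerbsLeaveNoDataBehind() throws Exception {
        // POST: add a new record and remember where it lives
        HttpResponse<String> created = send(HttpRequest.newBuilder(URI.create(BASE))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\": \"widget\"}")));
        assertEquals(201, created.statusCode());
        String location = created.headers().firstValue("Location").orElseThrow();

        // GET: fetch the newly added record
        assertEquals(200, send(HttpRequest.newBuilder(URI.create(location)).GET()).statusCode());

        // PUT: modify the newly added record
        assertEquals(200, send(HttpRequest.newBuilder(URI.create(location))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\": \"gadget\"}"))).statusCode());

        // DELETE: remove it again, leaving the database as we found it
        assertEquals(204, send(HttpRequest.newBuilder(URI.create(location)).DELETE()).statusCode());
    }

    private HttpResponse<String> send(HttpRequest.Builder builder) throws Exception {
        return client.send(builder.build(), HttpResponse.BodyHandlers.ofString());
    }
}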

When to mock database access

What I've done many times when testing database calls is set up a database, open a transaction, and roll it back at the end. I've even used an in-memory SQLite DB that I create and destroy around each test. This works and is relatively quick.
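In rough form, that pattern looks something like this (a sketch with an invented schema, using JUnit and an in-memory SQLite JDBC connection):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class RollbackPerTestExample {
    private Connection connection;

    @BeforeEach
    void openTransaction() throws Exception {
        // Fresh in-memory database per test; it vanishes when the connection closes
        connection = DriverManager.getConnection("jdbc:sqlite::memory:");
        try (Statement ddl = connection.createStatement()) {
            ddl.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)");
        }
        connection.setAutoCommit(false); // everything the test writes stays uncommitted
    }

    @AfterEach
    void rollbackAndClose() throws Exception {
        connection.rollback(); // discard whatever the test changed
        connection.close();
    }

    @Test
    void insertedUserCanBeRead() throws Exception {
        try (Statement stmt = connection.createStatement()) {
            stmt.executeUpdate("INSERT INTO users (name) VALUES ('Bob')");
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM users WHERE name = 'Bob'")) {
                assertTrue(rs.next());
            }
        }
    }
}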
My question is: should I mock the database calls, should I use the technique above, or should I use both - one for unit tests, one for integration tests (which, to me at least, seems like double work)?
The problem is that if you use your technique of setting up a database, opening transactions and rolling back, your unit tests will rely on the database service, connections, transactions, the network, and so on. If you mock this out, there is no dependency on other pieces of code in your application and no external factors influencing your unit-test results.
The goal of a unit test is to test the smallest testable piece of code without involving other application logic. This cannot be achieved when using your technique IMO.
Making your code testable by abstracting your data layer is good practice. It will make your code more robust and easier to maintain. If you implement a repository pattern, mocking out your database calls is fairly easy.
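A minimal sketch of what that abstraction might look like (names invented for illustration): production code depends on the interface, and unit tests substitute an in-memory fake for the real database-backed implementation.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Production code depends only on this abstraction...
interface UserRepository {
    void save(String id, String name);
    Optional<String> findName(String id);
}

// ...which unit tests satisfy with a trivial in-memory fake instead of a real database.
class InMemoryUserRepository implements UserRepository {
    private final Map<String, String> store = new HashMap<>();

    @Override
    public void save(String id, String name) {
        store.put(id, name);
    }

    @Override
    public Optional<String> findName(String id) {
        return Optional.ofNullable(store.get(id));
    }
}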
Also, unit tests and integration tests serve different needs. Unit tests prove that a piece of code works technically, and catch corner cases.
Integration tests verify the interfaces between components against a software design. Unit tests alone cannot verify the functionality of a piece of software.
HTH
All I have to add to @Stephane's answer is: it depends on how you fit unit testing into your own development practices. If you've got end-to-end integration tests involving a real database which you create and tidy up as needed - provided you've covered all the different paths through your code and the various eventualities that could occur with your users hacking POST data, etc. - you're covered from the point of view of your tests telling you whether your system is working, which is probably the main reason for having tests.
I would guess, though, that having each of your tests run through every layer of your system makes test-driven development very difficult. Needing every layer in place and working in order for a test to pass pretty much rules out spending a few minutes writing a test, a few minutes making it pass, and repeating. This means your tests can't guide you in how individual components behave and interact; your tests won't force you to make things loosely coupled, for example. Also, say you add a new feature and something breaks elsewhere; granular tests which run against components in isolation make tracking down what went wrong much easier.
For these reasons I'd say it's worth the "double work" of creating and maintaining both integration and unit tests, with your DAL mocked or stubbed in the latter.

What's the best strategy for unit-testing database-driven applications?

I work with a lot of web applications that are driven by databases of varying complexity on the backend. Typically, there's an ORM layer separate from the business and presentation logic. This makes unit-testing the business logic fairly straightforward; things can be implemented in discrete modules and any data needed for the test can be faked through object mocking.
But testing the ORM and database itself has always been fraught with problems and compromises.
Over the years, I have tried a few strategies, none of which completely satisfied me.
Load a test database with known data. Run tests against the ORM and confirm that the right data comes back. The disadvantage here is that your test DB has to keep up with any schema changes in the application database, and might get out of sync. It also relies on artificial data, and may not expose bugs that occur due to stupid user input. Finally, if the test database is small, it won't reveal inefficiencies like a missing index. (OK, that last one isn't really what unit testing should be used for, but it doesn't hurt.)
Load a copy of the production database and test against that. The problem here is that you may have no idea what's in the production DB at any given time; your tests may need to be rewritten if data changes over time.
Some people have pointed out that both of these strategies rely on specific data, and a unit test should test only functionality. To that end, I've seen suggested:
Use a mock database server, and check only that the ORM is sending the correct queries in response to a given method call.
What strategies have you used for testing database-driven applications, if any? What has worked the best for you?
I've actually used your first approach with quite some success, but in slightly different ways that I think would solve some of your problems:
Keep the entire schema and the scripts for creating it in source control, so that anyone can create the current database schema after a checkout. In addition, keep sample data in data files that get loaded as part of the build process. As you discover data that causes errors, add it to your sample data to check that those errors don't re-emerge.
Use a continuous integration server to build the database schema, load the sample data, and run the tests. This is how we keep our test database in sync (rebuilding it at every test run). Though this requires that the CI server have access to and ownership of its own dedicated database instance, having our DB schema built three times a day has dramatically helped find errors that probably would not have been found until just before delivery (if not later). I can't say that I rebuild the schema before every commit. Does anybody? With this approach you won't have to (well, maybe we should, but it's not a big deal if someone forgets).
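As one possible shape for that rebuild step (Flyway is used here purely as an example of a migration tool and is not part of the original answer; connection details are made up):

import org.flywaydb.core.Flyway;

public class RebuildCiDatabase {
    public static void main(String[] args) {
        // Versioned migration scripts live in source control (classpath:db/migration by default)
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://ci-db:5432/app_test", "ci", "secret")
                .cleanDisabled(false) // clean() is disabled by default in recent Flyway versions
                .load();

        flyway.clean();    // drop everything left over from the previous run
        flyway.migrate();  // rebuild the current schema from the checked-in scripts
        // sample data can then be loaded the same way, or by a dedicated seed step
    }
}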
For my group, user input is done at the application level (not db) so this is tested via standard unit tests.
Loading Production Database Copy:
This was the approach used at my last job. It was a huge pain because of a couple of issues:
The copy would get out of date from the production version
Changes would be made to the copy's schema and wouldn't get propagated to the production systems. At this point we'd have diverging schemas. Not fun.
Mocking Database Server:
We also do this at my current job. After every commit we execute unit tests against the application code with mock DB accessors injected. Then, three times a day, we execute the full DB build described above. I definitely recommend both approaches.
I'm always running tests against an in-memory DB (HSQLDB or Derby) for these reasons:
It makes you think about which data to keep in your test DB and why. Just hauling your production DB into a test system translates to "I have no idea what I'm doing or why, and if something breaks, it wasn't me!!" ;)
It makes sure the database can be recreated with little effort in a new place (for example when we need to replicate a bug from production)
It helps enormously with the quality of the DDL files.
The in-memory DB is loaded with fresh data once the tests start, and after most tests I invoke ROLLBACK to keep it stable. ALWAYS keep the data in the test DB stable! If the data changes all the time, you can't test.
The data is loaded from SQL, a template DB, or a dump/backup. I prefer dumps if they are in a readable format, because then I can put them in VCS. If that doesn't work, I use a CSV file or XML. If I have to load enormous amounts of data ... I don't. You never have to load enormous amounts of data :) Not for unit tests. Performance tests are another issue, and different rules apply.
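A rough sketch of that setup with HSQLDB and JUnit (the dump file name and location are assumptions):

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;

class InMemoryDbTestBase {
    static Connection connection;

    @BeforeAll
    static void loadFreshData() throws Exception {
        // Named in-memory HSQLDB instance; it lives for the duration of the test run
        connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
        // The dump/DDL file is kept in version control so it can be reviewed and diffed
        for (String sql : Files.readString(Path.of("src/test/resources/test-data.sql")).split(";")) {
            if (!sql.isBlank()) {
                try (Statement stmt = connection.createStatement()) {
                    stmt.execute(sql);
                }
            }
        }
        connection.setAutoCommit(false);
    }

    @AfterEach
    void keepDataStable() throws Exception {
        // ROLLBACK after each test so every test sees the same stable data set
        connection.rollback();
    }
}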
I have been asking this question for a long time, but I think there is no silver bullet for it.
What I currently do is mock the DAO objects and keep an in-memory representation of a good collection of objects that represent interesting cases of data that could live in the database.
The main problem I see with that approach is that you're covering only the code that interacts with your DAO layer, but never testing the DAO itself, and in my experience a lot of errors happen in that layer as well. I also keep a few unit tests that run against the database (for the sake of using TDD or quick testing locally), but those tests are never run on my continuous integration server, since we don't keep a database for that purpose and I think tests that run on a CI server should be self-contained.
Another approach I find very interesting, but not always worth it since it is a little time-consuming, is to create the same schema you use in production on an embedded database that runs within the unit tests.
Even though there's no question this approach improves your coverage, there are a few drawbacks, since you have to be as close as possible to ANSI SQL to make it work both with your current DBMS and with the embedded replacement.
No matter what you think is more relevant for your code, there are a few projects out there that may make it easier, like DbUnit.
Even if there are tools that allow you to mock your database in one way or another (e.g. jOOQ's MockConnection, which can be seen in this answer - disclaimer: I work for jOOQ's vendor), I would advise against mocking larger databases with complex queries.
Even if you just want to integration-test your ORM, beware that an ORM issues a very complex series of queries to your database, which may vary in:
syntax
complexity
order (!)
Mocking all of that to produce sensible dummy data is quite hard, unless you're actually building a little database inside your mock which interprets the transmitted SQL statements. Having said that, use a well-known integration-test database that you can easily reset with well-known data, and run your integration tests against it.
I use the first approach (running the code against a test database). The only substantive issue I see you raising with it is the possibility of schemas getting out of sync, which I deal with by keeping a version number in my database and making all schema changes via a script which applies the changes for each version increment.
I also make all changes (including to the database schema) against my test environment first, so it ends up being the other way around: after all tests pass, apply the schema updates to the production host. I also keep a separate pair of testing and application databases on my development system so that I can verify there that the DB upgrade works properly before touching the real production box(es).
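A minimal sketch of that versioning scheme (table and file names are assumptions): read the current version from the database and apply each numbered upgrade script in order.

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class SchemaUpgrader {

    // Bring the database from whatever version it is at up to targetVersion,
    // applying db/upgrade-<n>.sql for each increment and recording the new version.
    public static void upgradeTo(Connection conn, int targetVersion) throws Exception {
        int current;
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT version FROM schema_version")) {
            rs.next();
            current = rs.getInt(1);
        }

        for (int v = current + 1; v <= targetVersion; v++) {
            // Assumes the driver accepts the whole script in one call; split on ';' if not
            String sql = Files.readString(Path.of("db/upgrade-" + v + ".sql"));
            try (Statement stmt = conn.createStatement()) {
                stmt.execute(sql);
                stmt.executeUpdate("UPDATE schema_version SET version = " + v);
            }
        }
    }
}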
I'm using the first approach, but with a few differences that address the problems you mentioned.
Everything needed to run tests for the DAOs is in source control. This includes the schema and the scripts to create the DB (Docker is very good for this). If an embedded DB can be used, I use it for speed.
The important difference from the other described approaches is that the data required for a test is not loaded from SQL scripts or XML files. Everything (except some dictionary data that is effectively constant) is created by the application using utility functions/classes.
The main purpose is to make the data used by a test:
very close to the test,
explicit (using SQL files for data makes it very hard to see which piece of data is used by which test), and
isolated from unrelated changes.
In practice this means that these utilities let you declaratively specify, in the test itself, only the things essential for that test, and omit everything irrelevant.
To give some idea of what this means in practice, consider the test for some DAO which works with Comments to Posts written by Authors. In order to test CRUD operations for such a DAO, some data must be created in the DB. The test would look like this:
@Test
public void savedCommentCanBeRead() {
    // Builder is needed to declaratively specify the entity with all attributes relevant
    // for this specific test; missing attributes are generated with reasonable values.
    // The factory's responsibility is to create the entity (and all entities it requires,
    // in our example the Author) in the DB.
    Post post = factory.create(PostBuilder.post());
    Comment comment = CommentBuilder.comment().forPost(post).build();

    sut.save(comment);

    Comment savedComment = sut.get(comment.getId());
    // this checks fields that are directly stored
    assertThat(savedComment, fieldwiseEqualTo(comment));
    // if there are fields that are generated during save, check them separately
    assertThat(savedComment.getGeneratedField(), equalTo(expectedValue));
}
This has several advantages over SQL scripts or XML files with test data:
Maintaining the code is much easier (adding a mandatory column, for example, to some entity that is referenced in many tests, like Author, does not require changing lots of files/records - only a change in the builder and/or factory).
The data required by a specific test is described in the test itself, not in some other file. This proximity is very important for test comprehensibility.
Rollback vs Commit
I find it more convenient for tests to commit when they are executed. Firstly, some effects (for example DEFERRED CONSTRAINTS) cannot be checked if a commit never happens. Secondly, when a test fails the data can be examined in the DB, as it has not been reverted by a rollback.
Of course this has the downside that a test may produce broken data, which will lead to failures in other tests. To deal with this I try to isolate the tests. In the example above, every test may create a new Author, and all other entities are created related to it, so collisions are rare. For the remaining invariants that can potentially be broken but cannot be expressed as DB-level constraints, I use programmatic checks for erroneous conditions that can be run after every single test (they run in CI but are usually switched off locally for performance reasons).
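For illustration, such a post-test check might look something like this (the invariant and schema are made up; subclasses are assumed to provide the connection):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.AfterEach;
import static org.junit.jupiter.api.Assertions.assertEquals;

abstract class DbInvariantCheckingTest {
    protected Connection connection; // provided by the concrete test class

    // Run after every single test: catch broken data that cannot be expressed
    // as a DB-level constraint, before it causes mysterious failures elsewhere.
    @AfterEach
    void dataInvariantsStillHold() throws Exception {
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT COUNT(*) FROM comments c " +
                     "LEFT JOIN posts p ON c.post_id = p.id WHERE p.id IS NULL")) {
            rs.next();
            assertEquals(0, rs.getInt(1), "orphaned comments found after test");
        }
    }
}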
For a JDBC-based project (directly or indirectly, e.g. JPA, EJB, ...), you can mock not the entire database (in that case it would be better to use a test DB on a real RDBMS), but only the JDBC level.
The advantage is the abstraction that comes with this approach: JDBC data (result sets, update counts, warnings, ...) are the same whatever the backend is - your production DB, a test DB, or just some mock data provided for each test case.
With the JDBC connection mocked for each case, there is no need to manage a test DB (cleanup, running only one test at a time, reloading fixtures, ...). Every mock connection is isolated, and there is no need to clean up. Only the minimal required fixtures are provided in each test case to mock the JDBC exchange, which helps avoid the complexity of managing a whole test DB.
Acolyte is my framework which includes a JDBC driver and utilities for this kind of mocking: http://acolyte.eu.org.

Resources