I have an API that has a storage layer. It only handles the database interactions and performs the CRUD operations. Now I want to test these functions.
In my path API/storage/, I have different packages, each with functions to interact with a different table. Tables A, B, and C are all in the same database.
My file hierarchy goes like:
--api
  --storage
    --A
      --A.go
      --A_test.go
    --B
    --C
  --server
    --A
  --testData
    --A.sql
    --B.sql
In this way I want to test the whole storage layer with the command:
go test ./...
The approach I was following is that I have a function RefreshTables which first truncates the table, then fills it with fixed test data that I keep in the testData folder. For truncating I do:
db.Exec("SET FOREIGN_KEY_CHECKS = 0;")
db.Exec("truncate " + table)
db.Exec("SET FOREIGN_KEY_CHECKS = 1;")
As go test runs the test functions of different packages in parallel by default, multiple SQL connections get created, and the TRUNCATE runs on one connection while the SET FOREIGN_KEY_CHECKS statements run on other connections picked at random from the connection pool.
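(For illustration: SET FOREIGN_KEY_CHECKS is session-scoped in MySQL, so all three statements have to run on the same connection for the setting to affect the TRUNCATE. A minimal sketch of pinning them to one pooled connection with database/sql's *sql.Conn, available since Go 1.9; names are illustrative:)

import (
    "context"
    "database/sql"
)

func truncateTable(ctx context.Context, db *sql.DB, table string) error {
    conn, err := db.Conn(ctx) // reserve a single connection from the pool
    if err != nil {
        return err
    }
    defer conn.Close() // return it to the pool when done

    if _, err := conn.ExecContext(ctx, "SET FOREIGN_KEY_CHECKS = 0"); err != nil {
        return err
    }
    if _, err := conn.ExecContext(ctx, "TRUNCATE "+table); err != nil {
        return err
    }
    _, err = conn.ExecContext(ctx, "SET FOREIGN_KEY_CHECKS = 1")
    return err
}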
My tests fail when run together, but they all pass when run alone or package by package.
If I do :
go test ./... -p 1
which runs the test binaries one package at a time, all the tests pass.
I have also tried running the truncate inside a transaction, and locking the table before truncating.
I checked this article (https://medium.com/kongkow-it-medan/parallel-database-integration-test-on-go-application-8706b150ee2e), and the author suggests creating a separate database in every test function and dropping it when the function ends. I think this would be very time-consuming.
It would be really helpful if someone could suggest the best method for testing database interactions in Golang.
I don't have much experience with integration testing, and I'm not sure whether mocking the database driver would work for you, but if so: I've been using the go-sqlmock package for mocking SQL database results in unit tests, and it works like a charm. You could use it and literally have a separate "database engine" for each of your tests. It's a bit time-consuming, since you have to tell the mock exactly which queries to expect and what to return, but trust me, it's a good time investment.
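For example, a minimal go-sqlmock test looks roughly like this (the query and columns here are invented for illustration):

import (
    "testing"

    sqlmock "github.com/DATA-DOG/go-sqlmock"
)

func TestGetUser(t *testing.T) {
    db, mock, err := sqlmock.New() // a fresh fake "database engine" for this test
    if err != nil {
        t.Fatal(err)
    }
    defer db.Close()

    // Tell the mock which query to expect and what rows to return.
    rows := sqlmock.NewRows([]string{"id", "name"}).AddRow(1, "alice")
    mock.ExpectQuery("SELECT id, name FROM users").WillReturnRows(rows)

    var id int
    var name string
    if err := db.QueryRow("SELECT id, name FROM users WHERE id = 1").Scan(&id, &name); err != nil {
        t.Fatal(err)
    }
    if err := mock.ExpectationsWereMet(); err != nil {
        t.Errorf("unmet expectations: %v", err)
    }
}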
As I said before, I'm not sure this strategy suits your case: if you are interested in how your application behaves in a "real database scenario", like verifying that records are actually saved, then mocking the database results is kind of useless.
Related
I have a test method, using NUnit and Selenium, which opens a browser on our website (which is on the production server), registers a user, and verifies that the registration is successful.
(I know ideally the system tests should run on a separate Test Server rather than production but here they want to test whether the prod system works!)
The problem is how to roll back the database changes made by this test. For example, the state of my database should be the same before and after running the test.
I thought of 3 possible options but none is practical:
1) Writing SQL queries to delete from the actual tables before starting the test (SetUp) and after running the test (TearDown). This is my current approach; however, the problem is that I have to know exactly which tables were involved in each system test, and this can quickly become very complex, as a single test may impact more than one table.
2) Writing transactional code. This is not an option, since the changes are made by the website, not by the test code.
3) Taking a snapshot of the existing database (SQL Server 2008 R2) before each test starts, then restoring the snapshot after the test finishes. This idea would be fine if we could run the tests only on a staging environment, but the tests have to run on production and may take around 5 minutes in total, so restoring the snapshot would be a bad idea: any real changes made during those 5 minutes would be lost!
Please advise on the best possible option to resolve this problem. Perhaps there is a 4th option?
Thanks,
Option 4: never, ever run tests on a production server. It's a recipe for disaster (see the thousands of stories on the internet, funny if you are not the protagonist, about how this could go horribly wrong). The right thing to do is to configure the test and production servers identically.
There is a fifth option: if the website receives a registration for the user "WeAreTestingOutSite", it does everything except actually adding the user to the database.
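Sketched in Go for illustration (the real site could be any stack; the guard on the well-known test account is the only essential part):

const testUser = "WeAreTestingOutSite"

// registerUser performs the full registration flow, but skips
// persistence for the well-known test account.
func registerUser(name string) error {
    // ... validation, e-mails, session setup, etc. ...
    if name == testUser {
        return nil // report success, but write nothing to the database
    }
    return saveUser(name) // hypothetical persistence helper
}

func saveUser(name string) error {
    // the real INSERT would go here
    return nil
}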
To be honest, as was said, there are better ways to check that a production site is operational than running bots that register a user.
I would recommend going with the 4th option: introduce a new feature that allows deleting a user. Probably not for the user himself/herself, but for the system admins (back-office users). That way you can test that a user can be registered, and delete the user afterwards, without caring much about SQL scripts.
I want to create tests for all my CRUD operations. But how do I set up a separate database for them? Is that the best way to go?
This is another, related question: should I run the tests on the production server too? Sometimes things go wrong in different environments, so I guess I should. But then I need the mentioned separate database, right?
Any advice?
Running any kind of tests on a production server is generally a bad idea (unless it's just the production hardware that hasn't been commissioned yet).
A Unit Test does not hit the database (or any other external system). So, in order to create a unit test you need to remove the dependency on the database.
What you are calling a 'unit test' is probably an integration test. Any test that utilises an external system (such as a database, file system etc.) is an integration test.
Two common solutions to your problem are:
1) At the start of your test, restore a database backup containing known data to a separate test database, then perform your tests against it.
2) Using a 'fixed' known test database, start a transaction at the beginning of each test, perform the test, and then roll back the transaction to leave the database in the same known state.
(No. 1 is often preferable, as the database in No. 2 can become 'polluted'.)
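A sketch of option 2 in Go (assuming db is a *sql.DB opened against the fixed test database; the table and values are invented):

import (
    "database/sql"
    "testing"
)

func testWithRollback(t *testing.T, db *sql.DB) {
    tx, err := db.Begin()
    if err != nil {
        t.Fatal(err)
    }
    // Always roll back, even on success, so the database
    // returns to its known state after every test.
    defer tx.Rollback()

    if _, err := tx.Exec("INSERT INTO users (name) VALUES ('alice')"); err != nil {
        t.Fatal(err)
    }
    var n int
    if err := tx.QueryRow("SELECT COUNT(*) FROM users WHERE name = 'alice'").Scan(&n); err != nil {
        t.Fatal(err)
    }
    if n != 1 {
        t.Fatalf("expected 1 row, got %d", n)
    }
}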
I agree with Mitch. I would add that you should decide whether you want to do an integration test or a unit test (or both, but not in the same test). If, in fact, you do want to do a unit test, realize:
Your code has a "dependency" on an external database.
When unit testing you'll have to find a way to "fake" the database. You want to test a "unit" which means a single thing, not two or more things (i.e. your CRUD code AND your connection to a database AND the database itself).
Typically you'll need to refactor your code using something like dependency injection so that when unit testing you can fake things that your code depends on.
Unit testing isn't just testing your code. That's actually the easy part. The harder part is making your code testable.
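For instance, in Go that refactoring might look like this (the interface and all names are invented for illustration):

import "testing"

// UserStore is the dependency the code needs, instead of a concrete database.
type UserStore interface {
    GetName(id int) (string, error)
}

// Greeter is the unit under test; it depends only on the interface.
type Greeter struct{ store UserStore }

func (g Greeter) Greet(id int) (string, error) {
    name, err := g.store.GetName(id)
    if err != nil {
        return "", err
    }
    return "Hello, " + name, nil
}

// fakeStore stands in for the database in unit tests.
type fakeStore map[int]string

func (f fakeStore) GetName(id int) (string, error) { return f[id], nil }

func TestGreet(t *testing.T) {
    g := Greeter{store: fakeStore{1: "alice"}}
    got, err := g.Greet(1)
    if err != nil || got != "Hello, alice" {
        t.Fatalf("got %q, err %v", got, err)
    }
}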
I recommend going to http://artofunittesting.com/ and watching the free videos on the right side under the heading "Unit Testing Videos". Forget the fact that he's working in .NET; it's the principles that matter.
Then watch the GoogleTechTalks by Misko Hevery where he explains why you want to do dependency injection.
Design Tech Talk Series Presents: OO Design for Testability
The Clean Code Talks -- Unit Testing
(He has more too. There is a series of six GoogleTechTalks.)
I had a similar problem today and I think I've found a good solution.
Make a copy of your database (creating a new empty database works as well).
Edit your config_test.yml to change the database name.
A sample of my test configuration (might be different depending on whether you have multiple DBs, etc.):
doctrine:
    dbal:
        dbname: test_db
Update your database to reflect the entities in your application by calling php app/console doctrine:schema:update --force --env=test (required if you just created a new DB, and again every time you change your application model).
Your application should now use the test database during unit tests. NB! Be sure to make a backup of your database before messing around with the live database.
However, as clearly mentioned before, these are no longer unit tests but integration tests.
Is there anybody out there writing unit tests for their TSQL stored procedures, triggers, functions, etc.?
I've recently started making database restores and installs part of our automated Cruise Control build process. Now I'm thinking about taking it to the next level, where we do the install and then run through a list of stored procedure tests, etc.
I was going to just roll my own using MsBuild Extensions to invoke the tests. However I'm aware of http://www.tsqltest.org/ and http://tsqlunit.sourceforge.net/. I'm also aware that TFS has sql testing.
I just wanted to see what people in the real world are doing and if they have any suggestions.
Thanks
The critical parts:
- Make it automated and integrated with your build/test (so you get a green or red from your build)
- Make it easy to add a new test
- Keep your tests up to date
Advanced:
- Test failure conditions in your code
- Make sure your tests clean up after themselves (TSqlTest's example scripts use #beforeCount and #afterCount variables to validate the clean-up)
Stored procedures. I generally include test queries in comments in the SP header, and record the correct results and query times. This still leaves it as a manual exercise, however.
Functions. Again, put SQL statements in the header with the same info.
Triggers. I avoid them for a number of reasons, one of them being that they are so hard to test and debug for so little benefit compared to putting the same logic in another tier. It's like asking how to test for Referential Integrity.
This is still a manual process, however. But since I think one should intentionally design SQL artifacts to be totally uncoupled (e.g. no SPs calling SPs, same with functions, and another strike against triggers IMHO) it's relatively less complex.
I have used the database testing that is built into Visual Studio 2008 Database Edition on a project here. It works well, but feels more like a third party bolt-on to Visual Studio than a native component. Some of the pains I felt with it are:
Because SQL code lives in the res files and a single code file can include multiple tests, it is not as easy to search for tests based on table/column names.
Because multiple tests live in the same code files, you have some annoying variable name collisions (eg, if you have two tests in a single code file, all of the assertions for those tests have to have unique names; That means your assertion names will probably look like "testname_assertionname", which really shouldn't be necessary).
Refactoring your tests is not easy - for example, if you want to move a test from one code file to another, the easiest way is to create the test from scratch in the new file because there are bits and pieces of the test scattered about the res file and the code file.
All of that said, as I started with - It does work well. Unfortunately, we have not added these tests to our continuous integration server yet, so I can't comment on how easy it is to automate the running of these tests. We are using TFS for CI, and I am assuming that automation of the tests would work very similar to automation of standard unit tests; In other words, it seems like there should be an MSTest command line that would run the tests.
Of course, this is only an option if you are licensed to run Visual Studio 2008 DB Edition (which I understand is now included in the VS 2008 Pro license).
I've done this in Java, using DbUnit.
Basically, anything you do in the database either:
returns a result set
or alters the state of the database.
The state of the database can be described as all the values in all the rows in all the tables in all the schemas of a database; the state of any subset is the state of all the data affected by some test.
So, start with a database filled with enough test data that you can perform your tests; call this the baseline. Extract a snapshot, with DbUnit or the tool of your choice.
Given that your database is at the baseline, any result set is deterministic (as long as your SP is deterministic; less so if it does a "select random();").
Get the baseline result sets of all your SPs and save them as snapshots with DbUnit or whatever tool you're using.
To test operations that don't change state, just check that the result set you get is the one you originally got. To test operations that change the database, check that baseline + operation = expected change. After each test that potentially changes the DB, restore it to the baseline.
Basically, the ability to restore to a baseline makes the testing possible.
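In Go, the non-state-changing half of that might look like the following sketch (restoreBaseline is a hypothetical helper that reloads the fixture data; the expected values stand in for the recorded baseline result set):

import (
    "database/sql"
    "fmt"
    "reflect"
    "testing"
)

// restoreBaseline would truncate the tables and reload the baseline fixtures.
func restoreBaseline(t *testing.T, db *sql.DB) { /* reload fixtures here */ }

func testActiveUsers(t *testing.T, db *sql.DB) {
    restoreBaseline(t, db)

    rows, err := db.Query("SELECT id, name FROM users WHERE active = 1 ORDER BY id")
    if err != nil {
        t.Fatal(err)
    }
    defer rows.Close()

    var got []string
    for rows.Next() {
        var id int
        var name string
        if err := rows.Scan(&id, &name); err != nil {
            t.Fatal(err)
        }
        got = append(got, fmt.Sprintf("%d:%s", id, name))
    }
    if err := rows.Err(); err != nil {
        t.Fatal(err)
    }

    want := []string{"1:alice", "3:carol"} // the baseline snapshot, recorded up front
    if !reflect.DeepEqual(got, want) {
        t.Fatalf("got %v, want %v", got, want)
    }
}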
Have you tried using the red-gate.com API?
They have a bunch of products for comparing things in SQL Server and the API allows virtually the same functionality programmatically.
http://help.red-gate.com/help/SQLDataCompareAPIv5/4/en/GettingStartedAPI.html
In our current database development environment we have automated build processes that check all the SQL code out of SVN, create database scripts, and apply them to the various development/QA databases.
This is all well and good, and is a tremendous improvement over what we did in the past, but we have a problem with re-running scripts. Obviously this isn't a problem with some scripts, like those altering procedures, because you can run them over and over without adversely affecting the system. Right now, to add metadata and run statements like create/alter table, we add code that checks whether the objects exist and, if they do, skips the script.
Our problem is that we really only get one shot to run the script, because once the script has been run the objects exist in the environment and the system won't run the script again. If something needs to change once it's been deployed, we have a difficult process of running update scripts against the update scripts and hoping that everything falls in the correct order and all the PKs line up between the environments (the databases are, shall we say, "special").
Short of dropping the database and starting the process from scratch (the last most current release), does anyone have a more elegant solution to this?
I'm not sure how best to approach the problem in your specific environment, but I'd suggest reading up on Rails' migrations feature for some inspiration on how to get started.
http://wiki.rubyonrails.org/rails/pages/UnderstandingMigrations
We address this - or at least a similar problem to this - as follows:
The schema has a version number - this is represented by a table which has one row per version which, as well as the version number, carries boring things like a date/time stamp for when that version came into existence.
By having the schema create/modify DDL wrapped in code that performs the changes for us.
In the context above, one would build the schema-change code as part of the build process, then run it, and it would apply only those schema changes that haven't already been applied.
In our experience (which is bound not to be representative) in most cases the schema changes are sufficiently small/fast that they can safely be run in a transaction which means that if it fails we get a rollback and the db is "safe" - although one would always recommend taking backups before applying schema updates if practicable.
I evolved this out of nasty, painful experience. It's not a perfect system (or an original idea), but as a result of working this way we have a high degree of confidence that if there are two instances of one of our databases at the same version, then the schemas of those two databases will be the same in almost all respects, and that we can safely bring any DB up to the current schema for that application without ill effects. (That last isn't 100% true, unfortunately; there's always an exception, but it's not too far from the truth!)
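A sketch of that pattern in Go (the table and function names are invented; any language works the same way):

import (
    "database/sql"
    "sort"
)

// applyMigrations runs, in version order, every change script whose
// version is greater than the one recorded in the schema_version table.
func applyMigrations(db *sql.DB, scripts map[int]string) error {
    var current int
    if err := db.QueryRow("SELECT COALESCE(MAX(version), 0) FROM schema_version").Scan(&current); err != nil {
        return err
    }

    versions := make([]int, 0, len(scripts))
    for v := range scripts {
        versions = append(versions, v)
    }
    sort.Ints(versions)

    for _, v := range versions {
        if v <= current {
            continue // this change has already been applied
        }
        if _, err := db.Exec(scripts[v]); err != nil {
            return err
        }
        if _, err := db.Exec("INSERT INTO schema_version (version, applied_at) VALUES (?, NOW())", v); err != nil {
            return err
        }
    }
    return nil
}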
Do you keep your existing data in the database? If not, you may want to look at something similar to what Matt mentioned for .NET called RikMigrations
http://www.rikware.com/RikMigrations.html
I use that on my projects to update my database on the fly, while keeping track of revisions. Also, it makes it very simple to move database schema to different servers, etc.
If you want re-runnability in your scripts, then you can't have them as definitions... what I mean by this is that you need to focus on change scripts rather than "here is my table" scripts.
let's say you have a table Customers:
create table Customers (
id int identity(1,1) primary key,
first_name varchar(255) not null,
last_name varchar(255) not null
)
and later you want to add a status column. Don't modify your original table script; that one has already run (and can have the if(! exists) syntax to prevent it from causing errors when run again).
Instead, have a new script, called add_customer_status.sql
in this script you'll have something like:
alter table Customers
    add status varchar(50) null

update Customers set status = 'Silver' where status is null

alter table Customers
    alter column status varchar(50) not null
Again, you can wrap this in an if(! exists) block to allow re-running, but here we've leveraged the notion that this is a change script, and we adapt the database accordingly. If there is already data in the Customers table, we're still okay, since we add the column as nullable, seed it with data, and then add the not-null constraint.
Both of the migration frameworks mentioned above are good, I've also had excellent experience with MigratorDotNet.
Scott named a couple of other SQL tools that address the problem of change management. But I'm still rolling my own.
I would like to second this question, and add my puzzlement that there is still no free, community-based tool for this problem. Obviously, scripts are not a satisfactory way to maintain a database schema; neither are instances. So why don't we keep the metadata in a separate (and, while we're at it, platform-neutral) format?
That's what I'm doing now. My master database schema is a version-controlled XML file, created initially from a simple web service. A simple JavaScript program compares instances against it, and a simple XSL transform yields the CREATE or ALTER statements. It has limits, like RikMigrations; for instance, it doesn't always sequence interdependent objects correctly. (But guess what: neither does Microsoft's SQL Server Database Publication tool.) Really, it's too simple. I simply didn't include objects (roles, users, etc.) that I wasn't using.
So, my view is that this problem is indeed inadequately addressed, and that sooner or later we'll have to get together and tackle the devilish details.
We went the 'drop and recreate the schema' route. We had some classes in our JUnit test package which parameterized the scripts to create all the objects in the schema for the developer executing the code. This allowed all the developers to share one test database and everyone could simultaneously create/test/drop their test tables without conflicts.
Did it take a long time to run? Yes. At first we used the setup method, which meant the tables were dropped and created for every test, and that took way too long. Then we created a TestSuite that could be run once before all the tests for a class and cleaned up when all of the class's tests were complete. This still meant the DB setup ran many times when we ran our 'AllTests' class, which included all the tests in all our packages. I solved it by adding a semaphore to the OracleTestSuite code: when the first test requested the database to be set up, it would do so, but any subsequent call would just increment a counter. As each tearDown() method was called, the counter was decremented until it reached 0, at which point the OracleTestSuite code would drop everything. One issue this leaves is whether the tests assume that the database is empty. It can be convenient to let database tests know the order in which they run, so they can take advantage of the state of the database, because it can reduce the duplication of DB setup.
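The counting trick, sketched in Go (createSchema and dropSchema are hypothetical stand-ins for the parameterized DDL scripts):

import "sync"

var (
    mu    sync.Mutex
    users int
)

// setupDB builds the schema the first time any test asks for it;
// later callers just increment the counter and reuse it.
func setupDB() {
    mu.Lock()
    defer mu.Unlock()
    if users == 0 {
        createSchema()
    }
    users++
}

// teardownDB drops everything only when the last user is done.
func teardownDB() {
    mu.Lock()
    defer mu.Unlock()
    users--
    if users == 0 {
        dropSchema()
    }
}

func createSchema() { /* run the CREATE scripts here */ }
func dropSchema()   { /* drop all the test objects here */ }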
We used the concept of ObjectMothers to solve a similar problem with creating complex domain objects for testing purposes. Mock objects might be a better answer but we hadn't heard about them at the time. After all this time, I'd recommend creating test helper methods that could create standardized datasets for the typical scenarios. Plus that would help document the important edge cases from a data perspective.
Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state?
Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time).
Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set?
I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience.
Are there good articles out there that discuss this issue of web-based development in general?
I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data representing your test data, so each test no longer depends on the database having some existing state. This way, each test is self-contained and will not break with further database usage.
Update: A quick google search showed a DB unit extension for PHPUnit.
If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc.
I guess it depends which database you're using, but Red Gate (www.red-gate.com) makes a tool called SQL Data Generator. It can be configured to fill your database with sensible-looking test data. You can also tell it to always use the same seed in its random number generator, so your 'random' data is the same every time.
You can then write your unit tests to make use of this reliable, repeatable data.
As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!
We use an in-memory database (HSQLDB: http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing DB, with the added bonus that they run lightning fast.
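A rough Go analogue of the same idea, assuming the mattn/go-sqlite3 driver (any embedded in-memory engine would do; the table is invented):

import (
    "database/sql"
    "testing"

    _ "github.com/mattn/go-sqlite3"
)

// newTestDB opens a fresh, private in-memory database for one test.
func newTestDB(t *testing.T) *sql.DB {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        t.Fatal(err)
    }
    // :memory: is per-connection in SQLite, so keep the pool at one
    // connection or each query may see a different empty database.
    db.SetMaxOpenConns(1)
    if _, err := db.Exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"); err != nil {
        t.Fatal(err)
    }
    return db
}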
I have exactly the same problem with my work, and I find that the best idea is to have a PHP script that re-creates the database, and then a separate script that throws crazy data at it to see if it breaks.
I have never used unit testing or the like, so I cannot say whether it works, sorry.
If you can set up the database with a known quantity of data prior to running the tests, and tear it down at the end, then you'll know what data you are working with.
Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database.
It's definitely worth setting up either a test version of the database - or make your test scripts populate the database with known data as part of the tests.
You could try http://selenium.openqa.org/; it is more of a GUI testing tool than a data-layer testing application, but it does record your actions, which can then be played back to automate tests across different platforms.
Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):
I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
I would propose to use three databases. One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed).
A way to test database code is:
Insert a few rows (using SQL) to initialize state
Run the function that you want to test
Compare expected with actual results. Here you could use your normal unit testing framework
Clean up the rows that were changed (so the next run won't see the previous run)
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table.
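Those four steps as a Go test might look like this sketch (CountOrders is a hypothetical function under test; db is assumed to be a *sql.DB pointing at the testing database):

import (
    "database/sql"
    "testing"
)

// CountOrders is the (hypothetical) production function under test.
func CountOrders(db *sql.DB) (int, error) {
    var n int
    err := db.QueryRow("SELECT COUNT(*) FROM orders").Scan(&n)
    return n, err
}

func testCountOrders(t *testing.T, db *sql.DB) {
    // 1) insert a few rows to initialize state
    if _, err := db.Exec("INSERT INTO orders (id, customer) VALUES (1, 'alice'), (2, 'bob')"); err != nil {
        t.Fatal(err)
    }
    // 4) clean up the changed rows so the next run won't see this one
    defer db.Exec("DELETE FROM orders")

    // 2) run the function under test
    got, err := CountOrders(db)
    if err != nil {
        t.Fatal(err)
    }
    // 3) compare expected with actual results
    if got != 2 {
        t.Fatalf("expected 2 orders, got %d", got)
    }
}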
In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use the same CRUD API that the product uses, to create data as similar to production as possible...