Determine which Unit Tests to Run Based on Diffs - c

Does anyone know of a tool that can help determine which unit tests should be run based on the diffs from a commit?
For example, assume a developer commits something that only changes one line of code. Now, assume that I have 1000 unit tests, with code coverage data for each unit test (or maybe just for each test suite). It is unlikely that the developer's one-line change will need to run all 1000 test cases. Instead, maybe only a few of those unit tests actually come into contact with this one-line change. Is there a tool out there that can help determine which test cases are relevant to a developer's code changes?
Thanks!

As far as I understand, the key purpose of unit testing is to cover the entire code base. When you make a small change to one file, all tests have to be executed to make sure your micro-change doesn't break the product. If you break this principle, there is little point in your unit testing.
P.S. I would suggest splitting the project into independent modules/services and creating new "integration unit tests" that validate the interfaces between them. But within a single module/service, all unit tests should be executed as "all or nothing".

You could probably use make or similar tools to do this by generating a results file for each test, and making the results file dependent on the source files that it uses (as well as the unit test code).

Our family of Test Coverage tools can tell you which tests exercise which parts of the code, which is the basis for your answer.
They can also tell you which tests need to be re-run, when you re-instrument the code base. In effect, it computes a diff on source files that it has already instrumented, rather than using commit diffs, but it achieves the effect you are looking for, IMHO.
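To make the coverage-based selection concrete, here is a minimal sketch of the logic, independent of any particular tool: given a map of which source files each test touches and the list of files changed in a commit, emit only the tests whose coverage sets intersect the changed files. The coverage_map.txt format ("test_name: file1.c file2.c ...") and the select_tests name are made up purely for illustration.

    /*
     * select_tests.c - hypothetical sketch of coverage-based test selection.
     * Reads a coverage map where each line is "test_name: file1.c file2.c ..."
     * and prints the names of tests that touch any of the changed files
     * given on the command line.
     */
    #include <stdio.h>
    #include <string.h>

    #define MAX_LINE 4096

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s coverage_map.txt changed_file...\n", argv[0]);
            return 1;
        }

        FILE *map = fopen(argv[1], "r");
        if (!map) { perror("fopen"); return 1; }

        char line[MAX_LINE];
        while (fgets(line, sizeof line, map)) {
            char *colon = strchr(line, ':');
            if (!colon)
                continue;
            *colon = '\0';                 /* line now holds the test name  */
            char *files = colon + 1;       /* rest is the covered file list */

            /* Run the test if any changed file appears in its coverage set. */
            for (int i = 2; i < argc; i++) {
                if (strstr(files, argv[i])) {
                    printf("%s\n", line);  /* emit the test to be re-run    */
                    break;
                }
            }
        }
        fclose(map);
        return 0;
    }

It could be invoked as ./select_tests coverage_map.txt $(git diff --name-only HEAD~1), feeding the output to whatever actually runs the tests.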

You might try running them with 'prove', which has a 'fresh' option based on file modification times. Check the prove manpage for details.
Disclaimer: I'm new to C unit testing and haven't been using prove, but have read about this option in my research.

Related

Can anyone report experiences with HWUT for unit testing?

Can anyone report experiences with that HWUT tool (http://hwut.sourceforge.net)?
Has anyone used it over a longer period of time?
What about robustness?
Are the features like test generation, state machine walking, and Makefile generation useful?
What about execution speed? Any experience in larger projects?
How well do the code coverage measurements perform?
I have been using HWUT for several years for software unit testing on larger automotive infotainment projects. It is easy to use, performance is great, and it does indeed cover state machine walkers, test generation, and Makefile generation, too. Code coverage works well. What I really like about HWUT are the state machine walker and test generation, since they allow you to create a large number of test cases in a very short time. I highly recommend HWUT.
Much faster than commercial tools, which can save a lot of time for larger projects!
I really like the idea of testing by comparing program output, which makes it easy to start writing tests, and also works well when scaling the project later on. Combined with makefile generation it is really easy to set up a test.
I used HWUT to test multiple software components. It is really straightforward, and as a software developer you don't have to click around in GUIs. You can just create a new source code file (*.c or whatever) and your test is nearly done.
This is very handy when using version control. You just have to check in the "test.c" file, the Makefile, and the results of the test - that's it, no need to check in binary files.
I like using the generators which HWUT offers. With them it is easily possible to create tens of thousands (or even more) of test cases, which is very handy if you want to test the boundary conditions of, e.g., a conversion function.

Tests with CUNIT - walkthrough analysis / code coverage

I want to test some code with CUnit. Does anyone know if it is possible to do a walkthrough (code coverage) analysis?
I want to have something that says: you've tested 80% of your function.
It must be ensured that 100% coverage is reached with the tests.
There are a few tools that will help - the basic free one is gcov, which will do what you need but will involve a certain amount of setup, etc.
There are other (commercial) ones, but what's available changes, including whether non-commercial free/low-cost licences are offered. Having said that, http://c2.com/cgi/wiki?CodeCoverageTools might be a worthwhile starting point if you need more than gcov.
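If you go the gcov route with CUnit, the rough workflow is: compile both the code under test and the tests with coverage instrumentation, run the test executable, then ask gcov for the percentage of lines executed. A minimal sketch, where my_abs and my_abs.c are made-up names standing in for your code under test:

    /* test_my_abs.c - minimal CUnit example; compile with coverage enabled:
     *   gcc --coverage -o test_my_abs test_my_abs.c my_abs.c -lcunit
     *   ./test_my_abs && gcov my_abs.c   # prints "Lines executed: ...%"
     */
    #include <CUnit/Basic.h>

    int my_abs(int x);   /* function under test (assumed to live in my_abs.c) */

    static void test_my_abs_positive(void) { CU_ASSERT_EQUAL(my_abs(5), 5); }
    static void test_my_abs_negative(void) { CU_ASSERT_EQUAL(my_abs(-5), 5); }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        CU_pSuite suite = CU_add_suite("my_abs_suite", NULL, NULL);
        if (!suite) { CU_cleanup_registry(); return CU_get_error(); }

        CU_add_test(suite, "positive input", test_my_abs_positive);
        CU_add_test(suite, "negative input", test_my_abs_negative);

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        CU_cleanup_registry();
        return CU_get_error();
    }

The "Lines executed" percentage that gcov reports per file is the 80%/100% figure the question asks about.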

How can I run individual tests using Check?

The Check docs explain how to selectively run test suites or test cases, but not how to selectively run individual tests. My test cases can contain dozens of tests, so debugging with printf statements just produces a mess.
I need to be able to run specific tests, preferably by name, e.g.:
CK_RUN_TEST=check_readbuf_allocation make check
The hypothetical CK_RUN_TEST environment variable would cause only the test function check_readbuf_allocation to be executed.
Each individual test is completely separate from all other tests by design, so I see no reason why this shouldn't be possible.
Is there any way to run individual tests short of creating a whole new executable with all the Check boilerplate code for every test function?
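One workaround (short of a separate executable per test) is to register each test function in its own TCase named after the function; Check's documented CK_RUN_SUITE / CK_RUN_CASE environment variables can then select a single test by that name. A sketch, where check_readbuf_resize is just a made-up second test:

    /* check_readbuf.c - give each test its own TCase so CK_RUN_CASE can pick it:
     *   CK_RUN_CASE=check_readbuf_allocation ./check_readbuf
     */
    #include <check.h>
    #include <stdlib.h>

    START_TEST(check_readbuf_allocation)
    {
        ck_assert(1);   /* real assertions go here */
    }
    END_TEST

    START_TEST(check_readbuf_resize)
    {
        ck_assert(1);
    }
    END_TEST

    /* Register one test function in a TCase of the same name. */
    #define ADD_AS_OWN_CASE(suite, fn)              \
        do {                                        \
            TCase *tc = tcase_create(#fn);          \
            tcase_add_test(tc, fn);                 \
            suite_add_tcase(suite, tc);             \
        } while (0)

    int main(void)
    {
        Suite *s = suite_create("readbuf");
        ADD_AS_OWN_CASE(s, check_readbuf_allocation);
        ADD_AS_OWN_CASE(s, check_readbuf_resize);

        SRunner *sr = srunner_create(s);
        srunner_run_all(sr, CK_NORMAL);
        int failed = srunner_ntests_failed(sr);
        srunner_free(sr);
        return failed == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }

Running CK_RUN_CASE=check_readbuf_allocation ./check_readbuf then executes only that one test, which is close to the hypothetical CK_RUN_TEST behaviour described above.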

Test-Driven Development in CakePHP

I'm using CakePHP 2.3 and would like to know how to properly go about building a CakePHP website using test-driven development (TDD). I've read the official documentation on testing, read Mark Story's Testing CakePHP Controllers the hard way, and watched Mark Story's Win at life with Unit testing (PDF of slides) but am still confused. I should note that I've never been very good about writing tests in any language and don't have a lot of experience with it, and that is likely contributing to my confusion.
I'd like to see a step by step walkthrough with code examples on how to build a CakePHP website using TDD. There are articles on TDD, there are articles on testing with CakePHP, but I have yet to find an in-depth article that is about both. I want something that holds my hand through the whole process. I realize this is somewhat of a tall order because, unless my Google-fu is failing me, I'm pretty sure such an article hasn't yet been published, so I'm basically asking you to write an article (or a long Stack Overflow answer), which takes time. Because this is a tall order, I plan to start a bounty on this question worth a lot of points once I'm able to in order to better reward somebody for their efforts, should anybody be willing to do this. I thank you for your time.
TDD is a bit of a fallacy in that it's essentially just writing tests before you code, to ensure that you actually write tests.
All you need to do is create your tests for a thing before you go create it. This requires thought and analysis of your use cases in order to write tests.
So if you want someone to view data, you'll want to write a test for a controller. It'll probably be something like testViewSingleItem(), in which you assertContains() the data you expect to see.
Once this is written, it should fail, then you go write your controller method in order to make the test pass.
That's it. Just rinse and repeat for each use case. This is Unit Testing.
Other tests, such as functional tests and integration tests, just test different aspects of your application. It's up to you to think about and decide which of these tests are useful to your project.
Most of the time unit testing is the way to go, as you can test individual parts of the application - usually the parts that impact the functionality the most, the "critical path".
This is an incredibly useful TDD tutorial. http://net.tutsplus.com/sessions/test-driven-php/

How to Test Web Code?

Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state?
Specifically, I want to write tests for code that retrieves records from the database, but the answers will depend on the data in the database (which may change over time).
Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set?
I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience.
Are there good articles out there that discuss this issue of web-based development in general?
I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic.
You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data which represents your test data, and thus each test will no longer depend on the database and some existing state. This way, each test is self contained and will not break during further database usage.
Update: A quick google search showed a DB unit extension for PHPUnit.
If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc.
I guess it depends what database you're using, but Red Gate (www.red-gate.com) make a tool called SQL Data Generator. This can be configured to fill your database with sensible looking test data. You can also tell it to always use the same seed in its random number generator so your 'random' data is the same every time.
You can then write your unit tests to make use of this reliable, repeatable data.
As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them!
We use an in-memory database (hsql: http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing db, with the added bonus that they run as quick as lightning.
I have the exact same problem with my work, and I find that the best idea is to have a PHP script to re-create the database and then a separate script where I throw crazy data at it to see if it breaks.
I have never used any unit testing or suchlike, so I can't say whether it works or not, sorry.
If you can setup the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with.
Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database.
It's definitely worth setting up either a test version of the database, or making your test scripts populate the database with known data as part of the tests.
You could try http://selenium.openqa.org/. It is more for GUI testing than a data-layer testing application, but it does record your actions, which can then be played back to automate tests across different platforms.
Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP):
I have a method that runs before all of the Unit Tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
I would propose to use three databases. One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed).
A way to test database code is:
1. Insert a few rows (using SQL) to initialize state
2. Run the function that you want to test
3. Compare expected with actual results. Here you could use your normal unit testing framework
4. Clean up the rows that were changed (so the next run won't see the previous run)
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table; see the sketch below.
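Purely for illustration, here are the same four steps sketched in C against an in-memory SQLite database (the users table and the count_users function are invented; the shape is the same in PHP or any other stack):

    /* db_test_sketch.c - illustrates insert / run / compare / clean up.
     * Build (assuming SQLite is installed):  gcc db_test_sketch.c -lsqlite3
     */
    #include <assert.h>
    #include <stdio.h>
    #include <sqlite3.h>

    /* Hypothetical function under test: counts rows in the users table. */
    static int count_users(sqlite3 *db)
    {
        sqlite3_stmt *stmt;
        int count = -1;
        if (sqlite3_prepare_v2(db, "SELECT COUNT(*) FROM users", -1, &stmt, NULL) == SQLITE_OK
            && sqlite3_step(stmt) == SQLITE_ROW)
            count = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);
        return count;
    }

    int main(void)
    {
        sqlite3 *db;
        assert(sqlite3_open(":memory:", &db) == SQLITE_OK);   /* testing db only */

        /* 1. Insert a few rows to initialize state. */
        sqlite3_exec(db, "CREATE TABLE users (id INTEGER, name TEXT);", NULL, NULL, NULL);
        sqlite3_exec(db, "INSERT INTO users VALUES (1, 'alice'), (2, 'bob');", NULL, NULL, NULL);

        /* 2. Run the function under test, 3. compare expected with actual. */
        assert(count_users(db) == 2);

        /* 4. Clean up so the next run starts from a known state. */
        sqlite3_exec(db, "DELETE FROM users;", NULL, NULL, NULL);

        sqlite3_close(db);
        printf("ok\n");
        return 0;
    }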
In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use some CRUD API that is used in the product, to create data as similar to production as possible...

Resources