Can anyone report experiences with HWUT for unit testing?

Can anyone report experiences with the HWUT tool (http://hwut.sourceforge.net)?
Has anyone used it over a longer period of time?
What about robustness?
Are the features like test generation, state machine walking, and Makefile generation useful?
What about execution speed? Any experience in larger projects?
How well does its code coverage measurement perform?

I have been using HWUT for several years for software unit testing in larger automotive infotainment projects. It is easy to use, performance is great, and it does indeed cover state machine walkers, test generation, and makefile generation, too. Code coverage works well. What I like most about HWUT are the state machine walker and the test generation, since they allow you to create a large number of test cases in a very short time. I highly recommend HWUT.
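To give a feel for what a state machine walker buys you, here is a hand-rolled sketch of the idea in plain C. The state machine, events, and walk loop are made up for illustration; HWUT's own walker generates the event sequences for you, this loop only shows the principle of driving every transition and logging it for output comparison:

```c
#include <stdio.h>

typedef enum { OFF, STANDBY, PLAYING, N_STATES } State;
typedef enum { POWER, PLAY, STOP, N_EVENTS } Event;

/* Toy transition function of the unit under test. */
static State step(State s, Event e)
{
    switch (s) {
    case OFF:     return (e == POWER) ? STANDBY : OFF;
    case STANDBY: return (e == PLAY)  ? PLAYING : (e == POWER) ? OFF : STANDBY;
    case PLAYING: return (e == STOP)  ? STANDBY : (e == POWER) ? OFF : PLAYING;
    default:      return s;
    }
}

int main(void)
{
    /* Walk every event from every state and log each transition;
     * the output-comparison step then checks the whole reachable behaviour. */
    for (int s = 0; s < N_STATES; ++s)
        for (int e = 0; e < N_EVENTS; ++e)
            printf("state %d --event %d--> state %d\n",
                   s, e, (int)step((State)s, (Event)e));
    return 0;
}
```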

Much faster than commercial tools, which can save a lot of time for larger projects!

I really like the idea of testing by comparing program output, which makes it easy to start writing tests, and also works well when scaling the project later on. Combined with makefile generation it is really easy to set up a test.
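As a minimal sketch of the output-comparison approach (the function and file names are made up, and this is not HWUT-specific code): the test simply prints what it computes, you inspect the first run by hand, and that approved output becomes the reference every later run is diffed against.

```c
/* test-minmax.c -- hypothetical output-comparison test.
 * No assertion macros are needed: the tool diffs stdout
 * against a previously approved reference file.          */
#include <stdio.h>

static int my_min(int a, int b) { return a < b ? a : b; }

int main(void)
{
    printf("min: simple cases;\n");
    printf("min(1, 2)   -> %d\n", my_min(1, 2));
    printf("min(-5, -7) -> %d\n", my_min(-5, -7));
    printf("min(3, 3)   -> %d\n", my_min(3, 3));
    return 0;
}
```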

I used HWUT to test multiple software components. It is really straightforward, and as a software developer you don't have to click around in GUIs. You can just create a new source code file (*.c or whatever) and your test is nearly done.
This is very handy when using version control. You just have to check in the "test.c" file, the Makefile and the results of the test - that's it; no need to check in binary files.
I like using the generators which HWUT offers. With them it is easily possible to create tens of thousands (or even more) of test cases, which is very handy if you want to test the border conditions of, e.g., a conversion function.
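For illustration only, here is the border-value idea in plain C; HWUT's actual generators produce such sweeps (and far larger ones) for you, but the principle is simply to iterate over the interesting inputs and let the output comparison check every case. celsius_to_fahrenheit() is a made-up stand-in for whatever conversion function you really test:

```c
#include <stdio.h>
#include <limits.h>

/* Stand-in for the conversion function under test. */
static long celsius_to_fahrenheit(long c) { return c * 9 / 5 + 32; }

int main(void)
{
    /* Border values we care about; a real generator would combine many more. */
    static const long border[] = { LONG_MIN / 9, -273, -40, -1, 0, 1, 100, LONG_MAX / 9 };
    const size_t n = sizeof(border) / sizeof(border[0]);

    for (size_t i = 0; i < n; ++i)
        printf("convert(%ld) -> %ld\n", border[i], celsius_to_fahrenheit(border[i]));
    return 0;
}
```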

Related

Tests with CUNIT - walkthrough analysis / code coverage

I want to test some code with CUnit. Does anyone know if it is possible to do a walkthrough analysis?
I want to have something that says you've tested 80% of your function.
It must be ensured that 100% coverage is reached with the tests.
There are a few tools that will help - the basic free one is gcov, which will do what you need but will involve a certain amount of setup, etc.
There are other (commercial) ones, but what's available changes over time, including whether there are non-commercial free/low-cost licences. Having said that, http://c2.com/cgi/wiki?CodeCoverageTools might be a worthwhile starting point if you need more than gcov.
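As a rough sketch of the gcov workflow (the file and function names are made up): compile with coverage instrumentation, run the tests, then ask gcov for per-line execution counts.

```c
/* abs_val.c -- toy function whose coverage we want to measure.
 *
 * Typical gcov workflow (commands shown as comments):
 *   gcc --coverage -O0 -o test_abs abs_val.c   # instrument
 *   ./test_abs                                 # run the tests
 *   gcov abs_val.c                             # writes abs_val.c.gcov
 * The .gcov file annotates every line with its execution count;
 * lines marked '#####' were never executed.                        */
#include <stdio.h>

int abs_val(int x)
{
    if (x < 0)
        return -x;     /* only covered if a negative input is tested */
    return x;
}

int main(void)
{
    printf("abs_val(5)  = %d\n", abs_val(5));
    printf("abs_val(-5) = %d\n", abs_val(-5));   /* covers the negative branch */
    return 0;
}
```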

Test-Driven Development in CakePHP

I'm using CakePHP 2.3 and would like to know how to properly go about building a CakePHP website using test-driven development (TDD). I've read the official documentation on testing, read Mark Story's Testing CakePHP Controllers the hard way, and watched Mark Story's Win at life with Unit testing (PDF of slides) but am still confused. I should note that I've never been very good about writing tests in any language and don't have a lot of experience with it, and that is likely contributing to my confusion.
I'd like to see a step by step walkthrough with code examples on how to build a CakePHP website using TDD. There are articles on TDD, there are articles on testing with CakePHP, but I have yet to find an in-depth article that is about both. I want something that holds my hand through the whole process. I realize this is somewhat of a tall order because, unless my Google-fu is failing me, I'm pretty sure such an article hasn't yet been published, so I'm basically asking you to write an article (or a long Stack Overflow answer), which takes time. Because this is a tall order, I plan to start a bounty on this question worth a lot of points once I'm able to in order to better reward somebody for their efforts, should anybody be willing to do this. I thank you for your time.
TDD is a bit of a fallacy in that it's essentially just writing tests before you code to ensure that you are writing tests.
All you need to do is create your tests for a thing before you go create it. This requires thought and analysis of your use cases in order to write tests.
So if you want someone to view data, you'll want to write a test for a controller. It'll probably be something like testViewSingleItem(), and you'll probably want to assertContains() some data that you expect to see.
Once this is written, it should fail, then you go write your controller method in order to make the test pass.
That's it. Just rinse and repeat for each use case. This is Unit Testing.
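The cycle itself is language-agnostic. Here is a minimal sketch in plain C with made-up names, just to show the shape of it; in CakePHP 2.x the same idea goes into a ControllerTestCase using assertContains(), as mentioned above.

```c
/* Step 1: write the test first.  It fails (here: fails to link) because
 * view_single_item() does not exist yet.                                */
#include <assert.h>
#include <string.h>

const char *view_single_item(int id);     /* declared, not yet implemented */

static void test_view_single_item(void)
{
    /* Expect the rendered output to contain the item's name. */
    assert(strstr(view_single_item(1), "Widget") != NULL);
}

/* Step 2: write just enough production code to make the test pass. */
const char *view_single_item(int id)
{
    (void)id;                 /* a real version would look the item up */
    return "<h1>Widget</h1>";
}

int main(void)
{
    test_view_single_item();  /* rinse and repeat for each use case */
    return 0;
}
```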
Other tests, such as functional tests and integration tests, just test different aspects of your application. It's up to you to think about and decide which of these tests are useful to your project.
Most of the time unit testing is the way to go, as you can test the individual parts of the application - usually the parts that impact the functionality the most, the "critical path".
This is an incredibly useful TDD tutorial. http://net.tutsplus.com/sessions/test-driven-php/

Usage of static analysis tools - with Clear Case/Quest

We are in the process of defining our software development process and wanted to get some feedback from the group about this topic.
Our team is spread out - US, Canada and India - and I would like to put into place some simple standard rules that all teams will apply to their code.
We make use of Clear Case/Quest and RAD
I have been looking at PMD, CPP, checkstyle and FindBugs as a start.
My thought is to just put these into Ant and have the developers run them manually. I realize that doing this requires some trust that each developer will actually run them.
The other thought is to add some builders to the IDE which would run a subset of the rules (to keep the build process light) and then run another, heavier set when they check in the code.
Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools along with the unit tests whenever ClearCase/ClearQuest is idle.
I'm wondering if others have done this, whether it was successful, and whether they can share lessons learned.
We have:
ClearCase used with Hudson for any "heavy" static analysis step
Eclipse IDE with the tools you mentioned integrated with a smaller set of rules
Note: we haven't really managed to make replicas work with our different user bases (US-Europe-Hong Kong), and we are using CCRC instead of multi-site.
With ClearCase being mainly used in Europe, the analysis step takes place during the night there (GMT), and it uses snapshot views to make sure it goes as quickly as possible (a dynamic view involves too much network traffic when accessing large files).
I'd use Hudson to run static analysis on SCM changes if your code base is not too large, or on periodic builds if it is.
OK, I can't resist... If your team is spread out, why in the world would you use ClearCase? As someone who has had to use it, when our company switched to Mercurial the team's velocity improved immensely. That multi-site junk is just awful.

Determine which Unit Tests to Run Based on Diffs

Does anyone know of a tool that can help determine which unit tests should be run based on the diffs from a commit?
For example, assume a developer commits something that only changes one line of code. Now, assume that I have 1000 unit tests, with code coverage data for each unit test (or maybe just for each test suite). It is unlikely that the developer's one-line change will need to run all 1000 test cases. Instead, maybe only a few of those unit tests actually come into contact with this one-line change. Is there a tool out there that can help determine which test cases are relevant to a developer's code changes?
Thanks!
As far as I understand, the key purpose of unit testing is to cover the entire code base. When you make a small change to one file, all tests have to be executed to make sure your micro-change doesn't break the product. If you break this principle, there is little point in your unit testing.
P.S. I would suggest splitting the project into independent modules/services and creating new "integration unit tests" which validate the interfaces between them. But inside one module/service, all unit tests should be executed as "all or nothing".
You could probably use make or similar tools to do this by generating a results file for each test, and making the results file dependent on the source files that it uses (as well as the unit test code).
Our family of Test Coverage tools can tell you which tests exercise which parts of the code, which is the basis for your answer.
They can also tell you which tests need to be re-run, when you re-instrument the code base. In effect, it computes a diff on source files that it has already instrumented, rather than using commit diffs, but it achieves the effect you are looking for, IMHO.
You might try running them with 'prove', which has a 'fresh' option based on file modification times. Check the prove man page for details.
Disclaimer: I'm new to C unit testing and haven't used prove myself, but I have read about this option in my research.

Best/standard method for slowing down Silverlight Prism module loading (for testing)

During localhost testing of modular Prism-based Silverlight applications, the XAP modules download too fast to get a feel for the final result. This makes it difficult to see where progress, splash-screens, or other visual states, needs to be shown.
What is the best (or most standard) method for intentionally slowing down the loading of XAP modules and other content in a local development set-up?
I've been adding the occasional timer delay (via a code-based storyboard), but I would prefer something I can place under the hood (in, say, the Unity loader?) to add a substantial delay to all module loads, in debug builds only.
Suggestions welcomed*
*Note: I have investigated the "large file" option and it is unworkable for large projects (XAP creation fails with an out-of-memory error when the files get really large). The solution needs to be code based and should preferably integrate behind the scenes to slow down module loading in a local-host environment.
**Note: To clarify, we are specifically seeking an answer compatible with the Microsoft PRISM pattern & PRISM/CAL libraries.**
Do not add any files to your module projects. This adds unnecessary regression testing to your module, since you are changing the layout of the module by extending the non-executable portion. Chances are you won't do this regression testing, and who knows if it will cause a problem. Best to be paranoid.
Instead, come up with a Delay(int milliseconds) procedure that you pass into a callback that materializes the callback you use to retrieve the remote assembly.
In other words, decouple assembly resource acquisition from assembly resource usage. Between these two phases insert arbitrarily random amounts of wait time. I would also recommend logging the actual time it took remote users to get the assembly, and use that for future test points so that your UI Designers & QA Team have valuable information on how long users are waiting. This will allow you to cheaply mock-up the end-user's experience in your QA environment. Just make sure your log includes relevant details like the size of the assembly requested.
I posed a question on StackOverflow a few weeks ago about something related to this, and had to deal with the question you posed, so I am confident this is the right answer, born from experience, not cleverness.
You could simply add huge files (such as videos) to your module projects. It'll take longer to build such projects, but they'll also be bigger and therefore take longer to download locally. When you move to production, simply remove the huge files.
