Information on TAP and TDD for C

Specifically, I am looking for information such as:
Is TAP for regression testing and TDD for unit testing, or
are they mutually exclusive (no need to use both of them)?
Bonus for suggesting a good unit test framework for TDD in C (and please address what makes it good as well :) ).
Finally, can cMockery (Google's code), which is not derived from the xUnit patterns, be used for TDD of C code? How?
Added for clarity:
TAP is the Test Anything Protocol; you can find documentation on CPAN (the Perl archive). libtap is a TAP implementation for C. http://www.onlamp.com/pub/a/onlamp/2006/01/19/libtap.html?page=1 gives a good explanation of TAP in C.
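
For illustration, here is a minimal sketch of a libtap test; the API shown (plan_tests(), ok(), ok1(), exit_status()) is the one described in the onlamp article above, so treat this as a sketch rather than a definitive reference:

    /* tap_example.c: a minimal libtap test; its output is plain TAP
       ("1..2", "ok 1 - ...") that a harness such as prove can consume. */
    #include <tap.h>

    int main(void)
    {
        plan_tests(2);                    /* announce how many tests follow */
        ok(1 + 1 == 2, "integer addition works");
        ok1(2 * 2 == 4);                  /* ok1() prints the expression itself as the description */
        return exit_status();             /* non-zero if any test failed */
    }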

For unit testing frameworks for C, you can refer to this question.
There is no conflict between regression and unit testing, as the unit tests are used as a safety net to detect undesired changes.
You certainly can use TAP for TDD; there is no contraindication. If you already use Perl's Test::More, then sharing the same output format can be helpful.
Why do you ask whether cMockery can be used for TDD? Do you think it cannot? Why?
TDD and unit test frameworks are just means, not ends.
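
To make the cMockery question concrete: below is a minimal sketch of one TDD step with cMockery, assuming its documented API (the required standard headers, unit_test(), run_tests(), assert_int_equal()). The celsius_to_fahrenheit() function is a hypothetical unit under test that you would only implement after watching this test fail.

    /* test_convert.c: a minimal cMockery test; in TDD you write this first,
       watch it fail, then implement celsius_to_fahrenheit() to make it pass. */
    #include <stdarg.h>
    #include <stddef.h>
    #include <setjmp.h>
    #include <cmockery.h>

    /* hypothetical unit under test (normally lives in its own .c file) */
    static int celsius_to_fahrenheit(int c) { return c * 9 / 5 + 32; }

    static void test_celsius_to_fahrenheit(void **state)
    {
        (void) state;  /* unused fixture state */
        assert_int_equal(celsius_to_fahrenheit(0), 32);
        assert_int_equal(celsius_to_fahrenheit(100), 212);
    }

    int main(void)
    {
        const UnitTest tests[] = {
            unit_test(test_celsius_to_fahrenheit),
        };
        return run_tests(tests);
    }

cMockery also provides mock() and will_return() for stubbing out dependencies, which is what makes it usable for the mock-heavy flavor of TDD as well.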

I guess you're referring to this TAP: "Tests and Proofs". TAP is a conference where people talk about TDD and about ways to mathematically prove that a program is correct. So the two are not really related (a way to write software vs. a forum where you can talk about this topic).
TDD is used both for unit tests and for regression testing. For details, see this answer.
I haven't used any TDD frameworks for C, but googling for "unit testing c" yields a couple of interesting links.

I use CUnitWin32 as my testing framework. The front page highlights the positives.


How to begin writing code for e2e testing in Ionic?

I am new to Ionic and decided that I wanted to integrate testing while building my app, but I am a little confused.
Should I write tests for functions that are already written, or for functions that I have yet to write?
Should I test every function?
I would appreciate it if someone could explain the logic flow for me.
Well, you could look into TDD (Test-Driven Development), and that should answer your questions.
However, to provide a short answer here as well: you should write a test first and code later (for the why, consult the link above). In practice, testing every function may not be viable, but TDD certainly enforces it.
One thing to note: you wrote e2e, but what you'll actually be doing at the start is unit testing your functions (e2e comes later).

Can anyone report experiences with HWUT for unit testing?

Can anyone report experiences with the HWUT tool (http://hwut.sourceforge.net)?
Has anyone used it over a longer period of time?
What about robustness?
Are features like test generation, state machine walking, and Makefile generation useful?
What about execution speed? Any experience in larger projects?
How well do code coverage measurements perform?
I have been using HWUT for several years for software unit testing in larger automotive infotainment projects. It is easy to use, performance is great, and it does indeed cover state machine walkers, test generation, and makefile generation, too. Code coverage works well. What I really like about HWUT are the state machine walker and test generation, since they allow you to create a large number of test cases in a very short time. I highly recommend HWUT.
It is much faster than commercial tools, which can save a lot of time in larger projects!
I really like the idea of testing by comparing program output, which makes it easy to start writing tests and also works well when the project scales later on. Combined with makefile generation, it is really easy to set up a test.
I used HWUT to test multiple software components. It is really straightforward, and as a software developer you don't have to click around in GUIs. You can just create a new source code file (*.c or whatever) and your test is nearly done.
This is very handy when using version control. You just have to check in the "test.c" file, the Makefile, and the results of the test; that's it, no need to check in binary files.
I like using the generators which HWUT offers. Using them, it is easy to create tens of thousands (or even more) of test cases, which is very handy if you want to test the border conditions of, e.g., a convert function.
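
To illustrate the output-comparison idea mentioned above: an HWUT-style test is essentially a small program that prints its results to stdout, which the tool then diffs against a previously approved ("good") output. The sketch below uses a hypothetical convert() function; HWUT's exact file and directory conventions are not reproduced here.

    /* test_convert.c: an output-comparison test. The harness runs this
       program and diffs its stdout against an approved reference output;
       any difference is reported as a failure. */
    #include <stdio.h>

    /* hypothetical unit under test */
    static int convert(int celsius) { return celsius * 9 / 5 + 32; }

    int main(void)
    {
        /* one printed line per case; border conditions included on purpose */
        for (int c = -40; c <= 100; c += 20) {
            printf("convert(%4d) -> %4d\n", c, convert(c));
        }
        return 0;
    }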

Test-Driven Development in CakePHP

I'm using CakePHP 2.3 and would like to know how to properly go about building a CakePHP website using test-driven development (TDD). I've read the official documentation on testing, read Mark Story's "Testing CakePHP Controllers the hard way", and watched Mark Story's "Win at life with Unit testing" (PDF of slides), but am still confused. I should note that I've never been very good about writing tests in any language and don't have much experience with it, which is likely contributing to my confusion.
I'd like to see a step-by-step walkthrough with code examples on how to build a CakePHP website using TDD. There are articles on TDD and there are articles on testing with CakePHP, but I have yet to find an in-depth article about both. I want something that holds my hand through the whole process. I realize this is somewhat of a tall order because, unless my Google-fu is failing me, I'm pretty sure such an article hasn't been published yet, so I'm basically asking you to write an article (or a long Stack Overflow answer), which takes time. Because this is a tall order, I plan to start a bounty on this question worth a lot of points once I'm able to, in order to better reward somebody for their efforts, should anybody be willing to do this. Thank you for your time.
TDD is a bit of a fallacy in that it's essentially just writing tests before you code, to ensure that you actually write tests.
All you need to do is create your tests for a thing before you go and create it. This requires thought about and analysis of your use cases in order to write the tests.
So if you want someone to be able to view data, you'll want to write a test for a controller. It'll probably be something like testViewSingleItem(), in which you'll probably want to assertContains() some data that you expect.
Once this is written, it should fail; then you go and write your controller method in order to make the test pass.
That's it. Just rinse and repeat for each use case. This is unit testing.
Other tests, such as functional tests and integration tests, just test different aspects of your application. It's up to you to think about and decide which of these tests are useful for your project.
Most of the time unit testing is the way to go, as you can test individual parts of the application, usually the parts which impact the functionality the most (the "critical path").
This is an incredibly useful TDD tutorial. http://net.tutsplus.com/sessions/test-driven-php/

Simplest and cheapest way to start WPF testing

I have a (closed-source) WPF application containing mainly two modules: a UI exe and a "Model" dll. One screen (for now) and about 30 classes.
I would like to start testing it with testing tools.
I have ReSharper.
I don't have time :). I don't want to start learning about factories, mocking, IoC, and so on. And I don't want to disturb the code too much (re IoC etc.).
I don't have a lot of money. I saw a recommendation here for SmartBear's TestComplete, and then I saw its $2K price tag and balked at the price: at $99 I would weep but pay, and you can't beat free :)
So, my question is: "What is the simplest and cheapest way for me to start WPF testing, not necessarily the best solution but something that will provide some benefit at low cost?"
If you want to go the free route, you can have a look at the System.Windows.Automation namespace: http://msdn.microsoft.com/en-us/library/system.windows.automation.aspx
See this article: http://msdn.microsoft.com/en-us/magazine/dd483216.aspx
A free and interesting approach is Approval Tests: http://approvaltests.sourceforge.net/. You would essentially "approve" your UI and then execute tests against your app. If the resulting UI doesn't match the approved version, the test fails. The comparison here is based on images of your UI; this clearly has pros and cons compared to other testing approaches.
This video is an example of using Approval Tests with WPF: http://www.youtube.com/watch?v=Xc_ty03lZ9U&list=PL0C32F89E8BBB5368&index=17&feature=plpp_video
Probably the easiest way is to concentrate on testing the 30 (non-GUI) classes. It seems (but I'm not sure) that most of the functionality is in those 30 classes.
If it is designed well, the model (those 30 classes) can be tested reasonably easily.
For GUI testing, a lot more effort is normally needed.
So if you want to spend less time, concentrate on testing the model.
For testing the model, what you normally do is: write stubs for external components (if needed), set the input parameters (depending on your app), and check whether the "output" is what you expect.
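The question is about .NET, but the stub-and-check pattern itself is language-agnostic. As a rough illustration, here is the shape of such a test in C (all names are hypothetical); in a real project the stub would replace the production implementation at link time rather than sharing a file with the unit under test:

    /* sketch: unit-testing a model function by stubbing its external dependency */
    #include <assert.h>

    static int stub_sensor_value;                         /* test-controlled input */
    int read_sensor(void) { return stub_sensor_value; }   /* stub for the external component */

    /* unit under test (normally part of the model, calling the real read_sensor()) */
    int is_overheated(void) { return read_sensor() > 90; }

    int main(void)
    {
        stub_sensor_value = 95;      /* arrange: set the input parameters */
        assert(is_overheated());     /* act and check the output */

        stub_sensor_value = 20;
        assert(!is_overheated());
        return 0;
    }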

Determine which Unit Tests to Run Based on Diffs

Does anyone know of a tool that can help determine which unit tests should be run based on the diffs from a commit?
For example, assume a developer commits something that changes only one line of code. Now, assume that I have 1000 unit tests, with code coverage data for each unit test (or maybe just for each test suite). It is unlikely that the developer's one-line change will require running all 1000 test cases. Instead, maybe only a few of those unit tests actually touch this one-line change. Is there a tool out there that can help determine which test cases are relevant to a developer's code changes?
Thanks!
As far as I understand, the key purpose of unit testing is to cover the entire code base. When you make a small change to one file, all tests have to be executed to make sure your micro-change doesn't break the product. If you break this principle, there is little point to your unit testing.
P.S. I would suggest splitting the project into independent modules/services and creating new "integration unit tests" which validate the interfaces between them. But within one module/service, all unit tests should be executed as "all or nothing".
You could probably use make or similar tools to do this by generating a results file for each test and making the results file depend on the source files that the test uses (as well as on the unit test code itself).
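A rough sketch of that make-based idea (all file names are hypothetical): each test's result file depends on the test binary, and each test binary depends on the sources it exercises, so make re-runs only the tests whose inputs changed since the last run.

    # run only the tests whose inputs changed since the last run
    TESTS := test_foo test_bar

    check: $(TESTS:%=%.result)

    # each test binary lists the production sources it exercises
    test_foo: test_foo.c foo.c foo.h
    	$(CC) $(CFLAGS) -o $@ test_foo.c foo.c

    test_bar: test_bar.c bar.c bar.h
    	$(CC) $(CFLAGS) -o $@ test_bar.c bar.c

    # a result file is regenerated only when its test binary changed
    %.result: %
    	./$< > $@ || (rm -f $@; exit 1)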
Our family of Test Coverage tools can tell you which tests exercise which parts of the code, which is the basis for answering your question.
They can also tell you which tests need to be re-run when you re-instrument the code base. In effect, it computes a diff on source files that it has already instrumented, rather than using commit diffs, but it achieves the effect you are looking for, IMHO.
You might try running them with prove, which has a "fresh" option (--state=fresh) based on file modification times. Check the prove manpage for details.
Disclaimer: I'm new to C unit testing and haven't used prove myself, but I read about this option in my research.
