Tests with CUnit - walkthrough analysis / code coverage - C

I want to test some code with CUnit. Does anyone know if it is possible to do a walkthrough analysis?
I want to have something that says: you've tested 80% of your function.
It must be ensured that 100% coverage is reached by the tests.

There are a few tools that will help - the basic free one is gcov, which will do what you need but involves a certain amount of setup.
There are other (commercial) ones, but what's available changes, including whether non-commercial free/low-cost licences exist. That said, http://c2.com/cgi/wiki?CodeCoverageTools might be a worthwhile starting point if you need more than gcov.
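If you go the gcov route, the workflow is roughly: compile with instrumentation, run the test binary, then ask gcov for the percentages. A minimal sketch with CUnit (clamp() is an invented example function, not from the question; file names are illustrative):

#include <CUnit/Basic.h>

/* Invented example function under test. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

static void test_clamp(void)
{
    CU_ASSERT_EQUAL(clamp(5, 0, 10), 5);
    CU_ASSERT_EQUAL(clamp(-3, 0, 10), 0);
    /* Leave out the next assertion and gcov will report the "value > hi"
       line as never executed - that's your missing coverage. */
    CU_ASSERT_EQUAL(clamp(42, 0, 10), 10);
}

int main(void)
{
    if (CU_initialize_registry() != CUE_SUCCESS)
        return CU_get_error();
    CU_pSuite suite = CU_add_suite("clamp_suite", NULL, NULL);
    CU_add_test(suite, "test_clamp", test_clamp);
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    CU_cleanup_registry();
    return CU_get_error();
}

/* Build, run, report (gcc shown; clang works similarly):
     gcc -fprofile-arcs -ftest-coverage test_clamp.c -lcunit -o test_clamp
     ./test_clamp
     gcov test_clamp.c    # prints "Lines executed: NN% of M"          */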

Related

How To Get Salesforce Code Coverage Report Similar to Sonarqube

The question is: as we develop Apex code, we write test classes that give at least 75% code coverage of each Apex class. When I log in to the Developer Console I can see the code coverage, but this is a little awkward because one has to go to the Developer Console manually. I want a report on Salesforce test code coverage for the entire org that can be shown to senior managers.
Is it possible to get this in a polished report, very much like a SonarQube code coverage report?
Thanks in advance
Carolyn
SonarQube understands Apex, so if you have a license already it might be simpler than you think. There are other tools like Clayton (I'm not affiliated with either).
If you want to hand-craft something similar...
To get these results in a more readable format, you can start with a query in the Developer Console (query tab; at the bottom, tick the checkbox to use the Tooling API):
SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered
FROM ApexCodeCoverageAggregate
ORDER BY NumLinesUncovered DESC
LIMIT 10
It should give you a good idea which classes/triggers are least covered. Some of these will be quick wins: time spent creating or improving their tests will give you the best gains in overall coverage. It's better to spend an hour on a class that has 60 out of 100 lines covered than on one that has 2 out of 4 covered.
This is a "grand total" result for each file. If you want you can check how much each unit test covers:
SELECT ApexTestClass.Name, TestMethodName, ApexClassOrTrigger.Name, NumLinesUncovered, NumLinesCovered, Coverage
FROM ApexCodeCoverage
If you have developer tools like the Salesforce DX command line (sfdx-cli) and Visual Studio Code (VS Code) installed, you can do a bit more. SFDX will tell you which lines were covered and which weren't: https://salesforce.stackexchange.com/questions/232527/how-do-i-get-apex-code-coverage-statistics-when-using-salesforce-dx-visual-stu
And VS Code lets you install the Apex PMD plugin (PMD is a free tool similar to SonarQube). I doubt it'll produce a pretty PDF for management; it's designed to scan as you develop and give you warnings, just like Word and Outlook highlight typos and grammar errors.
Last but not least, try running Salesforce Optimizer from Setup. I don't think it reports on coverage, but it can flag high-level warning signs (Apex code that's old and hasn't been touched in a while - maybe you don't need that functionality anymore, maybe there's a built-in that works better, maybe it could be written more simply now, even as a Process Builder; objects that have more than one trigger on them, which goes against best practices; etc.).

Recreating KaTeX emulation in LaTeX?

I'm working on a site that uses KaTeX for rendering math. However, the interface for entering the math content is (really) not ideal, so it is actually faster for me to work in an editor like Sublime Text 3 and import the work. An issue I run into, though, is that on import I discover various functions/environments aren't supported (i.e. emulated) by KaTeX.
If it were just me working on the material, I would simply learn as I go and consult the KaTeX documentation page; however, I have several contractors working on digitizing content who do not have access to the site (and I don't have the ability to give them access), and so they cannot learn by trial and error. Instead, I end up with piles of documents that all need to be manually adjusted to render as desired with KaTeX.
As such, I wanted to assemble a preamble for a LaTeX document that would recreate the abilities (i.e. functions and environments) KaTeX can emulate, and was wondering if such a preamble / package already exists? I have tried a few quick searches, but because I'm looking for something that imitates an emulator, I'm finding it tricky to find the right choice of words to get relevant results.
I wasn't sure if this were best posted here or on TeX.SE - I suspect it falls in between the two - so I apologize if my guess was wrong and I should've tried there first. Any suggestions would be very much appreciated, as this is creating a substantial bottleneck in my workflow but is also just outside my ability to solve on my own.
Supported functions are one thing. To tackle that, you might actually stand a fair chance of just tokenizing the input, looking for backslash-name sequences and checking them against a list extracted from the KaTeX sources to see which are supported.
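A sketch of that tokenizing approach, in C for illustration (the supported-name list here is a tiny placeholder; in practice you would generate it from the KaTeX sources, and a real checker would read the contractors' .tex files rather than a hard-coded string):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Placeholder list - extract the real one from the KaTeX sources. */
static const char *supported[] = { "frac", "sqrt", "alpha", "begin", "end" };

static int is_supported(const char *name)
{
    for (size_t i = 0; i < sizeof supported / sizeof *supported; i++)
        if (strcmp(name, supported[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    const char *input = "\\frac{1}{2} + \\foo{x}";  /* stand-in for file contents */
    for (const char *p = input; *p; p++) {
        if (*p == '\\' && isalpha((unsigned char)p[1])) {
            char name[64];
            size_t n = 0;
            /* Collect the letters after the backslash into a name token. */
            while (isalpha((unsigned char)p[1]) && n < sizeof name - 1)
                name[n++] = *++p;
            name[n] = '\0';
            if (!is_supported(name))
                printf("not emulated by KaTeX: \\%s\n", name);
        }
    }
    return 0;
}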
I guess one could even try to remove all other functions from LaTeX. Or rather hide them, such that user input can't access them but third-party libraries can. Getting rid of language features (as opposed to macros) such as \def would probably be even harder. Better to ask on the TeX Stack Exchange for details if you really want to follow this route.
As an alternative, I guess you might be able to perform the check I described above in TeX itself: write a macro which reads the current file as plain text instead of TeX source to perform this analysis, or some such. But a separate stand-alone tool would be much easier.
If you are going for a separate tool, you might as well write it in JavaScript for Node, and have it run KaTeX on the input. That way you can at least tell whether it will get typeset to something or error out.
Whether the rendering is what you expect from LaTeX may be another question. In general KaTeX aims to reproduce LaTeX behaviour, so any difference might indicate a bug. But bugs exist, so all of this might not avoid the need for checks. How about just processing the math part of the input with KaTeX into HTML which authors can check without access to the site?
As for existing tools or macro packages, I know of none, but tool and library recommendation questions are off topic on Stack Exchange anyway.

Can anyone report experiences with HWUT for unit testing?

Can anyone report experiences with the HWUT tool (http://hwut.sourceforge.net)?
Has anyone used it over a longer period of time?
What about robustness?
Are features like test generation, state machine walking, and Makefile generation useful?
What about execution speed? Any experience in larger projects?
How well do its code coverage measurements perform?
I have been using HWUT for several years for software unit testing in larger automotive infotainment projects. It is easy to use, performance is great, and it does indeed cover state machine walkers, test generation and Makefile generation too. Code coverage works well. What I really like about HWUT are the state machine walker and test generation, since they allow you to create a large number of test cases in a very short time. I highly recommend HWUT.
Much faster than commercial tools, which can save a lot of time for larger projects!
I really like the idea of testing by comparing program output, which makes it easy to start writing tests and also works well when scaling the project later on. Combined with Makefile generation, it is really easy to set up a test.
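The core of that output-comparison idea: a test is just a small program that prints what the code under test does, and the printed text is diffed against a stored reference output. A minimal sketch of such a test file (celsius_to_fahrenheit() is an invented example, not part of HWUT):

#include <stdio.h>

/* Invented function under test. */
static int celsius_to_fahrenheit(int c) { return c * 9 / 5 + 32; }

int main(void)
{
    /* Every printed line becomes part of the test oracle: the first
       accepted run is stored as the reference, later runs are diffed
       against it, and any behavioural change shows up as a mismatch. */
    int samples[] = { -40, 0, 37, 100 };
    for (int i = 0; i < 4; i++)
        printf("%d C -> %d F\n", samples[i], celsius_to_fahrenheit(samples[i]));
    return 0;
}

Because the test artefacts are plain text (source, Makefile, reference output), they also check in cleanly to version control, as noted below.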
I used HWUT to test multiple software components. It is really straightforward, and as a software developer you don't have to click around in GUIs. You can just create a new source code file (*.c or whatever) and your test is nearly done.
This is very handy when using version control. You just have to check in the "test.c" file, the Makefile and the results of the test - that's it, no need to check in binary files.
I like using the generators which HWUT offers. They make it easily possible to create tens of thousands (or even more) of test cases, which is very handy if you want to test the boundary conditions of, e.g., a conversion function.

Test-Driven Development in CakePHP

I'm using CakePHP 2.3 and would like to know how to properly go about building a CakePHP website using test-driven development (TDD). I've read the official documentation on testing, read Mark Story's Testing CakePHP Controllers the hard way, and watched Mark Story's Win at life with Unit testing (PDF of slides) but am still confused. I should note that I've never been very good about writing tests in any language and don't have a lot of experience with it, and that is likely contributing to my confusion.
I'd like to see a step by step walkthrough with code examples on how to build a CakePHP website using TDD. There are articles on TDD, there are articles on testing with CakePHP, but I have yet to find an in-depth article that is about both. I want something that holds my hand through the whole process. I realize this is somewhat of a tall order because, unless my Google-fu is failing me, I'm pretty sure such an article hasn't yet been published, so I'm basically asking you to write an article (or a long Stack Overflow answer), which takes time. Because this is a tall order, I plan to start a bounty on this question worth a lot of points once I'm able to in order to better reward somebody for their efforts, should anybody be willing to do this. I thank you for your time.
TDD is a bit of a fallacy in that it's essentially just writing tests before you code, to ensure that you actually write tests.
All you need to do is create your tests for a thing before you go and create it. This requires thought and analysis of your use cases in order to write the tests.
So if you want someone to view data, you'll want to write a test for a controller. It'll probably be something like testViewSingleItem(), and you'll probably want to assertContains() some data that you expect.
Once this is written, it should fail, then you go write your controller method in order to make the test pass.
That's it. Just rinse and repeat for each use case. This is Unit Testing.
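To make the red-green loop concrete - sketched here in C with CUnit to match the rest of this thread, since the cycle is the same in any xUnit framework (CakePHP 2.x tests follow the same pattern on top of PHPUnit); discount() is an invented example:

#include <CUnit/Basic.h>

/* Step 2 (green): the simplest implementation that passes the test below.
   In TDD this file starts without it - the test fails first, and only
   then do you write the code. */
static double discount(double price, double percent)
{
    return price - price * percent / 100.0;
}

/* Step 1 (red): written first, before discount() exists. */
static void test_discount(void)
{
    CU_ASSERT_DOUBLE_EQUAL(discount(100.0, 25.0), 75.0, 0.001);
    CU_ASSERT_DOUBLE_EQUAL(discount(80.0, 0.0), 80.0, 0.001);
}

int main(void)
{
    CU_initialize_registry();
    CU_pSuite s = CU_add_suite("discount", NULL, NULL);
    CU_add_test(s, "test_discount", test_discount);
    CU_basic_run_tests();
    CU_cleanup_registry();
    return CU_get_error();
}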
Other tests, such as functional tests and integration tests, just test different aspects of your application. It's up to you to think about and decide which of these tests are useful for your project.
Most of the time unit testing is the way to go, as you can test individual parts of the application - usually the parts which impact the functionality the most, the "critical path".
This is an incredibly useful TDD tutorial. http://net.tutsplus.com/sessions/test-driven-php/

Determine which Unit Tests to Run Based on Diffs

Does anyone know of a tool that can help determine which unit tests should be run based on the diffs from a commit?
For example, assume a developer commits something that only changes one line of code. Now, assume that I have 1000 unit tests, with code coverage data for each unit test (or maybe just for each test suite). It is unlikely that the developer's one-line change will need to run all 1000 test cases. Instead, maybe only a few of those unit tests actually come into contact with this one-line change. Is there a tool out there that can help determine which test cases are relevant to a developer's code changes?
Thanks!
As far as I understand, a key purpose of unit testing is to cover the entire code base. When you make a small change to one file, all tests have to be executed to make sure your micro-change doesn't break the product. If you break this principle, there is little point in your unit testing.
P.S. I would suggest splitting the project into independent modules/services and creating new "integration unit tests" which validate the interfaces between them. But inside one module/service, all unit tests should be executed as "all or nothing".
You could probably use make or similar tools to do this by generating a results file for each test and making that results file depend on the source files the test uses (as well as on the unit test code).
Our family of Test Coverage tools can tell you which tests exercise which parts of the code, which is the basis for your answer.
They can also tell you which tests need to be re-run, when you re-instrument the code base. In effect, it computes a diff on source files that it has already instrumented, rather than using commit diffs, but it achieves the effect you are looking for, IMHO.
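Whichever tool produces the per-test coverage data, the selection step itself is straightforward once you have a map from tests to the files they touch. A toy sketch (the map and the changed-file list are hard-coded here; a real tool would build them from coverage reports and from something like git diff --name-only):

#include <stdio.h>
#include <string.h>

/* Hypothetical coverage map: which source files each test exercises.
   A real tool would build this from gcov or instrumentation output. */
struct test_cov { const char *test; const char *files[4]; };

static const struct test_cov coverage[] = {
    { "test_parser",  { "parser.c", "lexer.c" } },
    { "test_lexer",   { "lexer.c" } },
    { "test_network", { "socket.c", "buffer.c" } },
};

int main(void)
{
    /* Files changed in the commit (from the diff). */
    const char *changed[] = { "lexer.c" };

    for (size_t t = 0; t < sizeof coverage / sizeof *coverage; t++)
        for (size_t f = 0; f < 4 && coverage[t].files[f]; f++)
            for (size_t c = 0; c < sizeof changed / sizeof *changed; c++)
                if (strcmp(coverage[t].files[f], changed[c]) == 0)
                    printf("re-run: %s\n", coverage[t].test);
    /* A real tool would deduplicate if a test touches several changed files. */
    return 0;
}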
You might try running them with 'prove', which has a 'fresh' option based on file modification times. Check the prove manpage for details.
Disclaimer: I'm new to C unit testing and haven't used prove myself, but I read about this option in my research.
