My question: as we develop Apex code we write test classes that give at least 75% coverage of each Apex class. When I log in to the Developer Console I can see the code coverage, but that's a little awkward because someone has to go into the Developer Console manually. I want a report that can be shown to senior managers about Salesforce test code coverage for the entire org.
Is it possible to get this as a polished report, very much like a SonarQube code coverage report?
Thanks in advance
Carolyn
SonarQube understands Apex, so if you already have a license it might be simpler than you think. There are other tools like Clayton (I'm not affiliated with either).
If you want to hand-craft something similar...
To get these results in a more readable format you can start with a query in the Developer Console (Query Editor tab; at the bottom tick the checkbox to use the Tooling API).
SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered
FROM ApexCodeCoverageAggregate
ORDER BY NumLinesUncovered DESC
LIMIT 10
It should give you a good idea which classes/triggers are least covered. Some of these will be quick wins: time spent creating or improving their tests will give you the best return on overall coverage. I mean it's better to spend an hour fixing a class that has 60 out of 100 lines covered than one that has 2 out of 4 covered.
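If you want the familiar percentage instead of raw line counts, it's just NumLinesCovered / (NumLinesCovered + NumLinesUncovered) * 100 per row, and the org-wide figure is the same calculation done over the sums of those two columns across all rows.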
This is a "grand total" result for each file. If you want, you can check how much each unit test method covers:
SELECT ApexTestClass.Name, TestMethodName, ApexClassOrTrigger.Name, NumLinesUncovered, NumLinesCovered, Coverage
FROM ApexCodeCoverage
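The Coverage column comes back as two arrays of line numbers (coveredLines and uncoveredLines), so you can see exactly which lines a given test method touched, not just the totals.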
If you have developer tools installed, like the Salesforce DX command line (sfdx-cli) and Visual Studio Code (VS Code), you can do a bit more. SFDX will tell you exactly which lines were covered and which weren't: https://salesforce.stackexchange.com/questions/232527/how-do-i-get-apex-code-coverage-statistics-when-using-salesforce-dx-visual-stu
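If you do have sfdx-cli, roughly something like this will run all local tests and print the coverage (exact flags can differ between CLI versions, so treat this as a sketch):
sfdx force:apex:test:run --testlevel RunLocalTests --codecoverage --resultformat human --wait 10
The human-readable output should include a per-class coverage table and the org-wide percentage at the end.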
And VS Code lets you install the Apex PMD plugin (PMD is a free tool similar to SonarQube). I doubt it'll produce a pretty PDF for management; it's designed to scan as you develop and give you warnings, just like Word and Outlook highlight typos and grammar errors.
Last but not least, try running Salesforce Optimizer from Setup. I don't think it reports on coverage, but it can point out high-level warning signs (Apex code that's old and hasn't been touched in a while: maybe you don't need that functionality anymore, maybe there's a built-in that works better, maybe it could be written more simply now, even as a Process Builder; objects that have more than one trigger on them, which goes against best practices, etc.).
Related
My company hired an agency to create an MQL Salesforce object. It's constructed from an Apex class with various triggers.
We no longer have a need for it, and as the standing Salesforce admin, there is no one at the company who knows Apex. I'm taking classes to learn it, but wanted to check in and see how I can deprecate the object from Salesforce by archiving/deleting it (or even just commenting out the code) and push the update to production.
Does anyone have insight into how to go about doing this? All of the courses I've taken cover a basic understanding of Apex and how to write small triggers, classes and queries. The agency that built the class left zero documentation on its code.
You can't write code in production, so whatever you try to do will have to be done in a sandbox, tested and then deployed.
There's a way to do a "destructive deployment" and really delete it, but you'll need programming tools (VS Code, the Eclipse IDE, or Ant + the Migration Tool). It's a bit of an advanced topic; I'd suggest you hire a dev ;) or try to just comment the code out.
In the sandbox you can comment out the body (bodies?) of the triggers and classes. You shouldn't kill the whole file; leave some empty skeletons like
public with sharing class MqlGenerator{
/* kill everything
*/
}
trigger MqlTrigger on MQL__c (after insert){
/* kill everything
*/
}
Of course if there's a trigger on Account that does 10 things and only 2 of them relate to MQL, don't comment everything out ;) It'll be a bit of trial and error for you, depending on how clean the code is.
You will have to touch triggers, normal classes and likely unit tests too, because if they did a decent job there will be tests that verify these triggers do something, and those tests will start to fail.
Add the files to a changeset as you go (you do use changesets, right? It doesn't sound like you deploy with Git + SFDX, for example). From time to time run Apex Classes -> Compile all classes and run the unit tests. Some manual testing wouldn't hurt either. If you are unsure what's left, you can click on MQL's fields; there's a "Where is this used?" button. Or even try clicking delete and repeating until it succeeds ;)
After you deploy this changeset...
If MQL__c itself has no triggers (for example it is created during Account updates but doesn't have triggers of its own), you might actually be able to delete the object. If there are related triggers, workflows etc., SF will stop you. The only way to really delete everything is to run that destructive deploy. It's possible without installing anything: use the link I included, and Workbench, for example, will let you make the deployment. But it's a bit "pro"; if you're unsure, start with commenting stuff out and maybe leave the empty skeletons until you're more comfortable. You can always hide the object's tab and remove the right to Read the object, and it'll disappear from list views, reports... it'll be an eyesore only for sysadmins.
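For reference, the destructive deploy itself is mostly two small XML files zipped together and pushed through Workbench (migration -> Deploy). The member names below just reuse the skeleton examples from above; swap in whatever the agency actually called things, and use the API version your org is on:
destructiveChanges.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>MqlTrigger</members>
        <name>ApexTrigger</name>
    </types>
    <types>
        <members>MqlGenerator</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>MQL__c</members>
        <name>CustomObject</name>
    </types>
</Package>
package.xml (deliberately empty apart from the version):
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>48.0</version>
</Package>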
If the object has to stay around but its data storage is significant, you could try truncating the object. If that gives you trouble: Data Loader, export all records (just the IDs), then delete them, maybe even with the hard delete option so you skip the recycle bin.
I am working on a new project where the client's pre-existing production code has a low coverage of 72%, which prevents me from deploying any work done in the sandbox.
Error:
Code Coverage Failure
Your code coverage is 72%. You need at least 75% coverage to complete this deployment.
Does anyone have recommendations as to how to increase code coverage?
Compile all classes in production
Run all your unit tests (local ones, no need to run tests that come with managed packages)
Go to the Developer Console, open the Query Editor, and tick the Tooling API checkbox at the bottom
Run this query
SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered
FROM ApexCodeCoverageAggregate
ORDER BY NumLinesUncovered DESC
LIMIT 10
It should give you a good idea which classes/triggers are least covered. Some of these will be quick wins: time spent creating or improving their tests will give you the best return on overall coverage. I mean it's better to spend an hour fixing a class that has 60 out of 100 lines covered than one that has 2 out of 4 covered. Work in the sandbox until you're above 75%.
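For the classes at the top of that list, even one straightforward test method can move the needle a lot. A minimal sketch (QuoteService.recalculate is a made-up example, swap in your real class and assert on whatever it actually does):
@isTest
private class QuoteServiceTest {
    @isTest
    static void coversHappyPath() {
        // set up whatever records the class under test needs
        Account a = new Account(Name = 'Coverage Test');
        insert a;

        Test.startTest();
        QuoteService.recalculate(a.Id); // hypothetical method under test
        Test.stopTest();

        // assert on a real outcome, don't just chase the percentage
        System.assertEquals(1, [SELECT COUNT() FROM Account WHERE Id = :a.Id]);
    }
}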
(There's a chance your sandbox is outdated and somebody created validation rules, required fields etc. straight in production without deploying... that's why I asked you to compile & run all tests in prod.)
If there are classes/methods that aren't used anymore and it'd be safe to delete them: you can't do that with a changeset, you need a special destructive deployment. For now you could comment them out and deploy that version. Just check whether this is actually beneficial for you (of course it's good to get rid of old code, easier maintenance... but if it happens to be well covered with tests you'll shoot yourself in the foot).
Add the created/updated test classes to the changeset and you should be able to deploy it to prod.
Can anyone report experiences with the HWUT tool (http://hwut.sourceforge.net)?
Has anyone used it over a longer period of time?
What about robustness?
Are the features like test generation, state machine walking, and Makefile generation useful?
What about execution speed? Any experience in larger projects?
How well does the code coverage measurement perform?
I have been using HWUT for several years for software unit testing in larger automotive infotainment projects. It is easy to use, performance is great, and it does indeed cover state machine walkers, test generation and makefile generation too. Code coverage works well. What I really like about HWUT are the state machine walker and test generation, since they allow you to create a large number of test cases in a very short time. I highly recommend HWUT.
It is much faster than commercial tools, which can save a lot of time on larger projects!
I really like the idea of testing by comparing program output, which makes it easy to start writing tests and also works well when the project scales later on. Combined with makefile generation it is really easy to set up a test.
I used HWUT to test multiple software components. It is really straightforward, and as a software developer you don't have to click around in GUIs. You can just create a new source code file (*.c or whatever) and your test is nearly done.
This is very handy when using version control. You just have to check in the "test.c" file, the Makefile and the results of the test; that's it, no need to check in binary files.
I like using the generators which HWUT offers. With them it is easy to create tens of thousands (or even more) of test cases, which is very handy if you want to test the boundary conditions of e.g. a conversion function.
I want to test some code with CUnit. Does anyone know if it is possible to do a walkthrough analysis?
I want to have something that says you've tested 80% of your function.
It must be ensured that 100% coverage is reached with the tests.
There are a few tools that will help. The basic free one is gcov, which will do what you need but involves a certain amount of setup, etc.
There are other (commercial) ones, but what's available changes, including whether there are non-commercial free/low-cost licences. Having said that, http://c2.com/cgi/wiki?CodeCoverageTools might be worth a look as a starting point if you need more than gcov.
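If gcov turns out to be enough, the setup is roughly this (assuming gcc and a CUnit-based test binary; check the exact flags against your toolchain):
gcc --coverage -O0 -o run_tests my_module.c tests.c -lcunit
./run_tests
gcov my_module.c
That writes my_module.c.gcov with a per-line execution count; lines marked ##### were never executed, which gives you the "you've tested 80% of your function" style number.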
We are in the process of defining our software development process and wanted to get some feedback from the group on this topic.
Our team is spread out across the US, Canada and India, and I would like to put in place some simple standard rules that all teams will apply to their code.
We make use of ClearCase/ClearQuest and RAD.
I have been looking at PMD, CPD, Checkstyle and FindBugs as a start.
My thought is to just put these into Ant and have the developers run them manually. I realize that doing this means you have to trust that each developer will actually do it.
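For the Ant route, each of those tools ships an Ant task, so one target per tool is usually enough. A rough sketch for PMD (the classpath reference and the ruleset path are assumptions; they differ between PMD versions):
<taskdef name="pmd" classname="net.sourceforge.pmd.ant.PMDTask" classpathref="pmd.classpath"/>
<target name="pmd">
    <pmd rulesetfiles="rulesets/java/basic.xml">
        <formatter type="html" toFile="reports/pmd.html"/>
        <fileset dir="src" includes="**/*.java"/>
    </pmd>
</target>
Checkstyle and FindBugs have equivalent taskdefs, so developers can run a single Ant target locally and the CI server can run the same target on check-in.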
The other thought is to add some builders to the IDE which would run a subset of the rules (to keep the build process light) and then run another, heavier set when they check in the code.
Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools along with the unit tests whenever ClearCase/ClearQuest is idle.
I'm wondering if others have done this, whether it was successful, and whether you can provide lessons learned.
We have:
ClearCase used with Hudson for any "heavy" static analysis step
the Eclipse IDE with the tools you mentioned integrated, running a smaller set of rules
Note: we haven't really managed to make replicas work across our different user bases (US, Europe, Hong Kong), and we are using CCRC instead of MultiSite.
ClearCase being used mainly in Europe, the analysis step takes place during the night there (GMT), and uses snapshot views to make sure it goes as quickly as possible (a dynamic view involves too much network traffic when accessing large files).
I'd use Hudson to run static analysis on SCM changes if your code base is not too large, or on periodic builds if it is.
OK, I can't resist... If your team is spread out, why in the world would you use ClearCase? As someone who had to use it: when our company switched to Mercurial, team velocity improved immensely. That multi-site junk is just awful.