I am doing unit testing with the SimpleTest framework and using Xdebug for code coverage reports. Let me explain my problem:
I have a class which I want to test; let's assume it is defined in pagination.php. I wrote another class for testing, with two test cases covering the pagination class. There are around 12 assertions across the two test cases, and all of them pass.
Now I want to generate a code coverage report, to show whether my test cases cover all of the code. For this I use Xdebug: I call xdebug_start_code_coverage() to begin collecting, and xdebug_get_code_coverage() to fetch the result.
Now the problem is that when I print the result of xdebug_get_code_coverage(), it gives me a two-dimensional associative array of filename, line number, and execution count. The result looks like this:
array
  'path/to/file/pagination.php' =>
    array
      11 => int 1
      113 => int 1
Line 11 is the start of the class and line 113 is the end of the class. I don't know why coverage is not going inside the class, and why it is not giving statement coverage for the class's functions. My test cases look OK to me, and I know all of the conditions and branches are being exercised.
I would really appreciate your help and guidance on how to solve this problem. Maybe I missed something here; if you need anything more, please let me know.
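For reference, the array returned by xdebug_get_code_coverage() has exactly the shape shown above: filename => (line number => hit count). A tiny pure-PHP helper can flatten it for easier inspection; the function name here is ours, not part of Xdebug, and the sample data is the question's:

```php
<?php
// Flatten Xdebug's file => [line => hits] coverage array into readable
// "file:line (hits)" strings. Pure PHP; no Xdebug needed to run this.
function summarize_coverage(array $coverage): array {
    $out = [];
    foreach ($coverage as $file => $lines) {
        foreach ($lines as $line => $hits) {
            $out[] = "$file:$line ($hits)";
        }
    }
    return $out;
}

// The data from the question: only the class's first and last lines appear.
$coverage = ['path/to/file/pagination.php' => [11 => 1, 113 => 1]];
print_r(summarize_coverage($coverage));
```

Printing the coverage this way makes it obvious when only the class declaration lines (11 and 113) were recorded and none of the method bodies.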
I implemented Xdebug code coverage for a class with invoked methods and it works fine. Though I have to say that I am a bit confused about how this tool defines "executable code", it definitely takes methods into account.
You might check the placement of your xdebug_start_code_coverage() and xdebug_get_code_coverage() calls, as they have to be invoked at the very beginning and the very end.
Also, you might check your Xdebug version, as there have been some accuracy improvements since the feature was released.
Best
Raffael
SimpleTest has a coverage extension that is fairly easy to set up. IIRC it is only in SVN, not in the normal packaged downloads (normally in simpletest/extensions/coverage/).
You can see these articles for examples of how to implement it:
http://www.acquia.com/blog/calculating-test-coverage
http://drupal.org/node/1208382
This is a general question, that may apply to any code coverage report.
In my example above, the highlighted line was hit over 77 times, but I'm unable to find the tests that exercise it (I'm working in a very big repo).
What would be the best way to find this out? Is there a flag I'm missing that would add this information to the report?
Thanks.
I don't believe that this is possible. Under the hood, the Istanbul code coverage tool is simply instrumenting your code into something like
cov_2ano4xc2b8().s[26]++;
const container = document.getElementById('container');
cov_2ano4xc2b8().s[27]++;
camera = new THREE.PerspectiveCamera();
cov_2ano4xc2b8().s[28]++;
(It may also be able to use V8's built-in code coverage features, but I expect the behavior for those is similar.)
To get per-test coverage, you could probably maintain a separate set of code coverage statistics for every test, then aggregate all of those when done, but I'm not aware of any existing tool that does this.
Absent that, there are manual alternatives, such as setting a breakpoint on an interesting line and then running the entire test suite in a debugger, or adding a throw new Error("THIS LINE WAS HIT"); at an interesting line then seeing which tests fail.
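The per-test bookkeeping described above could be sketched by snapshotting the statement counters around each test and diffing them. This is a toy model: the `coverage` object stands in for Istanbul's real counters, and every name here is ours, not an existing tool's API:

```javascript
// Toy per-test coverage: diff a global statement-hit map around each test.
// `coverage` stands in for Istanbul's instrumented counters (s[26]++ etc.).
const coverage = { s26: 0, s27: 0, s28: 0 };

function snapshot() {
  return { ...coverage };
}

// Which statements gained hits between the two snapshots?
function linesHitBy(before, after) {
  return Object.keys(after).filter(k => after[k] > before[k]);
}

const perTest = {};

function runTest(name, fn) {
  const before = snapshot();
  fn();
  perTest[name] = linesHitBy(before, coverage);
}

// Simulated tests: each one "executes" some instrumented statements.
runTest('creates container', () => { coverage.s26++; coverage.s27++; });
runTest('sets up camera', () => { coverage.s28++; });

console.log(perTest); // maps each test name to the statements it newly hit
```

A real implementation would reset or snapshot `global.__coverage__` in a test-runner hook (e.g. beforeEach/afterEach) instead of a hand-rolled `runTest`, and would then aggregate the per-test maps at the end.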
I am new to Salesforce Apex coding. The first class I am developing has 10 methods and is some 800 lines long.
I haven't added much exception handling yet, so the size will likely swell further.
I am wondering what the best practice for Apex is: should I create 10 classes with one method each instead of one class with 10 methods?
Any help on this would be greatly appreciated.
Thanks
Argee
What do you use for coding? Try to move away from the Developer Console. VS Code has some decent plugins, like Prettier or Apex PMD, that will help you with formatting and with flagging methods that are too complex. ~80 lines per method is so-so. I'd worry more about passing long lists of parameters and deeply nested code than about sheer method length.
There are general guidelines (from other languages; there's nothing special about Apex here!) that ideally a function should fit on one screen, so the programmer can see it whole without scrolling. Read this one, maybe it'll resonate with you: https://dzone.com/articles/rule-30-%E2%80%93-when-method-class-or
I wouldn't split it into separate files just for the sake of it, unless you can clearly define some separation of concerns. Say, one trigger per object, and one trigger handler class (ideally derived from a base class). Chunkier bits go not in the handler but in some "service"-style class with public static methods, which can operate whether called from a trigger, Visualforce, or a Lightning web component; maybe a one-off data fix would need them, and maybe in the future you'd need to expose part of it as a REST service. And keep a separate file for unit tests. (As blasphemous as it sounds, try not to write too many comments. While you're learning you'll need comments to remind yourself what built-in methods do, but naming your functions well helps a lot, and a well-written unit test demonstrates the idea behind the code, sample usage, and expected errors better than comments, which are often overlooked.)
Exception handling is an art. Sometimes it's good to just let the exception be thrown. If you have a method that creates an Account, a Contact, and an Opportunity, and say the Opportunity fails on a validation rule: what should happen? Only you will know what's right. An exception means the whole thing gets rolled back (no "widow" Accounts), which sucks, but it's probably a "more stable" state for your application. If you naively try-catch it without Database.rollback(), how will you tell the user not to create duplicates with a second click? So maybe you don't need too much error handling ;)
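The savepoint/rollback pattern mentioned above can be sketched like this (a minimal Apex sketch; the variable names are illustrative, not from any real code in this thread):

```apex
// Sketch: savepoint + rollback so a partial failure doesn't leave
// "widow" Accounts behind when a later insert fails.
Savepoint sp = Database.setSavepoint();
try {
    insert acc;
    insert con;
    insert opp;              // may fail on a validation rule
} catch (DmlException e) {
    Database.rollback(sp);   // undo the Account and Contact inserts too
    throw e;                 // or surface a friendly message to the user
}
```

Rethrowing after the rollback keeps the "more stable" all-or-nothing behavior while still letting you log or translate the error first.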
I am trying to implement some quantum error correcting codes in Liquid (please correct the tag if need be), and I thought I'd start by reproducing the Steane7 class discussed in the User's Manual, starting on page 55 (page 56 of the PDF). I have a couple of questions about the provided code, though.
The overall structure of the file is unclear to me. The example starts out by defining "type Steane7". This is a class definition, so I assume all of the following code is indented under it? On page 58 (59), the manual refers back to the class definition to add the overrides, which makes it seem like the preceding code is not indented under the type. I assume this means it is indented under the type but not under the synd method?
In the previously mentioned overrides on page 58 (59), what is s and where does it come from? In F#, one can use identifiers other than "this" and "self" for the self-reference. Is that what s is supposed to be here, or is s referring to a value previously defined but not shown?
The drawing instructions in the prep gate on page 56 (57) say "Error! Hyperlink reference not valid." What are the proper drawing instructions here? I'm guessing that's supposed to read "\multigate{#%d}{%s}"?
The method "fix" has an else with no if on page 58. What's the proper reference to the parent here?
Are there any pieces of the Steane7 class missing from the User's Manual? If I call this from a script, will it work exactly like the compiled version of the code?
For future codes I implement, are there any other methods which should be overridden? I am piecing together the QECC class by inspecting the compiled assembly in Visual Studio.
Frankly, all of these questions could be answered by someone simply pointing me to the source code for QECC and Steane7. The "source" folder I grabbed from the GitHub only has precompiled executables.
I just posted the source for Steane7 to the Liquid GitHub repo. I hope that helps!
I have 7 HTTPCalloutMock classes, with associated test classes that use them.
However, when checking my code coverage I notice that only 4 of them are listed, each with 0% coverage. I am trying to reach 90%, and these 4 classes are cramping my style.
I can detect no difference between the classes that get covered and those that do not. Attached is the pertinent code for a class that is not getting covered, despite the class being called (see the TestPardot() test method and note the system asserts that should fail if the mock was not called).
This is because your test class contains methods not marked as test methods. What is your goal? Do you want 100% coverage on this class, or do you want this class to be omitted from the code coverage totals?
If you want 100% coverage, remove the @isTest annotation and add logic so the code only runs when Test.isRunningTest() returns true.
If you want to omit it from the code coverage totals, you will need to rework it so that it only contains test methods.
The link below shows some scenarios, workarounds, and self-evaluation tools.
https://help.salesforce.com/articleView?id=Why-is-a-Test-class-evaluated-as-part-of-the-Organization-s-Code-Coverage&language=en_US&type=1
You can also try the @testSetup annotation on the mockPardotWSCall method; I don't know how that will affect the code coverage.
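The "omit from totals" option usually amounts to annotating the whole mock class with @isTest, since test classes are excluded from coverage calculations. A minimal Apex sketch (the class name follows the discussion above; the response body is illustrative):

```apex
// Sketch: an @isTest mock class is excluded from coverage totals,
// and its respond() method no longer counts as uncovered code.
@isTest
global class MockPardotWSCall implements HttpCalloutMock {
    global HTTPResponse respond(HTTPRequest req) {
        HttpResponse res = new HttpResponse();
        res.setStatusCode(200);
        res.setBody('{"ok":true}');   // illustrative payload
        return res;
    }
}
```

The test class then registers it with Test.setMock(HttpCalloutMock.class, new MockPardotWSCall()) before making the callout.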
I am trying to run a simple test on multiple browsers; here is a mock-up of the code I've got:
String url = "http://www.anyURL.com";
WebDriver[] drivers = { new FirefoxDriver(), new InternetExplorerDriver(),
        new ChromeDriver() };

@Test
public void testTitle() {
    for (int i = 0; i < drivers.length; i++) {
        // navigate to the desired url
        drivers[i].get(url);
        // assert that the page title starts with foo
        assertTrue(drivers[i].getTitle().startsWith("foo"));
        // close current browser session
        drivers[i].quit();
    } // end for
} // end test
For some reason this code opens multiple browsers, seemingly before the first iteration of the loop has completed.
What is actually happening here? And what is a good/better way to do this?
Please understand that I am by no means a professional programmer, and I am also brand new to using Selenium, so if what I am attempting is generally bad practice please let me know, but please don't be rude about it. I will respect your opinion much more if you are respectful in your answers.
No, it's not bad practice.
In fact, most test frameworks have convenient ways to handle sequential/parallel execution of tests. You can parametrize the test class to run the same tests on multiple browsers: TestNG has a Parameters annotation which can be used with testng.xml for cross-browser testing without duplicating code. An example is shown here.
I would not do that.
Most of the time it is pointless to immediately run your tests against multiple browsers. Most of the problems you run into as you develop new code or change old code are not due to browser incompatibilities. Sure, those happen, but most of the time a test will fail because, well, your logic is wrong, and it will fail not just on one browser but on all of them. What do you gain from being told X times rather than just once that your code is buggy? You've just wasted your time. I typically get the code working on Chrome and then run it against the other browsers.
(By the way, I run my tests against about 10 different combinations of OS, browser and browser version. 3 combinations is definitely not good enough for good coverage. IE 11 does not behave the same as IE 10, for instance. I know from experience.)
Moreover, the interleaving of tests from multiple browsers just seems generally confusing to me. I like one test report to cover only one configuration (OS, browser, browser version), so that if there are any problems I know exactly which configuration is problematic, without having to untangle what failed on which browser.
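As for why all the browsers open before the loop seems to start: the array initializer in the question constructs every driver up front, and constructing a driver launches its browser. One fix is to defer construction with java.util.function.Supplier, so each browser launches only when its loop iteration begins. A dependency-free sketch (Driver is a hypothetical stand-in for WebDriver so the example runs without Selenium):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Stand-in for Selenium's WebDriver; in real code the suppliers would be
// () -> new FirefoxDriver(), () -> new ChromeDriver(), and so on.
interface Driver {
    String getTitle();
    void quit();
}

public class LazyDrivers {
    // Records the order in which browsers open and close.
    static final List<String> log = new ArrayList<>();

    static Driver open(String name) {
        log.add("open " + name);
        return new Driver() {
            public String getTitle() { return "foo - " + name; }
            public void quit() { log.add("close " + name); }
        };
    }

    public static void main(String[] args) {
        // Suppliers defer construction: nothing launches until get() is
        // called. An initializer like { new FirefoxDriver(), ... } would
        // launch every browser immediately, as observed in the question.
        List<Supplier<Driver>> drivers = List.of(
                () -> open("firefox"),
                () -> open("chrome"));

        for (Supplier<Driver> make : drivers) {
            Driver d = make.get();   // browser opens here, one at a time
            if (!d.getTitle().startsWith("foo")) {
                throw new AssertionError("unexpected title");
            }
            d.quit();                // closed before the next one opens
        }
        System.out.println(log);
        // prints [open firefox, close firefox, open chrome, close chrome]
    }
}
```

The log shows strictly sequential open/close pairs, which is the behavior the original array-based version was expected to have.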