Why are four of my HTTPCalloutMock classes counting toward code coverage?

I have 7 HTTPCalloutMock classes, with associated test classes that use them.
However, when checking my code coverage I notice that only 4 of them are listed, each with 0% covered. I am trying to get to 90%, and these 4 classes are cramping my style.
I can detect no difference between the classes that get counted and those that do not. Attached is the pertinent code for a class that is not getting covered, despite the class being called (see the TestPardot() testMethod and note the System.assert calls that should fail if the mock was not called).

This is because your test class contains methods not marked as test methods. What is your goal? Do you want 100% coverage on this class, or for this class to be omitted from the code coverage totals?
If you want 100% coverage, remove the @isTest annotation and add logic to only run the code when Test.isRunningTest() returns true (a minimal sketch follows this answer).
If you want to omit it from the code coverage totals, you will need to re-work it so that it only includes test methods.
The link below shows some scenarios, work-arounds, and self-evaluation tools.
https://help.salesforce.com/articleView?id=Why-is-a-Test-class-evaluated-as-part-of-the-Organization-s-Code-Coverage&language=en_US&type=1
You can also try the @testSetup annotation on the mockPardotWSCall method. I don't know how that will affect the code coverage.
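For the first option, here is a minimal hypothetical sketch of a mock class with the @isTest annotation removed and a Test.isRunningTest() guard; the class name and response body are illustrative, not taken from your code:

global class PardotCalloutMock implements HttpCalloutMock {
    global HttpResponse respond(HttpRequest req) {
        // Guard so this logic only ever runs during a test
        if (!Test.isRunningTest()) {
            return null;
        }
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"status":"ok"}');
        res.setStatusCode(200);
        return res;
    }
}

A test method would register it as usual with Test.setMock(HttpCalloutMock.class, new PardotCalloutMock()); since the class is no longer a test class, the lines inside respond() now count as covered code.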

Related

In Jest coverage report, how can I find the test that was responsible for testing a specific line of code?

This is a general question that may apply to any code coverage report.
In my example above, the highlighted line was hit over 77 times, but I'm unable to find the tests which are testing it (I'm working on a very big repo).
What would be the best way to know this? Is there any flag I'm missing to add this info on top?
Thanks.
I don't believe that this is possible. Under the hood, the Istanbul code coverage tool simply instruments your code into something like
cov_2ano4xc2b8().s[26]++;
const container = document.getElementById('container');
cov_2ano4xc2b8().s[27]++;
camera = new THREE.PerspectiveCamera();
cov_2ano4xc2b8().s[28]++;
(It may also be able to use V8's built-in code coverage features, but I expect the behavior for those is similar.)
To get per-test coverage, you could probably maintain a separate set of code coverage statistics for every test, then aggregate all of those when done, but I'm not aware of any existing tool that does this.
Absent that, there are manual alternatives, such as setting a breakpoint on an interesting line and then running the entire test suite in a debugger, or adding a throw new Error("THIS LINE WAS HIT"); at an interesting line then seeing which tests fail.
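If you only care about one particular line, a rough do-it-yourself variation is possible, because Istanbul keeps its runtime counters on global.__coverage__. The sketch below is an assumption, not an existing tool: the file path and statement index are hypothetical, and you would first look the statement index up in the instrumented output (the s[26]-style counters shown above). It could live in a Jest setup file:

// all names below are illustrative
const FILE = '/abs/path/to/interesting-file.js'; // hypothetical path
const STATEMENT = '26'; // statement index taken from the instrumented code
let before = 0;

beforeEach(() => {
  const cov = global.__coverage__ && global.__coverage__[FILE];
  before = cov ? cov.s[STATEMENT] : 0;
});

afterEach(() => {
  const cov = global.__coverage__ && global.__coverage__[FILE];
  const after = cov ? cov.s[STATEMENT] : 0;
  if (after > before) {
    // Jest exposes the name of the test that just ran via expect.getState()
    console.log(expect.getState().currentTestName + ' hit the line');
  }
});

Run the suite with coverage enabled (so the instrumentation is active) and grep the output for the log line.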

Protractor: should I put assertions in my PageObject?

I have multiple scenarios in which I'd like to test pretty much the same things.
I am testing a backoffice and I have widgets (like autocomplete search for instance). And I want to make sure that the widget is not broken given that:
I just browse an article page
I saved a part of the article, which reloaded the page
1+2 then I played with some other widgets which have possible side effects
...
My first thought was to add some reusable methods to my WidgetPO (testWidgetStillWorksX ~).
After reading up on the subject, there are pros and cons either way, as discussed in http://martinfowler.com/bliki/PageObject.html
So how do you handle this, where do you put your reusable tests, and what difficulties/advantages have you had with either method?
Your question is quite broad, but the best way to write tests using the PageObject model is to keep assertions out of the PageObject file. To cut it short, here's a small explanation -
Difficulties -
Assertions are always a part of the test case/script, so it's better to put them in the scripts that you write.
Assertions in a PageObject disturb the modularity and reusability of the code.
They make it difficult to write/extend general functions in the PageObject.
A third person would need to jump from your test script to your PageObject every time to check your assertions.
Advantages -
You can always add methods/functions to your PageObject that do repetitive tasks other than assertions (like waiting for an element to load, getting the text of an element, etc...) and return a value.
Call the functions of the PageObject from your tests and use the returned value to perform assertions in your tests (see the sketch below).
Assertions in test scripts are easy to read and understand without a need to worry about the implementation of the PageObjects.
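A minimal Protractor sketch of that split; all selectors and names here are invented for illustration:

// searchWidget.po.js - the page object returns data, never asserts
var SearchWidget = function () {
  this.input = element(by.css('.autocomplete input'));
  this.suggestions = element.all(by.css('.autocomplete li'));
};

SearchWidget.prototype.search = function (term) {
  this.input.clear();
  this.input.sendKeys(term);
  return this.suggestions.count(); // a promise resolving to a number
};

module.exports = new SearchWidget();

// article.spec.js - the spec owns the assertions
var widget = require('./searchWidget.po.js');

it('autocomplete still works after the page reloads', function () {
  browser.refresh();
  expect(widget.search('foo')).toBeGreaterThan(0);
});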
Here's a good article on PageObjects. Hope this helps.

In C, should unit tests be written for the header file or the C file?

I have a header file which contains the declaration of a struct and some methods, and a C file which defines (implements) the struct and the methods. Now, while writing unit test cases, I need to check whether some struct variables (which do not have getter methods) are modified. Since the struct's definition is contained in the C file, should the unit test cases be based on the header file or the C file?
Test the interface of the component available to other parts of the system, the header in your case, and not the implementation details of the interface.
Unit tests make assertions about the behavior of a component but shouldn't depend on how that behavior is implemented. The tests describe what the component does, not how it is done. If you change your implementation but preserve the same behavior your tests should still pass.
If instead your tests depend on a specific implementation they will be brittle. Changing the implementation will require changing the tests. Not only is this extra work but it voids the reassurance the tests should offer. If you could run the existing test against the new implementation you would have some confidence that the new implementation has not changed behavior other components might depend on. Once you have to change the test to be able to run it against a new implementation you must carefully consider if you have changed any of the test's expectations in the process.
It may be important to test behavior not accessible using the public interface of this component. Consider that a good hint that this interface may not be well designed. TDD encourages a "test first" approach and this is one of the reasons why. If you start by defining the assertions you want to make about a component's behavior you must then design an interface which exposes that behavior. That is what makes the process "test driven".
If you must write tests after the component they are testing then at least try to use this opportunity to re-evaluate the design and learn from your test. Either this behavior is an implementation detail and not worth testing or else the interface should be updated to expose it (this may be more complicated than just making it public as it should also be safe and reasonable for other components of the system to access this now public attribute).
I would suggest that all testing, other than ephemeral stuff performed by developers during the development process, should be testing the API rather than the internals. That is, after all, the specification you're required to meet.
So, if you maintain stuff that's not visible to the outside world, that itself is not an aspect that needs direct testing. Instead, assuming something wrong there would affect the outside world, it's that effect you should test.
Encapsulation means being free to totally change the underlying implementation without the outside world being affected and you'll find that, if you code your tests based on the external view, no changes will be needed if you make such a change.
So, for example, if your unit is an address book, the things you should be testing are along the lines of:
can we insert an entry?
can we delete it?
can we retrieve it?
can we insert a hundred entries and query them in random order?
It's not things like:
do the structure's data fields get created properly?
is the linked list internally consistent?
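As a sketch, a black-box test for such an address book could look like the following; the header name, types, and function signatures are hypothetical stand-ins, not a real API:

/* test_address_book.c - exercises only the public interface */
#include <assert.h>
#include <string.h>
#include "address_book.h"   /* hypothetical header under test */

int main(void) {
    address_book_t *book = address_book_create();

    /* can we insert an entry? */
    assert(address_book_insert(book, "Ada", "ada@example.com") == 0);

    /* can we retrieve it? */
    const char *email = address_book_lookup(book, "Ada");
    assert(email != NULL && strcmp(email, "ada@example.com") == 0);

    /* can we delete it? */
    assert(address_book_delete(book, "Ada") == 0);
    assert(address_book_lookup(book, "Ada") == NULL);

    address_book_destroy(book);
    return 0;   /* nothing here touches the struct's internal fields */
}

Note that the test includes only the header; if the struct's layout changes but the behavior is preserved, this test still compiles and passes.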

JUnit 4 PermGen size overflow when running tests in Eclipse and Maven2

I'm doing some unit tests with JUnit, PowerMock and Mockito. I have a lot of test classes annotated with @RunWith(PowerMockRunner.class) and @PrepareForTest(SomeClassesNames) to mock final classes, and more than 200 test cases.
Recently I've run into a problem of a PermGen space overflow when I run my entire test suite in Eclipse or Maven2. When I run the tests one by one, each of them succeeds.
I did some research about that, but none of the advice helped me (I have increased -XX:PermSize and -XX:MaxPermSize). Recently I've found out that there is one class that contains only static methods, each of which returns an object mocked with PowerMockito. I'm wondering whether that is good practice, and whether it might be the origin of the problem, since static variables are shared between unit tests.
Generally speaking, is it good practice to have a class with a lot of static methods which return static mocked objects?
I am getting PermGen errors from JUnit in Eclipse too, although I am not using any mocking libraries like Mockito or EasyMock. However, my code base is large and my JUnit tests use Spring-Test (and are intense and complex test cases). For this, I need to truly increase the PermGen for all of my JUnit tests.
Eclipse applies the Installed JREs settings to the JUnit runs - not the eclipse.ini settings. So to change those:
Window > Preferences > Java > Installed JREs
select the default JRE, Edit... button
add to Default VM Arguments: -XX:MaxPermSize=196m
This setting will allow JUnit to run the more intense TestCases in Eclipse and avoid the OutOfMemoryError: PermGen. It should also be low risk, because most simple JUnit tests will not allocate all of that memory.
As @Brice says, the problems with PermGen will be coming from your extensive use of mocked objects. PowerMock and Mockito both create a new class which sits between the class being mocked and your test code. This class is created at runtime and loaded into PermGen, and is (practically) never recovered. Hence your problems with the PermGen space.
To your question:
1) Sharing of static variables is considered a code smell. It's necessary in some cases, but it introduces dependencies between tests: test A needs to run before test B.
2) Usage of static methods to return a mocked object isn't really a code smell; it's a pattern which is often used. If you really can't increase your PermGen space, you have a number of options:
Use a pool of mocks, with PowerMock#reset() when a mock is put back into the pool. This would cut down on the number of mock creations you're doing (a sketch follows).
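A minimal sketch of such a pool, using plain Mockito and its Mockito.reset() (PowerMock's reset plays the same role for PowerMock-created mocks); Foo is the same illustrative type used in the examples below:

import java.util.ArrayDeque;
import java.util.Deque;
import org.mockito.Mockito;

public class FooMockPool {
    private final Deque<Foo> pool = new ArrayDeque<Foo>();

    public synchronized Foo borrow() {
        // hand out an existing mock instead of creating a new one
        return pool.isEmpty() ? Mockito.mock(Foo.class) : pool.pop();
    }

    public synchronized void giveBack(Foo mock) {
        Mockito.reset(mock); // clear stubbing and recorded interactions
        pool.push(mock);
    }
}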
Secondly, you said that your classes are final. If this is changeable, then you could just use an anonymous class in the test. This again cuts down on the amount of PermGen space used:
Foo myMockObject = new Foo() {
    @Override
    public int getBar() { throw new UnsupportedOperationException(); }
};
Thirdly, you can introduce an interface (use Refactor -> Extract Interface in Eclipse), which you then implement with a class that does nothing. Then, in your test, you do something similar to the above. I use this technique quite a lot, because I find it easier to read:
public interface Foo {
    public int getBar();
}

public class MockFoo implements Foo {
    public int getBar() { return 0; }
}
then in the test:
Foo myMockObject = new MockFoo() {
    @Override
    public int getBar() { throw new UnsupportedOperationException(); }
};
I have to admit I'm not a particular fan of mocking; I use it only when necessary. I tend to either extend the class with an anonymous class or create a real MockXXX class. For more information on this point of view, see Mocking Mocking and Testing Outcomes by Uncle Bob.
By the way, in Maven Surefire you can always set forkMode=always, which will fork the JVM for each test class (a sketch follows). This won't solve your Eclipse problem though.
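In the POM that would look roughly like this; the MaxPermSize value is illustrative:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkMode>always</forkMode>              <!-- fresh JVM per test class -->
    <argLine>-XX:MaxPermSize=196m</argLine>  <!-- extra PermGen per fork -->
  </configuration>
</plugin>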
First: Mockito uses CGLIB to create mocks, and PowerMock uses Javassist for some other stuff, like removing the final markers; PowerMock also loads classes in a new ClassLoader. CGLIB is known for eating the permanent generation (just google CGLIB PermGen to find relevant results on the matter).
It's not a straight answer, as it depends on the details of your project:
1. As you pointed out, there is a static helper class. I don't know if it holds static variables with mocks as well; I don't know the details of your code, so this is pure guesswork, and other readers who actually know better might correct me.
2. The ClassLoader (and at least some of its children) that loaded this static class might be kept alive across tests - because of statics (which live in the Class realm) or because of some reference somewhere. That means that if the ClassLoader still lives (i.e. is not garbage collected), its loaded classes are not discarded, i.e. the classes, including the generated ones, are still in the PermGen.
These classes might also be huge in size. If you have a lot of these classes to load, it might be relevant to raise the PermGen values, especially since PowerMock needs to reload classes in a new ClassLoader for each test.
Again, I don't know the details of your project, so I'm just guessing, but your permanent generation issue might be caused by point 1 or point 2, or even both.
Anyway, generally speaking, I would say yes: having a static class that returns static mocked objects does look like a bad practice here, as it usually is in production code. If badly crafted it can lead to ClassLoader leaks (which are nasty!).
In practice I've seen hundreds of tests run (with Mockito only) without ever changing memory parameters and without seeing the CGLIB proxies being unloaded, and I'm not using static stuff apart from what the Mockito API provides.
If you are using a Sun/Oracle JVM, you can try these options to track what's happening:
-XX:+TraceClassLoading and -XX:+TraceClassUnloading or -verbose:class
Hope that helps.
Outside the scope of this question:
Personally, I don't like to use PowerMock anyway; I only use it in corner cases, e.g. for testing unmodifiable legacy code. PowerMock is too intrusive IMHO: it has to spawn a new ClassLoader for each test to perform its deeds (modifying the bytecode), and you have to heavily annotate the test classes to be able to mock, ...
In my opinion, for usual development all these little inconveniences outweigh the benefit of the ability to mock finals. Even Johan, the author of PowerMock, once told me he recommends Mockito instead, keeping PowerMock for specific purposes.
Don't get me wrong here: PowerMock is a fantastic piece of technology that really helps when you have to deal with (poorly) designed legacy code that you cannot change. But it's not for everyday development, especially if practicing TDD.

xdebug code coverage analysis with simpletest framework

I am doing unit testing with the SimpleTest framework and using Xdebug for code coverage reports. Let me explain my problem:
I have a class which I want to test; let's assume it is the pagination class in pagination.php.
I wrote another class for testing, with two test cases that test the pagination class.
There are around 12 assertions in the two test cases, all of which correctly pass.
Now I want to generate a code coverage report, and for this I use Xdebug to show whether my test cases cover all the code or not. I call xdebug_start_code_coverage() to start collecting and xdebug_get_code_coverage() to show the result.
Now the problem is that when I print xdebug_get_code_coverage(), it gives me a two-dimensional associative array with filename, line number, and execution count. The result is like this:
array
  'path/to/file/pagination.php' =>
    array
      11 => int 1
      113 => int 1
Line 11 is the start of the class and line 113 is the end of the class. I don't know why it is not going inside the class, and why it is not giving statement coverage for the class's functions. My test cases look OK to me, and I know all the conditions and branches are being covered.
I would really appreciate it if you could help me in this regard and guide me toward solving this problem.
Maybe I missed something here. If you need anything more, please let me know.
I implemented Xdebug code coverage for a class with invoked methods and it works fine. Though I have to say that I am a bit confused about how this tool defines "executable code", it definitely does take methods into account.
You might check the locations of your xdebug_start_code_coverage() and xdebug_get_code_coverage() calls, as those have to be invoked at the very beginning and the very end (see the sketch below).
Also, you might check your Xdebug version, as there have been some accuracy improvements since the feature was released.
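A minimal sketch of that placement, assuming a single hypothetical runner script (the test file name is invented):

<?php
// start collecting before the code under test is loaded or exercised
xdebug_start_code_coverage(XDEBUG_CC_UNUSED | XDEBUG_CC_DEAD_CODE);

require_once 'pagination.php';
require_once 'pagination_test.php'; // hypothetical SimpleTest test file

// ... run the SimpleTest suite here ...

// dump the collected statistics at the very end
var_dump(xdebug_get_code_coverage());
xdebug_stop_code_coverage();

The two XDEBUG_CC_* flags ask Xdebug to also report unexecuted and dead lines, which makes gaps inside the class visible.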
Best
Raffael
SimpleTest has a coverage extension that is fairly easy to set up. IIRC it is only in SVN and not in the normal packaged downloads (normally in simpletest/extensions/coverage/).
These articles give examples of how to implement it:
http://www.acquia.com/blog/calculating-test-coverage
http://drupal.org/node/1208382
