EasyMock vs Mockito: design vs maintainability? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
One way of thinking about this is: if we care about the design of the code, then EasyMock is the better choice, as it gives you feedback through its concept of expectations.
If we care about the maintainability of tests (tests that are easier to read and write, and less brittle tests that are not affected much by change), then Mockito seems the better choice.
My questions are:
If you have used EasyMock in large scale projects, do you find that your tests are harder to maintain?
What are the limitations of Mockito (other than endo testing)?

I won't argue about the test readability, size, or testing techniques of these frameworks; I believe they are equal. Instead, I'll show you the difference with a simple example.
Given: We have a class which is responsible for storing something somewhere:
public class Service {

    public static final String PATH = "path";
    public static final String NAME = "name";
    public static final String CONTENT = "content";

    private FileDao dao;

    public void doSomething() {
        dao.store(PATH, NAME, IOUtils.toInputStream(CONTENT));
    }

    public void setDao(FileDao dao) {
        this.dao = dao;
    }
}
and we want to test it:
Mockito:
public class ServiceMockitoTest {

    private Service service;

    @Mock
    private FileDao dao;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
        service = new Service();
        service.setDao(dao);
    }

    @Test
    public void testDoSomething() throws Exception {
        // given
        // when
        service.doSomething();
        // then
        ArgumentCaptor<InputStream> captor = ArgumentCaptor.forClass(InputStream.class);
        Mockito.verify(dao, times(1)).store(eq(Service.PATH), eq(Service.NAME), captor.capture());
        assertThat(Service.CONTENT, is(IOUtils.toString(captor.getValue())));
    }
}
EasyMock:
public class ServiceEasyMockTest {

    private Service service;
    private FileDao dao;

    @Before
    public void setUp() {
        dao = EasyMock.createNiceMock(FileDao.class);
        service = new Service();
        service.setDao(dao);
    }

    @Test
    public void testDoSomething() throws Exception {
        // given
        Capture<InputStream> captured = new Capture<InputStream>();
        dao.store(eq(Service.PATH), eq(Service.NAME), capture(captured));
        replay(dao);
        // when
        service.doSomething();
        // then
        assertThat(Service.CONTENT, is(IOUtils.toString(captured.getValue())));
        verify(dao);
    }
}
As you can see, both tests are essentially the same, and both of them pass.
Now, let's imagine that somebody else changes the Service implementation and tries to run the tests.
New Service implementation:
dao.store(PATH + separator, NAME, IOUtils.toInputStream(CONTENT));
A separator was added at the end of the PATH constant.
What will the test results look like now? First of all, both tests will fail, but with different error messages:
EasyMock:
java.lang.AssertionError: Nothing captured yet
at org.easymock.Capture.getValue(Capture.java:78)
at ServiceEasyMockTest.testDoSomething(ServiceEasyMockTest.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
Mockito:
Argument(s) are different! Wanted:
dao.store(
"path",
"name",
<Capturing argument>
);
-> at ServiceMockitoTest.testDoSomething(ServiceMockitoTest.java:34)
Actual invocation has different arguments:
dao.store(
"path\",
"name",
java.io.ByteArrayInputStream@1c99159
);
-> at Service.doSomething(Service.java:13)
What happened in the EasyMock test? Why wasn't the result captured? So the store method wasn't executed? But wait a minute, it was. Why does EasyMock lie to us?
It's because EasyMock mixes two responsibilities in a single line: stubbing and verification. That's why, when something goes wrong, it's hard to understand which part is causing the failure.
Of course you can tell me: just change the test and move verify before the assertion. Wow, are you serious? Developers should keep in mind some magic order enforced by the mocking framework?
By the way, it won't help:
java.lang.AssertionError:
Expectation failure on verify:
store("path", "name", capture(Nothing captured yet)): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:111)
at org.easymock.classextension.EasyMock.verify(EasyMock.java:211)
It is still telling me that the method was not executed, but it was, only with different parameters.
Why is Mockito better? This framework doesn't mix the two responsibilities in a single place, so when your tests fail, you will easily understand why.
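To make that separation concrete, here is a minimal Mockito sketch built on the Service/FileDao example above. The doNothing() stub is redundant for a void method and is shown only to mark where stubbing lives; treat this as an illustrative sketch, not the only way to write the test:

import static org.mockito.Mockito.*;

import java.io.InputStream;
import org.junit.Before;
import org.junit.Test;

public class ServicePhasesTest {

    private FileDao dao;
    private Service service;

    @Before
    public void setUp() {
        dao = mock(FileDao.class);
        service = new Service();
        service.setDao(dao);
    }

    @Test
    public void stubbingAndVerificationAreSeparatePhases() throws Exception {
        // given - stubbing only; nothing on this line counts as a verification
        doNothing().when(dao).store(anyString(), anyString(), any(InputStream.class));

        // when - exercise the code under test
        service.doSomething();

        // then - verification only, so a mismatch is reported as a verification failure
        verify(dao).store(eq(Service.PATH), eq(Service.NAME), any(InputStream.class));
        verifyNoMoreInteractions(dao);
    }
}

If the Service implementation changes the path, only the verify line fails, and the failure message points at the verification itself rather than at a "Nothing captured yet" artifact.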

if we care about the Design of the code then Easymock is the better choice as it gives feedback to you by its concept of expectations
Interesting. I found that the 'concept of expectations' makes many devs put more and more expectations in their tests only to satisfy the UnexpectedMethodCall problem. How does that influence the design?
The test should not break when you change the code. The test should break when the feature stops working. If you like tests that break whenever any code change happens, I suggest writing a test that asserts the md5 checksum of the java file :)

I'm an EasyMock developer, so I'm a bit partial, but of course I've used EasyMock on large-scale projects.
My opinion is that EasyMock tests will indeed break once in a while. EasyMock forces you to do a complete recording of what you expect. This requires some discipline: you should record what is expected, not what the tested method currently needs. For instance, if it doesn't matter how many times a method is called on a mock, don't be afraid of using andStubReturn. Also, if you don't care about a parameter, use anyObject(), and so on. Thinking in TDD can help with that.
My analysis is that EasyMock tests will break more often, but Mockito ones won't break when you would want them to. I prefer my tests to break. At least I'm aware of what the impact of my change was. This is, of course, my personal point of view.
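As a rough sketch of that kind of "relaxed" recording, reusing the Service/FileDao example from above (the commented-out exists() stub is hypothetical, shown only to illustrate where andStubReturn would fit):

import static org.easymock.EasyMock.*;

import java.io.InputStream;
import org.junit.Test;

public class ServiceEasyMockRelaxedTest {

    @Test
    public void recordsIntentRatherThanImplementationDetails() throws Exception {
        FileDao dao = createMock(FileDao.class);

        // Record what actually matters: store() is called with this path and name.
        // The exact stream instance and the call count are deliberately left loose.
        dao.store(eq(Service.PATH), eq(Service.NAME), anyObject(InputStream.class));
        expectLastCall().atLeastOnce();

        // If FileDao had a query method, andStubReturn() would stub it without
        // counting invocations (exists() is hypothetical, shown only for the syntax):
        // expect(dao.exists(anyObject(String.class))).andStubReturn(true);

        replay(dao);

        Service service = new Service();
        service.setDao(dao);
        service.doSomething();

        verify(dao);
    }
}

Recording intent (any stream, at least once) instead of the exact call the current implementation happens to make is what keeps this style of test from breaking on harmless refactorings.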

I don't think you should be too concerned about this. Both EasyMock and Mockito can be configured to be 'strict' or 'nice'; the only difference is that by default EasyMock is strict whereas Mockito is nice.
As with all testing there's no hard and fast rule; you need to balance test confidence against maintainability. I typically find there are certain functional or technical areas that demand a high level of confidence, for which I would use 'strict' mocks. For example, we probably wouldn't want the debitAccount() method to be called more than once! However, there are other cases in which the mock is really little more than a stub so that we can test the real 'meat' of the code.
In the early days of Mockito's life, API compatibility was a problem, but more tools now support the framework. PowerMock (a personal favorite) now has a Mockito extension.
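A small sketch of what that choice looks like in code; the AccountDao interface is made up purely to echo the debitAccount() example above:

import static org.mockito.Mockito.*;

public class StrictnessSketch {

    // Hypothetical collaborator, named after the debitAccount() example above.
    interface AccountDao {
        void debitAccount(String accountId, long amountCents);
    }

    public static void main(String[] args) {
        // EasyMock: strictness is chosen when the mock is created.
        AccountDao strictMock = org.easymock.EasyMock.createStrictMock(AccountDao.class); // unexpected or out-of-order calls fail
        AccountDao niceMock = org.easymock.EasyMock.createNiceMock(AccountDao.class);     // unexpected calls are silently ignored

        // Mockito: mocks are lenient by default, and strictness is opted into per verification.
        AccountDao dao = mock(AccountDao.class);
        dao.debitAccount("ACC-1", 500L);                   // stands in for the code under test

        verify(dao, times(1)).debitAccount("ACC-1", 500L); // this call really must happen exactly once
        verifyNoMoreInteractions(dao);                     // and nothing else may be touched
    }
}

Broadly speaking, with EasyMock the strictness is a property of the mock itself, while with Mockito it is opted into per verification, which is why Mockito feels 'nice' by default.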

To be honest, I prefer Mockito. I have been using EasyMock with Unitils, and the combination of the two often results in exceptions like IllegalArgumentException: not an interface, as well as MissingBehaviorExceptions. In both cases the code and the test code were perfectly fine. It turned out that the MissingBehaviorException was caused by mocked objects created with createMock (from the class extensions!), while using @Mock worked. I do not like that kind of misleading behavior, and for me it is a clear indication that the developers do not know what they are doing. A good framework should always be easy to use and unambiguous. The IllegalArgumentException was also due to some tangle of EasyMock internals.
Also, recording is not what I want to do. I want to test whether my code throws exceptions or not and whether it returns the expected results. That, in combination with code coverage, is the right tool for me. I do not want my tests to break whenever I move one line of code above or below another because that improves performance or so. With Mockito that is no problem; with EasyMock it will cause tests to fail even though the code is not broken. That is bad. It costs time, and thus money. You want to test for expected behavior. Do you really care about the order of things? I suppose in rare occasions you might; use EasyMock then (see the sketch below for how Mockito handles ordering only on request). Otherwise, I think you'll spend considerably less time writing your tests with Mockito.
Kind regards
Lawrence
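For completeness, a small sketch of how Mockito treats ordering as opt-in; the AuditLog interface is invented only for this illustration:

import static org.mockito.Mockito.*;

import org.mockito.InOrder;

public class OrderSketch {

    // Hypothetical collaborator invented only for this illustration.
    interface AuditLog {
        void opened(String name);
        void closed(String name);
    }

    public static void main(String[] args) {
        AuditLog log = mock(AuditLog.class);

        // Stands in for the code under test:
        log.opened("report.txt");
        log.closed("report.txt");

        // By default Mockito does not care in which order the calls happened:
        verify(log).closed("report.txt");
        verify(log).opened("report.txt");

        // Ordering is only checked when you explicitly ask for it:
        InOrder inOrder = inOrder(log);
        inOrder.verify(log).opened("report.txt");
        inOrder.verify(log).closed("report.txt");
    }
}

EasyMock gives you a similar choice with createStrictMock versus createMock, but there the decision is baked in when the mock is created rather than at verification time.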

Related

Using "using" statements for every object implementing IDisposable?

I'm currently skimming through some code that reads Active Directory entries and manipulates them. Since I haven't had to deal with this kind of stuff before, I F12'd the classes (DirectoryEntry, SearchResultCollection, ...), and I found out they all implement the IDisposable interface, but I couldn't see any using blocks in our code.
Are those even necessary in this case (i.e., should I blindly refactor them in)?
Another question of mine regarding this (there are very many instantiated IDisposable objects in the code): isn't IDisposable making things very "ugly" in this case? I mean, I like using statements, as they basically free my mind from worrying about things, but in many cases the code has a layout similar to the following:
using (var one = myObject.GetSomeDisposableObject())
using (var two = myObject.GetSomeOtherDisposableObject())
{
    one.DoSomething();

    using (var foo = new DisposableFoo())
    {
        MyMethod(foo);

        using (...)
        using (...)
        {
            ...
        }
    }
}
I feel that this becomes quite unreadable due to the high indentation levels (even when stacking the using statements). But extracting some of this into new methods can lead to many parameters that need to be passed, since the "inner" code naturally often needs the objects created in the outer using statements.
What is an elegant way to solve this without losing readability?
For the first part, this question refers to 'memory used by the task increasing constantly' when not disposing of AD references.
For the second, a using block is syntactic sugar for a try/finally with the Dispose call in the finally block; writing the try/finally yourself is an alternative construct that lets you dispose of everything in one place and reduces indentation.

Which of these functions is more testable in C?

I write code in C. I have been striving to write more testable code, but I am a little confused about choosing between writing pure functions, which are really good for testing but require smaller functions and, in my opinion, hurt readability, and writing functions that modify some internal state.
For example (all state variables are declared static and hence are "private" to my module):
Which of these is more testable in your opinion:
int outer_API_bar()
{
    // Modify internal state
    internal_foo();
}

int internal_foo()
{
    // Do stuff
    if (internal_state_variable)
    {
        // Do some more stuff
        internal_state_variable = false;
    }
}
OR
int outer_API_bar()
{
    // Modify internal state
    internal_foo(internal_state_variable);

    // This could be another function if repeated many
    // times in the module
    if (internal_state_variable)
    {
        internal_state_variable = false;
    }
}

int internal_foo(bool arg)
{
    // Do stuff
    if (arg)
    {
        // Do some more stuff
    }
}
Although the second implementation is more testable with respect to internal_foo, as it has no side effects, it makes bar uglier and requires smaller functions, which make it hard for the reader to follow even small snippets, as attention constantly has to shift between different functions.
Which one do you think is better? Compare this to writing OOP code: the private functions most of the time use internal state and are not pure. Testing is done by setting up internal state on a mock object instance and testing the private function. I am getting a little confused about whether to use internal state directly or to pass it in to private functions for the sake of "testability".
Whenever writing automated tests, ideally we want to focus on testing the specification of that unit of code, not its implementation (otherwise we create fragile tests that break whenever we modify the implementation). Therefore, what happens internally in the object should not be a concern of the test.
For this example, I would look to build a test that:
Executes the code under test by calling outer_API_bar.
Asserts the correct behavior of the call using other publicly accessible functions and/or state (there must be some way of doing this: if the only side effect of calling outer_API_bar were internal to this unit of code, then calling the function could not impact your wider application in any way and would essentially be useless).
This way you are able to keep functions like internal_foo and variables like internal_state_variable as implementation details, which you can freely change when refactoring your code (i.e. to make it more readable) without having to change your tests.
NOTE: This suggestion is based on my own personal preference for only testing public functions, and not private ones. You will find much debate on this topic, where some people pose good arguments for testing private functions being a valid thing to do.
To answer your question very specifically: pure functions are waaaaay more 'testable' than any other kind of abstraction. The more pure functions you can include, the more testable your code will be. As you rightly mention, this can come at the cost of readability, and I am sure there are other trade-offs to consider. My suggestion would be to aim for more pure functions and look for other techniques that compensate on the readability side of things.
Both snippets are testable via mocks. The second one, however, has the advantage that you can also check the argument of internal_foo(bool arg) for an expected value of true or false when the mock for internal_foo() is invoked. In my opinion, that makes for a more meaningful test.
Depending on the rest of the code that we don't know, testing without mocks may be more difficult.

Terminate activity diagram from subactivity

I'm trying to draw a UML activity diagram for a function that is (highly simplified) represented by the following code snippet. My intention is to have a subactivity for the lines that check the mode parameter (if-else).
ErrorType DoSomething(int mode) {
    if (mode == MODE1) {
        ...
    }
    else {
        return MODE_NOT_AVAILABLE;
    }

    SomethingElse...

    return NO_ERROR;
}
As you can see, the return statement in the else block terminates the function DoSomething. So if it is executed, there is no way for SomethingElse... to be executed.
As I mentioned, this else block should be in a subactivity.
How do I visualize that an action in a subactivity (return MODE_NOT_AVAILABLE) has the consequence that its parent activity has to reach a final state?
In the following picture you can see my attempt at solving it. Is this a correct solution?
Since you are dealing with some kind of exception, I'd model it with an exception handler like the one shown here: http://www.sparxsystems.com.au/images/screenshots/uml2_tutorial/ad11.GIF. Even though your concrete implementation uses if/else, that should be a way that makes it easy to understand what you want to achieve (preventing the subroutine from being executed in the wrong mode).
You can see more details about the notation here: http://edn.embarcadero.com/article/30169
It depends on how much you want to dictate the actual implementation. UML itself is language-unaware, and so are most stakeholders.

Unit testing opaque structure based C API

I have a library I wrote with an API based on opaque structures. Using opaque structures has a lot of benefits and I am very happy with it.
Now that my API is stable in terms of its specification, I'd like to write a complete battery of unit tests to ensure a solid base before releasing it.
My concern is simple: how do you unit test an API based on opaque structures, where the main goal is to hide the internal logic?
For example, let's take a very simple object, an array with a very simple test:
WSArray a = WSArrayCreate();
int foo = 5;

WSArrayAppendValue(a, &foo);

int *bar = WSArrayGetValueAtIndex(a, 0);

if (&foo != bar)
    printf("Erroneous value returned\n");
else
    printf("Good value returned\n");

WSRelease(a);
Of course, this tests some facts, like that the array actually behaves as wanted with one value, but when I write unit tests, at least in C, I usually compare the memory footprint of my data structures with a known state.
In my example, I don't know if some internal state of the array is broken.
How would you handle that? I'd really like to avoid adding code to the implementation files only for unit testing; I really emphasize loose coupling of modules, and injecting unit tests into the implementation would seem rather invasive to me.
My first thought was to include the implementation file in my unit test, linking my unit test statically against my library.
For example:
#include <WS/WS.h>
#include <WS/Collection/Array.c>

static void TestArray(void)
{
    WSArray a = WSArrayCreate();

    /* Structure members are available because we included Array.c */
    printf("%d\n", a->count);
}
Is that a good idea?
Of course, the unit tests won't benefit from encapsulation, but they are here to ensure it's actually working.
I would test only the API, and focus on testing every possible corner case.
I can see the interest in checking that the memory structures hold what you expect, but if you do this you will be tightly coupling the tests to the specifics of the implementation, and I think you will be creating a lot of long-term maintenance.
My thought here is that the API is the contract, and if you fulfil that then your code is working. If you change the implementation later, then presumably one of the things you need to know is that the contract is maintained. Your unit tests will verify that.
Your unit tests shouldn't depend on the internal details of the code that they're testing. Your initial example is actually a pretty good test. It does one thing, then verifies that the state of the object is as expected.
You'd want to create tests that verify the behavior of other parts of the API as well, of course. For example, in the array case, you'd want test cases that verify that the length of the array is reported correctly after adding and removing items.
Writing unit tests that depend on an exact match with a known-good memory snapshot is generally a really bad idea, in that every implementation change will cause the tests to fail. If you do decide to use snapshot-based tests, make sure there's an easy way to regenerate the "known good" snapshots.
I would suggest splitting the unit testing into black box and white box testing. The black box testing focuses on the API interface and the correctness of results, while the white box testing focuses on the internals.
To facilitate this I use a private header (e.g. example_priv.h) with an #ifdef TESTING around the prototypes of functions that are otherwise internal/private. That way you can exercise internal functions for unit-testing purposes without exposing them in the general case.
The only loss with this method is the ability to explicitly label the internal functions as static in their source file.
I hope that is helpful.

Most difficult programming explanation

Recently I tried to explain some poorly designed code to my project manager. All of the manager classes are singletons ("and that's why I can't easily change this") and the code uses event dispatching everywhere that a function call would have sufficed ("and that's why it's so hard to debug"). Sadly it just came out as a fumbling mess of English.
What's the most difficult thing you've had to convey to a non-technical person as a programmer? Did you find any analogies or ways of explaining that made it clearer?
Thread Synchronization and Dead-Locking.
Spending time on design, and spending time on refactoring.
Refactoring produces no client-visible work at all, which makes it the hardest thing in the project to justify working on.
As a second "not client-visible" problem, unit testing.
I was asked how the internet worked - I responded with "SYN, ACK, ACK". Keep forgetting it's SYN, SYN-ACK, ACK..
My most difficult question began innocently enough: my girlfriend asked how text is rendered in Firefox. I answered simply with something along the lines of "rendering engine, Gecko, HTML parser, blah blah blah."
Then it went downhill. "Well how does Gecko know what to display then?"
It spiraled from there, quite literally, down to the graphics drivers, operating system, compilers, hardware architectures, and the raw 1s and 0s. I not only realized there were significant gaps in my own knowledge of the layering hierarchy, but also that, in the end, I had left her (and me!) more confused than when I began.
I should've initially answered "turtles all the way down" and stuck with that. :P
I had a fun case of trying to explain why a program wasn't behaving as expected when some records in a database had empty strings and some were NULL. I think their head just about exploded when I told them an empty string is just a string with 0 bytes in it, while NULL means an unknown value, so you can't actually compare it to anything.
Afterward I had one nasty headache.
1.) SQL: Thinking in sets, rather than procedurally (it's hard enough for us programmers to grasp!).
2.) ...and here's a great example of demystifying technical concepts:
How I explained REST to my wife
A lot of statements starting with "It's because in Oracle, ..." come to my mind.
The biggest hurdles are around "technical debt", especially explaining how the architecture was correct for this version but needs to be changed for the next version. This is similar to the problem of explaining "prototype versus production" and "version 1.0 versus version 2.0".
Worst mistake I ever made was doing a UI mockup in NeXTSTEP's UI Builder. It looked exactly like the end product would look and had some behaviour. Trying to explain that there were still 6 months of work remaining after that was very difficult.
How recursion works...
Why code like this is bad:
private void button1_Click(object sender, EventArgs e)
{
    System.Threading.ThreadStart start =
        new System.Threading.ThreadStart(SomeFunction);
    System.Threading.Thread thread = new System.Threading.Thread(start);

    _SomeFunctionFinished = false;
    thread.Start();

    while (!_SomeFunctionFinished)
    {
        System.Threading.Thread.Sleep(1000);
    }

    // do something else that can only be done after SomeFunction() is finished
}

private bool _SomeFunctionFinished;

private void SomeFunction()
{
    // do some elaborate $##%#
    _SomeFunctionFinished = true;
}
Update: what this code should be:
private void button1_Click(object sender, EventArgs e)
{
    SomeFunction();
    // do something else that can only be done after SomeFunction() is finished
}

private void SomeFunction()
{
    // do some elaborate $##%#
}
The importance of unit tests.
"Adding a new programmer a month to this late task will make it ship later. Never mind, read this book." (The Mythical Man-Month.) Managers still don't quite get it.
The concept of recursion - some people find it really hard to grasp.
I sometimes have a really hard time explaining the concept of covariance/contravariance and the problems related to them to fellow programmers.
Convincing a friend that the Facebook application I developed really doesn't store her personal data (e.g. her name) even though it still displays it.
Why it'll take another four weeks to put this app into production. After all, it only took a week to do the rapid prototype. It "works" (or at least looks like it does) so I should be pretty much finished, shouldn't I?
Explanations that involve security, code quality (maintainability), normalized DB schemas, testing, etc. usually come off as a list of abstractions that don't have any visible effect on the app, so it's hard to explain what they really contribute to the project and why they're necessary. Sometimes analogies can only take you so far.
C pointers
*i
&i
Avoiding Dead-Locking in a multi-threaded environment.
I cleared up the confusion by explaining it visually on a whiteboard, drawing two parallel lines and showing what happens when they reach the same points at the same time.
Also by role-playing two threads with the person I was explaining it to, and using physical objects (book, coffee mug, etc.) to show what happens when we both try to use something at once.
There's really no right or wrong answer-proper for this... it's all experiences.
The hardest thing I have had to explain to a non-tech person was why he couldn't get to his website when traveling abroad but his family member that lived there (with a totally different provider) could get to it. Somehow, "Fail in Finland" wasn't good enough.
The most difficult concepts to explain to people I would label programmers, as opposed to developers, are some of the core paradigms of object-oriented design, most specifically abstraction, encapsulation and, the king of them all, polymorphism, and how to use them correctly.
Expanding on that is the complexity of explaining what Inversion of Control is and why it is an absolute need rather than just extra layers of code that don't do anything.
I was going to comment on Mikael's post, that some people just take to sequential programming and unfortunately just stay with that.
But that really means: two seriously hard to explain concepts:
monads in haskell (usually starting with: "That's like a function that returns a function that does what you really wanted to do, but ...")
deferreds in twisted/python ("That's like... ehhh... Just use it for a year or so and you'll get it" ;) )
Trying to explain why code was executed sequentially at all. Seemingly this is not at all intuitive for some non-programmers (i.e. my girlfriend).
Why you do not need character correct index handling in most cases when you use UTF-8 strings.
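A tiny Java sketch of that point (Java only because it is the language used elsewhere on this page; the string literal is an arbitrary example): searching UTF-8 bytes for an ASCII delimiter never needs character-correct indexing, because every byte of a multi-byte sequence has its high bit set and therefore can never collide with an ASCII value.

import java.nio.charset.StandardCharsets;

public class Utf8ByteSearch {
    public static void main(String[] args) {
        String s = "na\u00efve, \u65e5\u672c\u8a9e";       // "naïve, " followed by three CJK characters
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);

        // Byte-wise scan for the ASCII comma. Safe: continuation bytes are >= 0x80,
        // so the value 0x2C (',') can only ever be the comma itself.
        int byteIndex = -1;
        for (int i = 0; i < utf8.length; i++) {
            if (utf8[i] == ',') {
                byteIndex = i;
                break;
            }
        }

        System.out.println("byte offset of ','     : " + byteIndex);      // 6, because the two-byte 'ï' precedes it
        System.out.println("character index of ',' : " + s.indexOf(',')); // 5
    }
}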
It's hard to explain why most software has bugs. Many non-technical people have no idea how complex software is, and how easy it is to overlook unexpected conditions. They think we are just too lazy to fix stuff that we know is broken.
There are 10 different types of people in the world.
The people who understand binary and the people who don't...
To put it plainly: why development is the most difficult concept ever exposed to mankind. Not related to any programming language, but in general. And no, I am not trying to give myself or you an ego boost; the only real limitation in this field is your mind.
Why? We don't work with constants and there are no boundaries; the only reason an AI that thinks like a human being doesn't exist yet is our own limitations. All other aspects need to adhere to some sort of law, but development doesn't care about the laws of physics or any law for that matter, hence the term development... evolution.
