Log calls to mocked methods with EasyMock

Is it possible to log all calls to mocked methods, and preferably where each call came from? (Without spending a lot of time on an implementation of my own.)
That would sometimes be very helpful for debugging.
(And yes, my need for it probably indicates that the code is too complex and the test covers too much functionality. But sometimes you have to live with bad code.)

With EasyMock it's quite easy: anything not recorded will throw an exception at the point where it is called.
If you just want to log calls, you do not need a mock for that; a basic proxy will do it. I can give a code example if needed.
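For illustration, here is a minimal sketch of that proxy idea, written in TypeScript rather than Java/EasyMock (in Java the analogue would be a java.lang.reflect.Proxy); the repository collaborator is made up. The wrapper logs each method call with its arguments and the call site taken from a stack trace:

```typescript
// Sketch only: wrap a real (or fake) collaborator so every call is logged,
// including a rough call site pulled from a stack trace.
function loggingProxy<T extends object>(target: T, name: string): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value !== "function") {
        return value;
      }
      return (...args: unknown[]) => {
        // Line 0 of the stack is the Error header, line 1 is this wrapper, line 2 is the caller.
        const caller = (new Error().stack ?? "").split("\n")[2]?.trim() ?? "unknown call site";
        console.log(`${name}.${String(prop)}(${JSON.stringify(args)}) called from ${caller}`);
        return value.apply(obj, args);
      };
    },
  });
}

// Hypothetical usage: hand the wrapped collaborator to the code under test.
const repository = loggingProxy({ findById: (id: number) => ({ id }) }, "repository");
repository.findById(42); // logs: repository.findById([42]) called from ...
```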

Related

Is there a reason to encapsulate if I am the only one using my code?

I understand that we encapsulate data to prevent access to things that don't need to be accessed by developers working with my code. However, I only program as a hobby and do not release any of my code for other people to use. I still encapsulate, but it mostly just seems like I'm doing it for the sake of good policy and building the habit. So, is there any reason to encapsulate data when I know I am the only one who will be using my code?
Encapsulation is not only about hiding data.
It is also about hiding implementation details.
When such details are hidden, you are forced to go through the class's defined API, and the class itself is the only one that can change its internals.
So imagine a situation where you have opened all methods up to any class interested in them, and one of those methods performs some calculation. Then you realize the logic is not right and you want to replace it, or you need to perform a more complicated calculation.
In such cases you sometimes have to change every place across your application that computes the result, instead of changing it in only one place: the API you provided.
So don't make everything public; it leads to strong coupling and pain during every update.
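A tiny sketch of that point (the ShippingCalculator class and its pricing rule below are invented, written in TypeScript): because callers can only go through costFor(), the formula can change in exactly one place.

```typescript
class ShippingCalculator {
  // Implementation detail: invisible to callers, free to change.
  private ratePerKg = 4.5;

  // The API callers depend on; discounts, price bands, etc. would change only this method.
  costFor(weightKg: number): number {
    return Math.max(2.0, weightKg * this.ratePerKg);
  }
}

// Callers depend on the API, not on how the number is produced.
const shipping = new ShippingCalculator();
console.log(shipping.costFor(3)); // 13.5
```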
Encapsulation is not only creating "getters" and "setters", but also exposing a sort of API to access the data (if needed).
Encapsulation lets you keep access to the data in one place and allows you to manage it in a more "abstract" way, reducing errors and making your code more maintainable.
If your personal projects are simple and small, you can do whatever you like in order to produce what you need quickly, but bear in mind the consequences ;)
I don't think unnecessary data access is something only third-party developers can cause. It can happen to you as well, right? When you allow direct access to data through the access rights on variables/properties, whoever is working with that code, be it you or someone else, may end up creating bugs by accessing the data directly.

My jasmine tests are far too brittle

I've been Googling this all afternoon but have been struggling to find a viable solution to my problem.
Basically, I've started some Angular development and have a controller of about 700 lines. My spec file is around 2100 lines and I have 100% code coverage. I've been finding that any time I have to change the controller, I have to fix about 5 unit tests. They're not finding me bugs.
I don't use a TDD methodology; maybe this is the problem? The code gets written and then the tests get written.
I ask because everywhere I read online, the general consensus is that unit testing is great. I'm just not seeing any value in them at the minute, and they're costing me too much time.
If anyone has any advice, I'd greatly appreciate it.
Separate your concerns
A 700-line controller and a 2,100-line spec file pretty much mean that you are paying little attention to the separation of concerns principle (or the single responsibility principle).
This is a common problem in Angular, where developers tend to pollute the scope and controllers rather than allocating responsibilities to services and directives. A good practice is to have controllers initialise the scope and provide event handlers, but most of the logic should live in services, unless it is specific to the controller's actual view (reusable view logic goes in directives, reusable 'business' logic goes in services).
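As a rough sketch of that split (the cartService/CartController names are invented, an 'app' module is assumed to exist, and TypeScript is used for illustration), the 'business' logic sits in a service while the controller only initialises the scope and wires up an event handler:

```typescript
declare const angular: any; // assumes AngularJS is loaded globally

// Reusable 'business' logic: trivially unit-testable, no scope involved.
class CartService {
  total(items: Array<{ price: number; qty: number }>): number {
    return items.reduce((sum, item) => sum + item.price * item.qty, 0);
  }
}

angular.module('app')
  .service('cartService', CartService)
  .controller('CartController', ['$scope', 'cartService',
    function ($scope: any, cartService: CartService) {
      $scope.items = [];
      $scope.total = 0;
      // The controller only handles the event; the calculation lives in the service.
      $scope.recalculate = () => {
        $scope.total = cartService.total($scope.items);
      };
    }]);
```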
In turn, this often translates to a high level of dependency between units, which is evident in the fact that five tests fail on a single change. In theory, proper unit testing means that a single change should only fail a single test; this is hardly the case in practice, since we are too lazy to mock all the dependencies of a unit, but it's not a big deal, because the dependent failing tests will pass again once we fix the problematic code.
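Continuing the hypothetical sketch above, a Jasmine spec can isolate the controller by stubbing the service with a spy object (this assumes angular-mocks is loaded); a change inside the service then only fails the service's own specs:

```typescript
// Jasmine and angular-mocks globals, declared here only so the sketch is self-contained.
declare const describe: any, it: any, expect: any, beforeEach: any, inject: any, module: any;
declare const jasmine: any;

describe('CartController', () => {
  let $scope: any;
  let cartService: any;

  beforeEach(module('app'));

  beforeEach(inject(($controller: any, $rootScope: any) => {
    $scope = $rootScope.$new();
    // Stub the dependency: the controller spec doesn't care how total() is computed.
    cartService = jasmine.createSpyObj('cartService', ['total']);
    cartService.total.and.returnValue(42);
    $controller('CartController', { $scope: $scope, cartService: cartService });
  }));

  it('should put the service result on the scope when recalculating', () => {
    $scope.recalculate();
    expect(cartService.total).toHaveBeenCalledWith($scope.items);
    expect($scope.total).toBe(42);
  });
});
```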
Non-TDD tests
Having tests without following TDD is better than not having tests at all, but it is far from ideal.
Tests generally serve 3 purposes (the 3 Ds):
Design - By writing tests first, you define what the unit should do and only then implement it.
Defence - By writing tests, you ensure that you can change the code, but not its behaviour.
Documentation - Tests serve as documentation of what units should do: 'It should ...'.
Unless the tests are written as part of TDD, you are missing the design bit and most of the documentation bit.
Writing tests as an afterthought often doesn't ensure that the code works correctly, only that it works the way it currently works. With post-coding tests it is very tempting to give a function a certain input, let the test fail, look at what the actual output was, and then make that output the expected output - it doesn't mean that the output is correct, just that it is what it is. This doesn't happen when you write the tests first.
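A small, hypothetical illustration of the design and documentation points (the VAT rule and priceWithVat() are made up): the expected values below are worked out from the requirement before the implementation exists, not copied from whatever the code happens to return.

```typescript
declare const describe: any, it: any, expect: any; // Jasmine globals

// Written first, straight from the requirement: 'prices are shown including 20% VAT,
// rounded to the nearest cent'.
describe('priceWithVat', () => {
  it('should add 20% VAT and round to the nearest cent', () => {
    expect(priceWithVat(10.0)).toBe(12.0);
    expect(priceWithVat(0.99)).toBe(1.19); // 0.99 * 1.2 = 1.188, rounds to 1.19
  });
});

// Implemented afterwards, to satisfy the spec above.
function priceWithVat(net: number): number {
  return Math.round(net * 1.2 * 100) / 100;
}
```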

Writing Angular Unit Tests After the App is Written?

I've inherited a medium-sized Angular app. It looks pretty well organized and well written, but there's no documentation and there are no unit tests.
I'm going to make an effort to write unit tests retroactively and eventually work in e2e tests and documentation via ngdoc.
I'm wondering what the best approach to writing unit tests after the fact is. Would you start with services and factories, then directives, etc., or take some other strategy? I plan on using Jasmine as my testing framework.
I should also mention that I've only been with the code for a few days so I'm not 100% sure how everything ties together yet.
At the end of the day what you need to know is that your software works correctly, consistently, and within business constraints. That's all that matters, and TDD is just a tool to help get you there. So, that's the frame of mind from which I'm approaching this answer.
If there are any known bugs, I'd start there. Cover the current, or intended, functionality with tests and work to fix the bugs as you go.
After that, or if there aren't any currently known bugs, then I'd worry about adding tests as you begin to maintain the code and make changes. Add tests to cover the current, correct functionality to make sure you don't break it in unexpected ways.
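For instance, here is a sketch of pinning down a (hypothetical) rounding bug before fixing it; the spec states the intended behaviour, fails against the buggy code, and then guards the fix afterwards:

```typescript
declare const describe: any, it: any, expect: any; // Jasmine globals

// Stand-in for the unit under test; imagine the buggy version used Math.floor here.
function roundToCents(amount: number): number {
  return Math.round(amount * 100) / 100;
}

describe('roundToCents (regression: totals were being rounded down)', () => {
  it('should round half a cent up, not down', () => {
    // The buggy Math.floor version returned 0.12 here.
    expect(roundToCents(0.125)).toBe(0.13);
  });
});
```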
In general, writing tests to cover things that appear to be working, just so that you can have test coverage, won't be a good use of time. While it might feel good, the point of tests is to tell you when something is wrong. So, if you already have working code and you never change it, then writing tests to cover it won't make the code any less buggy. Going over the code by hand might uncover as yet undiscovered bugs, but that has nothing to do with TDD.
That doesn't mean that writing tests after the fact should never be done, but exhaustive tests after the fact seem like overkill.
And if none of that advice applies to your particular situation but you want to add tests anyway, then start with the most critical/dangerous parts of the code - the pieces where, if something goes wrong, you're going to be especially screwed - and make sure those sections are rock-solid.
I was recently in a similar situation with an AngularJS app. Apparently, one of the main rules of AngularJS development is TDD Always. We didn't learn about this until later though, after development had been ongoing for more than six months.
I did try adding some tests later on, and it was difficult. The most difficult aspect is that your code is less likely to be written in a way that's easily testable. This means a lot of refactoring (or rewriting) is in order.
Obviously, you won't have a lot of time to spend reverse engineering everything to add in tests, so I'd suggest following these guidelines:
Identify the most problematic areas of the application. This is the stuff that always seems to break whenever someone makes a change.
Order this list by importance, so that the most important components are at the top of your list and the lesser items are at the bottom.
Next, within each level of importance, order the items from least to most complex.
Start adding tests in slowly, from the top of your list.
You want to start with the most important yet least complex items so you can get some easy wins. Things with a lot of dependencies may also need to be refactored in the process, so components that depend on less should be easier to refactor and test.
In the end, it seems best to start with unit testing from the beginning, but when that's not possible, I've found this approach to be helpful.

Caching TaskScheduler retrieved FromCurrentSynchronizationContext in WPF app

Does it make sense to store/cache the TaskScheduler returned by TaskScheduler.FromCurrentSynchronizationContext while loading a WPF app and use it everywhere from there on? Are there any drawbacks to this kind of usage?
What I mean by caching is storing a reference to the TaskScheduler in a singleton and making it available to all parts of my app, probably with the help of a DI/IoC container or, worst case, in a bare ol' singleton.
As Drew says, there's no performance benefit to doing this. But there might be other reasons to hold onto a TaskScheduler.
One reason you might want to do it is that by the time you need a TaskScheduler it may be too late to call FromCurrentSynchronizationContext because you may no longer be able to be certain that you are in the right context. (E.g., perhaps you can guarantee that your constructor runs in the right context, but you have no guarantees about when any of the other methods of your class are called.)
Since the only way to obtain a TaskScheduler for a SynchronizationContext is through the FromCurrentSynchronizationContext method, you would need to store a reference to the TaskScheduler itself, rather than just grabbing SynchronizationContext.Current during your constructor. But I'd probably avoid calling this "caching" because that word implies that you're doing it for performance reasons, when in fact you'd be doing it out of necessity.
Another possibility is that you might have code that has no business knowing which particular TaskScheduler it is using, but which still needs to use a scheduler because it fires off new tasks. (If you start new tasks, you're choosing a scheduler even if you don't realise it. If you don't explicitly choose which scheduler to use, you'll use the default one, which isn't always the right thing to do.) I've written code where this is the case: methods that accept a TaskScheduler object as an argument and use it. So this is another scenario where you might want to keep hold of a reference to a scheduler. (I was doing this because I wanted certain IO operations to happen on a particular thread, so I was using a custom scheduler.)
Having said all that, an application-wide singleton doesn't sound like a great idea to me, because it tends to make testing harder. And it also implies that the code grabbing that shared scheduler is making assumptions about which scheduler it should be using, and that might be a code smell.
The underlying implementation of FromCurrentSynchronizationContext just instantiates an internal class named SynchronizationContextTaskScheduler, which is extremely lightweight. All it does is cache the SynchronizationContext it finds when constructed; its QueueTask implementation then simply does a Post to that SynchronizationContext to execute the Task.
So, all that said, I would not bother caching these instances at all.

Is there a framework for running unit tests on Apache C modules?

I am about to make some changes to an existing Apache C module to fix some possible security flaws and general bad practices. However, the functionality of the code must remain unchanged (except where it's fixing a bug). Standard regression-testing stuff seems to be in order. I would like to know if anyone knows of a good way to run some regression unit tests against the code. I'm thinking of something along the lines of using CUnit, but with all the tie-ins to the Apache APR and status structures, I was wondering whether there is a good way to test this. Are there any pre-built frameworks that can be used with CUnit, for example?
Thanks
Peter
I've been thinking of answering this for a while, but figured someone else might come up with a better answer, because mine is rather unsatisfactory: no, I'm not aware of any such unit testing framework.
I think your best bet is to try to refactor your C module so that its dependencies on the httpd code base are contained in a very thin glue layer. I wouldn't worry too much about dependencies on APR; that can easily be linked into your unit test code. It's things like use of the request record that you should try to abstract out a little bit.
I'll go as far as to suggest that such a refactoring is a good idea if the code is suspected to contain security flaws and bad practices. It's just usually a pretty big job.
What you might also consider is running integration tests rather than unit tests (ideally both), i.e. come up with a set of requests and expected responses from the server, and run a program that compares the actual responses to the expected ones.
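A rough sketch of what such a comparison program could look like (TypeScript on Node 18+ is used here purely for illustration; the base URL, paths, and expected responses are all made up):

```typescript
// Replays a fixed set of requests against a test instance of the server and
// reports any response that doesn't match expectations.
const baseUrl = 'http://localhost:8080'; // test instance with the module loaded

const cases: Array<{ path: string; status: number; bodyContains: string }> = [
  { path: '/healthz', status: 200, bodyContains: 'OK' },
  { path: '/missing', status: 404, bodyContains: 'Not Found' },
];

async function run(): Promise<void> {
  let failures = 0;
  for (const c of cases) {
    const res = await fetch(baseUrl + c.path);
    const body = await res.text();
    if (res.status === c.status && body.includes(c.bodyContains)) {
      console.log(`PASS ${c.path}`);
    } else {
      failures += 1;
      console.error(`FAIL ${c.path}: got ${res.status}, body starts with ${JSON.stringify(body.slice(0, 80))}`);
    }
  }
  process.exit(failures === 0 ? 0 : 1);
}

run();
```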
So, probably not the response you were looking for, and you may well have thought of something along these lines yourself. But at least I can tell you from experience that if the module can't be replaced with something new for business reasons, then refactoring it for testability will likely pay off in the longer term.
I spent some time looking around the interwebs for you, as it's a question I was curious about myself. I came across a wiki article stating that CuTest (http://cutest.sourceforge.net/) was used for testing the Apache Portable Runtime. It might be worth checking that out.
