Writing Angular Unit Tests After the App is Written? - angularjs

I've inherited a medium sized angular app. It looks pretty well organized and well written but there's no documentation or unit tests implemented.
I'm going to make an effort to write unit tests retroactively, and eventually work in e2e tests and documentation via ngdoc.
I'm wondering what the best approach to writing unit tests is after the fact. Would you start with services and factories, then directives etc or some other strategy? I plan on using Jasmine as my testing framework.
I should also mention that I've only been with the code for a few days so I'm not 100% sure how everything ties together yet.

At the end of the day what you need to know is that your software works correctly, consistently, and within business constraints. That's all that matters, and TDD is just a tool to help get you there. So, that's the frame of mind from which I'm approaching this answer.
If there are any known bugs, I'd start there. Cover the current, or intended, functionality with tests and work to fix the bugs as you go.
After that, or if there aren't any currently known bugs, then I'd worry about adding tests as you begin to maintain the code and make changes. Add tests to cover the current, correct functionality to make sure you don't break it in unexpected ways.
In general, writing tests to cover things that appear to be working, just so that you can have test coverage, won't be a good use of time. While it might feel good, the point of tests is to tell you when something is wrong. So, if you already have working code and you never change it, then writing tests to cover it won't make the code any less buggy. Going over the code by hand might uncover as yet undiscovered bugs, but that has nothing to do with TDD.
That doesn't mean that writing tests after the fact should never be done, but exhaustive tests after the fact seems a bit overkill.
And if none of that advice applies to your particular situation, but you want to add tests anyway, then start with the most critical/dangerous parts of the code - the pieces that, if something goes wrong you're going to be especially screwed, and make sure those sections are rock-solid.
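One low-risk way to cover those critical sections is a characterization ("pin-down") test: record what the code does today so that future edits can't change it silently. A minimal sketch, where `applyDiscount` is a hypothetical stand-in for your own business-critical logic:

```javascript
// Characterization test sketch: the assertions record current behaviour,
// so any change that alters it will fail loudly.
// `applyDiscount` is a made-up example, not from the asker's code base.
function applyDiscount(total, customerTier) {
  // existing production logic we do not want to break
  if (customerTier === 'gold') return total * 0.8;
  if (customerTier === 'silver') return total * 0.9;
  return total;
}

// Pin down the current behaviour before touching anything.
console.assert(applyDiscount(100, 'gold') === 80, 'gold discount changed');
console.assert(applyDiscount(100, 'silver') === 90, 'silver discount changed');
console.assert(applyDiscount(100, 'bronze') === 100, 'default path changed');
```

In Jasmine the same assertions would become `expect(applyDiscount(100, 'gold')).toBe(80)` inside an `it()` block; the point is only that the expected values come from observing the running system, not from a spec.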

I was recently in a similar situation with an AngularJS app. Apparently, one of the main rules of AngularJS development is TDD Always. We didn't learn about this until later though, after development had been ongoing for more than six months.
I did try adding some tests later on, and it was difficult. The most difficult aspect is that your code is less likely to be written in a way that's easily testable. This means a lot of refactoring (or rewriting) is in order.
Obviously, you won't have a lot of time to spend reverse engineering everything to add in tests, so I'd suggest following these guidelines:
Identify the most problematic areas of the application. This is the stuff that always seems to break whenever someone makes a change.
Order this list by importance, so that the most important components are at the top of your list and the lesser items are at the bottom.
Next, order by items that are less complex.
Start adding tests in slowly, from the top of your list.
You want to start with the most important yet least complex so you can get some easy wins. Things with a lot of dependencies may also need to be refactored in the process, so things that are already dependent on less should be easier to refactor and test.
In the end, it seems best to start with unit testing from the beginning, but when that's not possible, I've found this approach to be helpful.
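The refactoring step usually means pulling logic out of a fat controller into a plain unit that needs no DOM and no `$scope`. A hypothetical sketch (in Angular this object would be registered as a service):

```javascript
// Logic extracted from a controller into a plain, dependency-free object,
// so it can be unit tested in isolation. All names are illustrative.
function createOrderService() {
  return {
    // pure calculation: trivial to test, no Angular plumbing required
    total(items) {
      return items.reduce((sum, item) => sum + item.price * item.qty, 0);
    }
  };
}

const orders = createOrderService();
console.assert(orders.total([{ price: 5, qty: 2 }, { price: 1, qty: 3 }]) === 13);
console.assert(orders.total([]) === 0);
```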

Related

AngularJS: same logic but different HTML forms & database for different projects/clients

The app which I am developing will have the same functionality for all users/clients/projects (call 'em what you will).
However, the HTML forms presented to the user and the AJAX used to send them to the server will vary for each project.
I was thinking of using Angular constants, with ng-show / ng-hide (maybe even ng-if) on the HTML and a switch in the controller, based on a constant for the AJAX send & receive.
Is this a good approach? I can see things getting complex with more than a handful of projects. Should I use a different view/controller for each project? I might lose out on some common code that way, but it's less likely to become spaghetti.
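For reference, the constant-driven setup described in the question might look roughly like this; the project names and fields are invented (in Angular the key would come from `module.constant(...)` and the flags would drive `ng-if`/`ng-show`):

```javascript
// Hypothetical per-project configuration keyed by an Angular-style constant.
const PROJECT_CONFIG = {
  acme:   { showVatField: true,  submitUrl: '/api/acme/orders' },
  globex: { showVatField: false, submitUrl: '/api/globex/orders' }
};

function getConfig(project) {
  const cfg = PROJECT_CONFIG[project];
  if (!cfg) throw new Error('Unknown project: ' + project);
  return cfg;
}

// The view would bind cfg.showVatField to ng-if; the controller would
// POST to cfg.submitUrl instead of switching on the project name inline.
console.assert(getConfig('acme').showVatField === true);
console.assert(getConfig('globex').submitUrl === '/api/globex/orders');
```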
I would suggest taking a domain/(test) driven approach. Don't generalize too much code up front.
Building generalized code creates dependencies, each of which is a candidate for future refactoring, even in the case of simple changes. Nothing is more time-consuming or frustrating than small modifications that cause an avalanche. I've seen a lot of projects run out of time because the code base was over-engineered at the start.
My approach, especially for more complex projects with no clear overview of overlapping functionality, is to start designing/building the functionalities separately from each other, in small steps. As in any agile workflow, deliver a complete working feature (a working form); when you're working on a feature and notice shared functionality you built earlier, make a (wise) decision to refactor/promote the existing code into generalized code. At that stage you'll be in a better position to make such a judgement. If you've taken the test-driven approach (which I highly recommend), refactoring the existing code can be done without too much effort.
Working this way gives a greater guarantee of delivering on time, and of ending up with readable, optimised code.
TL;DR
It all comes down to common sense.

My jasmine tests are far too brittle

I've been Googling this all afternoon but have been struggling to find a viable solution to my problem.
Basically, I've started some Angular development and have a controller of about 700 lines. My spec file is around 2100 lines and I have 100% code coverage. I've been finding that any time I have to change the controller, I have to fix about 5 unit tests. They're not finding me bugs.
I don't use a TDD methodology, maybe this is the problem? The code gets written and then the tests get written.
I ask because everywhere I read online, the general consensus is that unit testing is great. I'm just not seeing any value in the tests at the moment, and they're costing me too much time.
If anyone has any advice, I'd greatly appreciate it.
Separate your concerns
A 700-line controller and a 2,100-line spec file pretty much mean that you're paying little attention to the separation of concerns principle (or the single responsibility principle).
This is a common problem in Angular, where developers tend to pollute the scope and controllers rather than allocating responsibilities to services and directives. A good practice is to have controllers initiate the scope and provide event handlers, but most of the logic should reside in services, unless it is specific to the actual view of the controller (reusable view logic goes in directives, reusable 'business' logic goes in services).
In turn, this often translates to a high level of dependencies, which is evident in the fact that 5 tests fail with a single change. In theory, with proper unit testing a single change should only fail a single test; in practice this is hardly the case, since we are lazy about mocking all the dependencies of a unit, but it's not a big deal, since the dependent failing tests will pass once we fix the problematic code.
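The mocking point above can be sketched with plain objects: when logic lives in a small service, its collaborators are replaced with hand-rolled mocks, so a change in one unit only breaks that unit's tests. All names here are hypothetical:

```javascript
// A small service whose only collaborator (auditLog) is injected,
// so tests can substitute a mock for it.
function createUserService(auditLog) {
  return {
    rename(user, newName) {
      user.name = newName;
      auditLog.record('rename', user.id); // dependency, mocked in tests
      return user;
    }
  };
}

// Hand-rolled mock standing in for the real audit log.
const calls = [];
const mockAudit = { record: (action, id) => calls.push([action, id]) };

const svc = createUserService(mockAudit);
const user = svc.rename({ id: 7, name: 'old' }, 'new');
console.assert(user.name === 'new');
console.assert(calls.length === 1 && calls[0][0] === 'rename');
```

In Jasmine, `mockAudit` would typically be `jasmine.createSpyObj('auditLog', ['record'])`, asserted with `expect(auditLog.record).toHaveBeenCalledWith('rename', 7)`.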
Non TDD tests
Having tests but not following TDD is better than not having tests at all, but it's far from ideal.
Tests generally serve 3 purposes (the 3 Ds):
Design - By writing tests first, you first define what the unit should do, only then implement it.
Defence - By writing tests, you ensure that you can change the code, but not its behaviour.
Documentation - Tests serve as documentation to what units should do: 'It should ...'.
Unless your tests are part of TDD, you are missing the design bit and most of the documentation bit.
Writing tests as an afterthought often doesn't ensure that the code works correctly, but rather that it works as it currently does. It is very tempting in post-coding tests to feed a function a certain input, let the test fail, look at the actual output, and then make that output the expected output; it doesn't mean the output is correct, just that it is what it is. This doesn't happen when you write tests first.

When to mock database access

What I've done many times when testing database calls is set up a database, open a transaction and roll it back at the end. I've even used an in-memory SQLite database that I create and destroy around each test. This works and is relatively quick.
My question is: should I mock the database calls, should I use the technique above, or should I use both - one for unit tests, one for integration tests (which, to me at least, seems like double work)?
The problem is that if you use your technique of setting up a database, opening transactions and rolling back, your unit tests will rely on the database service, connections, transactions, the network, and so on. If you mock these out, there is no dependency on other pieces of code in your application and no external factors influencing your unit-test results.
The goal of a unit test is to test the smallest testable piece of code without involving other application logic. This cannot be achieved when using your technique IMO.
Making your code testable by abstracting your data layer, is a good practice. It will make your code more robust and easier to maintain. If you implement a repository pattern, mocking out your database calls is fairly easy.
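A minimal sketch of the repository idea, assuming nothing about your stack (all names are illustrative): the data layer hides behind a small interface, so a unit test can swap in an in-memory fake instead of a real database.

```javascript
// Service code depends only on the repository interface, never on the DB.
function createAccountService(repo) {
  return {
    deposit(id, amount) {
      const acct = repo.findById(id);   // would hit the database in production
      acct.balance += amount;
      repo.save(acct);
      return acct.balance;
    }
  };
}

// In-memory fake repository used only in tests.
function fakeRepo(seedRows) {
  const rows = new Map(seedRows.map(r => [r.id, { ...r }]));
  return {
    findById: id => rows.get(id),
    save: acct => { rows.set(acct.id, acct); }
  };
}

const svc = createAccountService(fakeRepo([{ id: 1, balance: 50 }]));
console.assert(svc.deposit(1, 25) === 75);
```

The production implementation of the repository is then exercised separately by the transaction-and-rollback integration tests you already have.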
Also, unit tests and integration tests serve different needs. Unit tests prove that a piece of code works technically and catch corner cases.
Integration tests verify the interfaces between components against a software design. Unit tests alone cannot verify the functionality of a piece of software.
HTH
All I have to add to @Stephane's answer is: it depends on how you fit unit testing into your own personal development practices. If you've got end-to-end integration tests involving a real database which you create and tidy up as needed - provided you've covered all the different paths through your code and the various eventualities which could occur with your users hacking post data, etc. - you're covered from the point of view of your tests telling you whether your system is working, which is probably the main reason for having tests.
I would guess though that having each of your tests run through every layer of your system makes test-driven development very difficult. Needing every layer in place and working in order for a test to pass pretty much excludes spending a few minutes writing a test, a few minutes making it pass, and repeating. This means your tests can't guide you in terms of how individual components behave and interact; your tests won't force you to make things loosely-coupled, for example. Also, say you add a new feature and something breaks elsewhere; granular tests which run against components in isolation make tracking down what went wrong much easier.
For these reasons I'd say it's worth the "double work" of creating and maintaining both integration and unit tests, with your DAL mocked or stubbed in the latter.

Is there a framework for running unit tests on Apache C modules?

I am about to make some changes to an existing Apache C module to fix some possible security flaws and general bad practices. However, the functionality of the code must remain unchanged (except where it's fixing a bug), so standard regression testing stuff seems to be in order. I would like to know if anyone knows of a good way to run regression unit tests against the code. I'm thinking of something along the lines of CUnit, but with all the tie-ins to the Apache APR and status structures, I was wondering if there is a good way to test this. Are there any pre-built frameworks that can be used with CUnit, for example?
Thanks
Peter
I've been thinking of answering this for a while, but figured someone else might come up with a better answer, because mine is rather unsatisfactory: no, I'm not aware of any such unit testing framework.
I think your best bet is to try to refactor your C module so that its dependencies on the httpd code base are contained in a very thin glue layer. I wouldn't worry too much about dependencies on APR; that can easily be linked into your unit test code. It's things like using the request record that you should try to abstract out a little bit.
I'll go so far as to suggest that such a refactoring is a good idea anyway if the code is suspected to contain security flaws and bad practices. It's just usually a pretty big job.
What you might also consider is running integration tests rather than unit tests (ideally both), i.e. come up with a set of requests and expected responses from the server, and run a program to compare the actual to expected responses.
So, not the response you've been looking for, and you probably thought of something along this line yourself. But at least I can tell you from experience that if the module can't be replaced with something new for business reasons, then refactoring it for testability will likely pay off in the longer term.
I spent some time looking around the web for you, as it's a question I was curious about myself. I came across a wiki article stating that
http://cutest.sourceforge.net/
was used for testing the Apache Portable Runtime. Might be worth checking out.

Does TDD apply well when developing an UI?

What are your opinions and experiences regarding using TDD when developing a user interface?
I have been pondering about this question for some time now and just can't reach a final decision. We are about to start a Silverlight project, and I checked out the Microsoft Silverlight Unit Test Framework with TDD in mind, but I am not sure how to apply the approach to UI-development in general - or to Silverlight in particular.
EDIT:
The question is about whether it is practical to use TDD for UI development, not about how to do separation of concerns.
Trying to test the exact placement of UI components is pointless. First, because layout is subjective and should be "tested" by humans; second, because as the UI changes you'll be constantly rewriting your tests.
Similarly, don't test the GUI components themselves, unless you're writing new components. Trust the framework to do its job.
Instead, you should be testing the behavior that underlies those components: the controllers and models that make up your application. Using TDD in this case drives you toward a separation of concerns, so that your model is truly a data management object and your controller is truly a behavior object, and neither of them are tightly coupled to the UI.
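A sketch of what that looks like in practice, with a made-up `CounterModel` standing in for your model object: the test exercises behavior and observable state, and no UI framework is involved at all.

```javascript
// A plain model object: the UI would bind to value() and subscribe via
// onChange, but nothing here knows about widgets or layout.
function createCounterModel() {
  let count = 0;
  const listeners = [];
  return {
    increment() {
      count += 1;
      listeners.forEach(fn => fn(count));  // notify any bound views
    },
    value: () => count,
    onChange: fn => listeners.push(fn)
  };
}

const model = createCounterModel();
let observed = null;
model.onChange(v => { observed = v; });   // a test double for the view
model.increment();
console.assert(model.value() === 1 && observed === 1);
```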
I look at TDD from a UI perspective more in terms of the bare acceptance criteria the UI has to pass. In some circles, this is being labeled ATDD, or Acceptance Test Driven Development.
The biggest over-engineering I have found in using TDD for UIs was when I got all excited about using automated tests for look-and-feel issues. My advice: don't! Focus on testing the behavior: this click produces these events, this data is available or displayed (but not how it's displayed). Look and feel really is the domain of your independent testing team.
The key is to focus your energy on "High Value Add" activities. Automated style tests are more of a debt (keeping them up to date) than a value add.
If you separate your logic from the actual GUI code, you can easily use TDD to build the logic, and it will be a lot easier to build another interface on top of it if you ever need to.
I can't speak to Microsoft Silverlight, but I never use TDD for any kind of layout issue; it's just not worth the time. What works well is using unit testing to check any kind of wiring, validation and UI logic that you implemented. Most systems provide programmatic access to the actions the user takes, and you can use these to assert that your expectations are correctly met. E.g. calling the click() method on a button should execute whatever code you intended; selecting an item in a list view should update the other UI elements with that item's properties.
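The click() idea above can be sketched without any real widget toolkit: the button is a tiny fake exposing click(), and the test asserts that the intended handler ran. Everything here is hypothetical scaffolding, not a specific framework's API.

```javascript
// Minimal fake button: enough surface to test wiring, nothing visual.
function fakeButton() {
  let handler = null;
  return {
    onClick: fn => { handler = fn; },
    click: () => { if (handler) handler(); }  // what toolkit click() methods do
  };
}

let saved = false;
const saveButton = fakeButton();
saveButton.onClick(() => { saved = true; });  // the wiring under test
saveButton.click();
console.assert(saved === true, 'click should trigger the save handler');
```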
Based on your edit, here's a little more detail about how we do it on my current team. I'm doing Java with GWT, so the application to Silverlight might be a bit off.
Requirement or bug comes in to the developer. If there is a UI change (L&F) we do a quick mock up of the UI change and send it over to the product owner for approval. While we are waiting on that, we start the TDD process.
We start with at least one of either a web test (using Selenium to drive user clicks in a browser) or a "headless" functional test using Concordion, FiT or something like it. Once that's done and failing, we have a high-level vision of where to attack the underlying services in order to make the system work right.
Next step is to dig down and write some failing unit and integration tests (I think of unit tests as stand-alone, no dependencies, no data, etc. Integration tests are fully wired tests that read/write to the database, etc.)
Then, I make it work from bottom up. Sounds like your TDD background will let you extrapolate the benefits here. Refactor on the way up as well....
I think this blog post by Ayende Rahien answers my question nicely using a pragmatic and sound approach. Here are a few quotes from the post:
Testing UI, for example, is a common place where it is just not worth the time and effort.
...
Code quality, flexibility and the ability to change are other things that are often attributed to tests. They certainly help, but they are by no means the only (or even the best) way to approach that.
Tests should only be used when they add value to the project, without becoming the primary focus. I am finally quite certain that using test-driven development for the UI can quickly become the source of much work that is simply not worth it.
Note that it seems the post is mainly about testing AFTER things have been built, not BEFORE (as in TDD) - but I think the following golden rule still applies: the most important things deserve the greatest effort, and less important things deserve less effort. Having a unit-tested UI is often not THAT important, and as Ayende writes, the benefit of using TDD as a development model is probably not so great - especially when you consider that developing a UI is normally a top-down process.
GUIs by their very nature are difficult to test, so as Brian Rasmussen suggests, keep the dialog box code separate from the GUI code.
This is the Humble Dialog Box Pattern.
For example, you may have a dialog box where details (e.g. credit card number) are input and you need to verify them. For this case you would put the code that checks the credit card number with the Luhn algorithm into a separate object which you test. (The algorithm in question just tests if the number is plausible - it's designed for checking for transcription errors.)
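The Luhn check mentioned above is a nice example of logic worth extracting: it is a pure function, so it can be unit tested with no dialog box in sight. A sketch:

```javascript
// Luhn plausibility check: doubles every second digit from the right,
// subtracts 9 from any result above 9, and requires the sum to be
// divisible by 10. Catches most transcription errors, not fraud.
function luhnValid(number) {
  const digits = String(number).replace(/\D/g, '').split('').reverse();
  if (digits.length === 0) return false;
  const sum = digits.reduce((acc, d, i) => {
    let n = Number(d);
    if (i % 2 === 1) {              // every second digit from the right
      n *= 2;
      if (n > 9) n -= 9;
    }
    return acc + n;
  }, 0);
  return sum % 10 === 0;
}

console.assert(luhnValid('79927398713') === true);   // classic valid example
console.assert(luhnValid('79927398714') === false);  // one digit off
```

The dialog box then only needs to call `luhnValid` and show or hide an error message, which is exactly the kind of thin glue the Humble Dialog Box pattern leaves untested.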
At my workplace we use TDD, and we actually unit test our UI code (for a web application) thanks to Apache Wicket's WicketTester. But we're not testing that some descriptive label comes before a text field or anything like that; instead we test that the hierarchy of the components is roughly correct ("the label is in the same subset as the text field") and that the components are what they're supposed to be ("this label really is a label").
Like others have already said, it's up to a real person to determine how those components are placed in the UI, not the programmer, especially since programmers have a tendency to build all-in-one/obscure command-line tools instead of UIs that are easy to use.
Test-Driven Development lends itself more to developing code than for developing user-interfaces. There are a few different ways in which TDD is performed, but the preferred way of true TDD is to write your tests first, then write the code to pass the tests. This is done iteratively throughout development.
Personally, I'm unsure of how you would go about performing TDD for UIs; however, the team that I am on performs automated simulation tests of our UIs. Basically, we have a suite of simulations that are run every hour on the most recent build of the application. These tests perform common actions and verify that certain elements, phrases, dialogs, etc., properly occur based on, say, a set of use cases.
Of course, this does have its disadvantages. The downside to this is that the simulations are locked into the code representing the case. It leaves little room for variance and is basically saying that it expects the user to do exactly this behavior with respect to this feature.
Some testing is better than no testing, but it could be better.
Yes, you can use TDD with great effect for GUI testing of web apps.
When testing GUI's you typically use stub/fake data that lets you test all the different state changes in your gui. You must separate your business logic from your gui, because in this case you will want to mock out your business logic.
This is really neat for catching the things testers always forget to click on; they get test blindness too!
