My Jasmine tests are far too brittle - AngularJS

I've been Googling this all afternoon but have been struggling to find a viable solution to my problem.
Basically, I've started some Angular development and have a controller of about 700 lines. My spec file is around 2100 lines and I have 100% code coverage. I've been finding that any time I have to change the controller, I have to fix about 5 unit tests. They're not finding me bugs.
I don't use a TDD methodology - maybe this is the problem? The code gets written, and then the tests get written.
I ask because anywhere I read online, the general consensus is that unit testing is great. I'm just not seeing any value in them at the minute, and they're costing me too much time.
If anyone has any advice, I'd greatly appreciate it.

Separate your concerns
A 700-line controller and a 2100-line spec file pretty much mean that you adhere little to the separation of concerns principle (or the single responsibility principle).
This is a common problem in Angular, where developers tend to pollute the scope and controllers, rather than allocating responsibilities to services and directives. A good practice is to have controllers initialise the scope and provide event handlers, but most of the logic should reside in services, unless it is specific to the actual view of the controller (reusable view logic goes in directives, reusable 'business' logic goes in services).
In turn, this often translates to a high level of dependency between units, which is evident in the fact that 5 tests fail for a single change. In theory, proper unit testing means that a single change should only fail a single test; this is hardly the case in practice, since we are lazy about mocking all the dependencies of a unit, but it's not a big deal, since the dependent failing tests will pass once we fix the problematic code.
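As a rough illustration of that split (a minimal sketch with hypothetical names like orderService and OrderCtrl, not taken from the question's code), the controller only initialises the scope and forwards events, while the logic lives in a service that can be specced in isolation:

```javascript
// A hypothetical service holding the 'business' logic - small enough to spec in isolation.
angular.module('app', [])
  .factory('orderService', function () {
    return {
      totalOf: function (items) {
        return items.reduce(function (sum, item) {
          return sum + item.price * item.quantity;
        }, 0);
      }
    };
  })
  // The controller only initialises the scope and forwards events to the service.
  .controller('OrderCtrl', function ($scope, orderService) {
    $scope.items = [];
    $scope.addItem = function (item) {
      $scope.items.push(item);
      $scope.total = orderService.totalOf($scope.items);
    };
  });
```

Specs against orderService then stay small and survive controller changes, because the controller spec only needs to check the wiring.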
Non TDD tests
Having tests but not following TDD is better than not having tests at all, but far from ideal.
Tests generally serve 3 purposes (the 3 Ds):
Design - By writing tests first, you first define what the unit should do, only then implement it.
Defence - By writing tests, you ensure that you can change the code, but not its behaviour.
Documentation - Tests serve as documentation to what units should do: 'It should ...'.
Unless your tests are part of TDD, you are missing the design bit and most of the documentation bit.
Writing tests as an afterthought often doesn't ensure the code is working right, but rather that it works the way it currently works. It is very tempting in post-coding tests to give a function a certain input, let the test fail, look at the actual output, and make that output the expected output - which doesn't mean the output is correct, just that it is what it is. This doesn't happen when you write tests first.
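To make that temptation concrete, here is a hedged sketch (formatPrice is a made-up helper, not from the question) contrasting the two mindsets - copying whatever the code happened to return versus stating the expected result up front:

```javascript
// Post-hoc temptation: run the code, copy whatever it produced into the assertion.
// The test now 'passes', but it only proves the function does what it does.
it('formats a price', function () {
  expect(formatPrice(1234.5)).toBe('1234.5$'); // copied from the observed output
});

// Test-first: decide the correct behaviour up front, then make the code satisfy it.
it('formats a price with two decimals and a leading currency symbol', function () {
  expect(formatPrice(1234.5)).toBe('$1,234.50');
});
```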

Related

Why do I need to test Redux implementations while I can just test the DOM using a testing library?

I can't understand the full process of testing. I am using a testing library and I feel comfortable with testing just texts and labels; I feel I don't need to test any implementation details in React or Redux, which is what I read in the React Testing Library documentation, since Enzyme forced people to always test React implementation details.
Now, if all texts, labels and displayed values are right in my tests, that means everything should be OK. But unfortunately, when I am using Redux, I am always forced to test Redux implementations (mocking the store, mocking reducers, etc.) and async behaviors like fetching data and so on.
Why do I need to test (and why am I forced to test) Redux implementations, when I can just test displayed values and passing tests will always indicate that Redux implementations work correctly in my project?
I can't understand the full process of testing
That's understandable. Testing and software quality assurance is a huge professional field (there's more to it than video game playtesting!)
I am using a testing library and I feel comfortable with testing just texts and labels; I feel I don't need to test any implementation details in React or Redux
This is the wrong attitude to have. What you're describing is a very non-rigorous, slapdash, haphazard - and, most importantly, shallow - kind of testing.
Just because data on the user's computer screen appears correct, it doesn't mean it actually is. How do you know you aren't actually interacting with a mocked UI or that the backend database actually contains the new data? Or that there weren't any negative side effects (such as the system deleting everything else - and yes, that happened to me once...).
By analogy, you're saying you'd be happy to fly in an airplane that was tested only by ensuring that moving the control yoke resulted in the on-screen airplane icon moving in the right direction - even if the plane is still firmly on the ground. I would not want to fly on that plane.
Now, if all texts, labels and displayed values are right in my tests, that means everything should be OK
No, it doesn't. See above.
but unfortunately, when I am using Redux, I am always forced to test Redux implementations (mocking the store, mocking reducers, etc.) and async behaviors like fetching data and so on.
Yes. You're being forced to do the right thing. It's called the Pit of Success. Don't fight it. It's for the best. These libraries, platforms and frameworks are designed by people with more experience in software design than both of us, so if they tell us to do something we should do what they say - if we disagree we need to formalize our objections and duke it out in GitHub issues with academic rigour, not Stack Overflow posts arguing that something's unnecessary because you just don't feel like it. With apologies for being blunt, but I hope you never work in a safety-critical industry or sector until your attitude changes, because I never want to see another Therac-25 incident - which was directly caused by people sharing your attitude towards software testing.
Why do I need to test (and why am I forced to test) Redux implementations, when I can just test displayed values and passing tests will always indicate that Redux implementations work correctly in my project?
Because what you're describing does not provide for anywhere close to full code-coverage.
Here's a bunch of assorted things to consider:
Software testing (and systems testing in general, in any field) can generally be lumped into these categories:
Unit testing: testing a single "unit" of your code in isolation from everything else.
(Side-note: many people are currently abusing unit-testing frameworks like xUnit and MSTest to implement what are actually integration tests, so many people don't understand the real difference between integration and unit tests, which is depressing...). A "unit" would be something like a single class or function, not an entire GUI application.
Your current testing strategy is not a unit test because you aren't testing anything in isolation: you have to fire up the entire application, including the React/Redux pipeline, the web server and an extremely complicated, multi-billion-dollar web-browser application.
Generally speaking: "if you need concrete dependencies (instead of fakes or mocks) to test something, it isn't a unit-test, it's an integration-test".
Integration testing: testing multiple components that interact with each other.
This is a rather abstract definition - but it can mean things like testing an application's business-logic code when it's coupled to a copy (!) of your production database. It could also include testing a business layer with a GUI attached to it, but GUI testing is not easily automated - so many, though not all, people would not consider what you're doing a unit test, especially as what you've described implies that your tests aren't checking for side effects or verifying other changes of state elsewhere in the system (such as the database or a backend web-service).
There are other types of tests besides unit and integration - but those two are the main types of fully automated tests, and every application should have good code-coverage from them. Do note that code-coverage does not imply exhaustiveness; achieving 100% code-coverage is often a waste of time if that code includes trivial implementations like boilerplate, parameter validation, or code generated by a tool that is itself very well tested.
Generally speaking: if a piece of code "is complicated" or changes regularly, it should have "good" (75%+? 80%? 90%?) code-coverage.
Because testing software via GUIs is very difficult (and brittle: the GUI is probably the part of any user-facing system that changes the most), GUIs are often not subjected to automated testing anywhere near as much as they should be - which is why it's important to ensure good coverage of the non-GUI parts with automated tests; this also reduces the amount of manual GUI testing that needs doing.
Finally, a big thing to consider with the Redux pattern in particular is that Redux is not specific to GUI applications. Theoretically, you should be able to take a Redux application, copy and paste it into a server-side Node.js application, hook it up to a virtual DOM and - hey presto - your application no longer requires client-side JavaScript to work! It also means you can get great code-coverage of your application just by using a special virtual DOM intended for testing rather than a real browser DOM - but your current approach will not work with this, because you're talking about verifying changes only to a real browser DOM, not a virtual DOM.
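A rough sketch of that testing-oriented virtual DOM approach, assuming Jest's jsdom environment, React Testing Library and @testing-library/jest-dom are set up (the App component and the counter reducer are hypothetical):

```javascript
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { Provider } from 'react-redux';
import { createStore } from 'redux';
import { counter } from './counter';
import App from './App';

test('clicking the button increments the displayed count', () => {
  // A real (not mocked) store wired to the real reducer, rendered into jsdom.
  const store = createStore(counter);
  render(
    <Provider store={store}>
      <App />
    </Provider>
  );

  fireEvent.click(screen.getByRole('button', { name: /increment/i }));

  // The assertion goes through the virtual DOM, not a real browser.
  expect(screen.getByText(/count: 1/i)).toBeInTheDocument();
});
```

The same test runs unchanged in a browserless CI environment, which is exactly the point of testing against a virtual DOM.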

Writing Angular Unit Tests After the App is Written?

I've inherited a medium sized angular app. It looks pretty well organized and well written but there's no documentation or unit tests implemented.
I'm going to make an effort to write unit tests retroactively and eventually work in e2e tests and documentation via ngdoc.
I'm wondering what the best approach to writing unit tests is after the fact. Would you start with services and factories, then directives etc or some other strategy? I plan on using Jasmine as my testing framework.
I should also mention that I've only been with the code for a few days so I'm not 100% sure how everything ties together yet.
At the end of the day what you need to know is that your software works correctly, consistently, and within business constraints. That's all that matters, and TDD is just a tool to help get you there. So, that's the frame of mind from which I'm approaching this answer.
If there are any known bugs, I'd start there. Cover the current, or intended, functionality with tests and work to fix the bugs as you go.
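In Jasmine terms, that usually means writing a failing spec that pins down the intended behaviour before touching the code - a minimal sketch using angular-mocks, with a hypothetical dateService and a hypothetical bug:

```javascript
describe('dateService', function () {
  var dateService;

  beforeEach(module('app'));
  beforeEach(inject(function (_dateService_) {
    dateService = _dateService_;
  }));

  // Known bug: the last day of the month is reported as belonging to the next month.
  // Write the spec for the *intended* behaviour first; it fails until the bug is fixed.
  it('keeps 31 January in January', function () {
    expect(dateService.monthOf('2014-01-31')).toBe(1);
  });
});
```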
After that, or if there aren't any currently known bugs, then I'd worry about adding tests as you begin to maintain the code and make changes. Add tests to cover the current, correct functionality to make sure you don't break it in unexpected ways.
In general, writing tests to cover things that appear to be working, just so that you can have test coverage, won't be a good use of time. While it might feel good, the point of tests is to tell you when something is wrong. So, if you already have working code and you never change it, then writing tests to cover it won't make the code any less buggy. Going over the code by hand might uncover as yet undiscovered bugs, but that has nothing to do with TDD.
That doesn't mean that writing tests after the fact should never be done, but exhaustive tests after the fact seems a bit overkill.
And if none of that advice applies to your particular situation, but you want to add tests anyway, then start with the most critical/dangerous parts of the code - the pieces where, if something goes wrong, you're going to be especially screwed - and make sure those sections are rock-solid.
I was recently in a similar situation with an AngularJS app. Apparently, one of the main rules of AngularJS development is TDD Always. We didn't learn about this until later though, after development had been ongoing for more than six months.
I did try adding some tests later on, and it was difficult. The most difficult aspect is that your code is less likely to be written in a way that's easily testable. This means a lot of refactoring (or rewriting) is in order.
Obviously, you won't have a lot of time to spend reverse engineering everything to add in tests, so I'd suggest following these guidelines:
Identify the most problematic areas of the application. This is the stuff that always seems to break whenever someone makes a change.
Order this list by importance, so that the most important components are at the top of your list and the lesser items are at the bottom.
Next, within that ordering, favour items that are less complex.
Start adding tests in slowly, from the top of your list.
You want to start with the most important yet least complex so you can get some easy wins. Things with a lot of dependencies may also need to be refactored in the process, so things that are already dependent on less should be easier to refactor and test.
In the end, it seems best to start with unit testing from the beginning, but when that's not possible, I've found this approach to be helpful.

When to mock database access

What I've done many times when testing database calls is set up a database, open a transaction and roll it back at the end. I've even used an in-memory SQLite db that I create and destroy around each test. And this works and is relatively quick.
My question is: Should I mock the database calls, should I use the technique above or should I use both - one for unit test, one for integration tests (which, to me at least, seems double work).
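For reference, the "create and destroy an in-memory database around each test" technique might look roughly like this in a JavaScript test suite (the better-sqlite3 driver and the users table are assumptions; the question doesn't name a language or driver):

```javascript
const Database = require('better-sqlite3');

describe('users table (against a throwaway in-memory database)', () => {
  let db;

  beforeEach(() => {
    // A fresh in-memory database per test: fast, isolated, nothing to roll back.
    db = new Database(':memory:');
    db.exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)');
  });

  afterEach(() => {
    db.close(); // the whole database disappears with it
  });

  it('stores and retrieves a user', () => {
    db.prepare('INSERT INTO users (name) VALUES (?)').run('Ada');
    const row = db.prepare('SELECT name FROM users WHERE id = ?').get(1);
    expect(row.name).toBe('Ada');
  });
});
```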
The problem is that if you use your technique of setting up a database, opening transactions and rolling them back, your unit tests will rely on the database service, connections, transactions, the network and such. If you mock this out, there is no dependency on other pieces of code in your application and there are no external factors influencing your unit-test results.
The goal of a unit test is to test the smallest testable piece of code without involving other application logic. This cannot be achieved when using your technique IMO.
Making your code testable by abstracting your data layer is a good practice. It will make your code more robust and easier to maintain. If you implement a repository pattern, mocking out your database calls is fairly easy.
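A minimal sketch of that idea with Jasmine spies (the greetingService and userRepository names are hypothetical): the service depends only on the repository's interface, so the test can swap in a spy and never touch a real database:

```javascript
// greetingService depends on an abstract repository, not on SQL or a connection.
function greetingService(userRepository) {
  return {
    greet: function (id) {
      var user = userRepository.findById(id);
      return user ? 'Hello, ' + user.name : 'Hello, stranger';
    }
  };
}

describe('greetingService', function () {
  it('greets a known user by name without touching the database', function () {
    // jasmine.createSpyObj stands in for the real data layer.
    var repo = jasmine.createSpyObj('userRepository', ['findById']);
    repo.findById.and.returnValue({ id: 42, name: 'Ada' });

    expect(greetingService(repo).greet(42)).toBe('Hello, Ada');
    expect(repo.findById).toHaveBeenCalledWith(42);
  });
});
```

The same service can then be exercised by integration tests against a real database, which is where the "double work" the asker mentions actually pays off.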
Also, unit tests and integration tests serve different needs. Unit tests are there to prove that a piece of code is technically working and to catch corner cases.
Integration tests verify the interfaces between components against a software design. Unit-tests alone cannot verify the functionality of a piece of software.
HTH
All I have to add to @Stephane's answer is: it depends on how you fit unit testing into your own personal development practices. If you've got end-to-end integration tests involving a real database which you create and tidy up as needed - provided you've covered all the different paths through your code and the various eventualities which could occur with your users hacking post data, etc. - you're covered from the point of view of your tests telling you whether your system is working, which is probably the main reason for having tests.
I would guess though that having each of your tests run through every layer of your system makes test-driven development very difficult. Needing every layer in place and working in order for a test to pass pretty much excludes spending a few minutes writing a test, a few minutes making it pass, and repeating. This means your tests can't guide you in terms of how individual components behave and interact; your tests won't force you to make things loosely-coupled, for example. Also, say you add a new feature and something breaks elsewhere; granular tests which run against components in isolation make tracking down what went wrong much easier.
For these reasons I'd say it's worth the "double work" of creating and maintaining both integration and unit tests, with your DAL mocked or stubbed in the latter.

Is there a framework for running unit tests on Apache C modules?

I am about to make some changes to an existing Apache C module to fix some possible security flaws and general bad practices. However, the functionality of the code must remain unchanged (except in cases where it's fixing a bug). Standard regression testing stuff seems to be in order. I would like to know if anyone knows of a good way to run some regression unit tests against the code. I'm thinking something along the lines of using C-Unit, but with all the tie-ins to the Apache APR and status structures I was wondering if there is a good way to test this. Are there any pre-built frameworks that can be used with C-Unit, for example?
Thanks
Peter
I've been thinking of answering this for a while, but figured someone else might come up with a better answer, because mine is rather unsatisfactory: no, I'm not aware of any such unit testing framework.
I think your best bet is to try and refactor your C module such that its dependencies on the httpd code base are contained in a very thin glue layer. I wouldn't worry too much about dependencies on APR, which can easily be linked into your unit test code. It's things like using the request record that you should try to abstract out a little bit.
I'll go as far as to suggest that such a refactoring is a good idea if the code is suspected to contain security flaws and bad practices. It's just usually a pretty big job.
What you might also consider is running integration tests rather than unit tests (ideally both), i.e. come up with a set of requests and expected responses from the server, and run a program to compare the actual to expected responses.
So, not the response you were looking for, and you probably thought of something along these lines yourself. But at least I can tell you from experience that if the module can't be replaced with something new for business reasons, then refactoring it for testability will likely pay off in the longer term.
Spent some time looking around the interwebs for you, as it's a question I was curious about myself. Came across a wiki article stating that
http://cutest.sourceforge.net/
was used for testing the Apache Portable Runtime (APR). Might be worth checking that out.

Does TDD apply well when developing an UI?

What are your opinions and experiences regarding using TDD when developing a user interface?
I have been pondering about this question for some time now and just can't reach a final decision. We are about to start a Silverlight project, and I checked out the Microsoft Silverlight Unit Test Framework with TDD in mind, but I am not sure how to apply the approach to UI-development in general - or to Silverlight in particular.
EDIT:
The question is about whether it is practical to use TDD for UI development, not about how to do separation of concerns.
Trying to test the exact placement of UI components is pointless. First, because layout is subjective and should be "tested" by humans. Second, because as the UI changes you'll be constantly rewriting your tests.
Similarly, don't test the GUI components themselves, unless you're writing new components. Trust the framework to do its job.
Instead, you should be testing the behavior that underlies those components: the controllers and models that make up your application. Using TDD in this case drives you toward a separation of concerns, so that your model is truly a data management object and your controller is truly a behavior object, and neither of them are tightly coupled to the UI.
I look at TDD from a UI perspective more in terms of the bare acceptance criteria the UI has to pass. In some circles, this is being labeled ATDD, or Acceptance Test Driven Development.
The biggest over-engineering I have found in using TDD for UIs came when I got all excited about using automated tests to test look-and-feel issues. My advice: don't! Focus on testing the behavior: this click produces these events, this data is available or displayed (but not how it's displayed). Look and feel really is the domain of your independent testing team.
The key is to focus your energy on "High Value Add" activities. Automated style tests are more of a debt (keeping them up to date) than a value add.
If you separate your logic from the actual GUI code, you can easily use TDD to build the logic, and it will be a lot easier to build another interface on top of it as well, if you ever need that.
I can't speak to Microsoft Silverlight, but I never use TDD for any kind of layout issue; it's just not worth the time. What works well is using unit testing to check any kind of wiring, validation and UI logic that you implemented. Most systems provide you with programmatic access to the actions the user takes, and you can use these to assert that your expectations are correctly met. E.g. calling the click() method on a button should execute whatever code you intended, and selecting an item in a list view should change the relevant UI elements' content to that item's properties.
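For instance, a minimal sketch of that style in AngularJS/Jasmine (the ListCtrl and its scope fields are hypothetical, not from the question) - the spec exercises the handler a click would be wired to and asserts the resulting scope state, not how it is rendered:

```javascript
// A hypothetical controller: selecting an item exposes it to the view.
angular.module('app', []).controller('ListCtrl', function ($scope) {
  $scope.detailVisible = false;
  $scope.select = function (item) {
    $scope.selected = item;
    $scope.detailVisible = true;
  };
});

describe('ListCtrl', function () {
  var scope;

  beforeEach(module('app'));
  beforeEach(inject(function ($rootScope, $controller) {
    scope = $rootScope.$new();
    $controller('ListCtrl', { $scope: scope });
  }));

  it('selecting an item exposes its properties to the view', function () {
    scope.select({ id: 7, title: 'Invoice', amount: 120 });

    // Assert the wiring and the data the view binds to,
    // not where or how the fields are drawn on screen.
    expect(scope.selected.title).toBe('Invoice');
    expect(scope.detailVisible).toBe(true);
  });
});
```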
Based on your edit, here's a little more detail about how we do it on my current team. I'm doing Java with GWT, so applying this to Silverlight might be a bit off.
A requirement or bug comes in to the developer. If there is a UI change (L&F), we do a quick mock-up of the change and send it over to the product owner for approval. While we are waiting on that, we start the TDD process.
We start with at least one of either a web test (using Selenium to drive user clicks in a browser) or a "headless" functional test using Concordion, FiT or something like it. Once that's done and failing, we have a high-level vision of where to attack the underlying services in order to make the system work right.
Next step is to dig down and write some failing unit and integration tests (I think of unit tests as stand-alone, no dependencies, no data, etc. Integration tests are fully wired tests that read/write to the database, etc.)
Then I make it work from the bottom up. It sounds like your TDD background will let you extrapolate the benefits here. Refactor on the way up as well.
I think this blog post by Ayende Rahien answers my question nicely using a pragmatic and sound approach. Here are a few quotes from the post:
Testing UI, for example, is a common place where it is just not worth the time and effort.
...
Code quality, flexibility and the ability to change are other things that are often attributed to tests. They certainly help, but they are by no means the only (or even the best) way to approach that.
Tests should only be used when they add value to the project, without becoming the primary focus. I am finally quite certain that using test-driven development for the UI can quickly become the source of much work that is simply not worth it.
Note that it seems the post is mainly about testing AFTER things have been built, not BEFORE (as in TDD) - but I think the following golden rule still applies: the most important things deserve the greatest effort, and less important things deserve less effort. Having a unit-tested UI is often not THAT important, and as Ayende writes, the benefit of using TDD as a development model is probably not so great - especially when you consider that developing a UI is normally a top-down process.
GUIs by their very nature are difficult to test, so, as Brian Rasmussen suggests, keep the dialog box code separate from the GUI code.
This is the Humble Dialog Box Pattern.
For example, you may have a dialog box where details (e.g. a credit card number) are input and you need to verify them. In this case, you would put the code that checks the credit card number with the Luhn algorithm into a separate object, which you then test. (The algorithm in question just checks whether the number is plausible - it's designed to catch transcription errors.)
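As a sketch of that "separate, testable object" idea (the function and specs below are illustrative, not the answerer's code; the algorithm is the standard Luhn check):

```javascript
// luhn.js - plausibility check for card numbers, kept out of any dialog code.
function isValidLuhn(number) {
  var digits = String(number).replace(/\s+/g, '');
  if (!/^\d+$/.test(digits)) return false;

  var sum = 0;
  for (var i = 0; i < digits.length; i++) {
    var digit = Number(digits[digits.length - 1 - i]); // walk from the right
    if (i % 2 === 1) {                                 // double every second digit
      digit *= 2;
      if (digit > 9) digit -= 9;
    }
    sum += digit;
  }
  return sum % 10 === 0;
}

describe('isValidLuhn', function () {
  it('accepts a number with a correct check digit', function () {
    expect(isValidLuhn('4539 1488 0343 6467')).toBe(true);
  });

  it('rejects a single-digit transcription error', function () {
    expect(isValidLuhn('4539 1488 0343 6468')).toBe(false);
  });
});
```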
At my workplace we use TDD, and we actually unit test our UI code (for a web application) thanks to Apache Wicket's WicketTester. But we're not testing that some descriptive label sits before a text field or anything like that; instead we test that the hierarchy of the components is roughly correct ("the label is in the same subset as the text field") and that the components are what they're supposed to be ("this label really is a label").
As others have already said, it's up to a real person to determine how those components are placed in the UI, not the programmer - especially since programmers have a tendency to produce all-in-one, obscure command-line tools instead of UIs that are easy to use.
Test-Driven Development lends itself more to developing code than for developing user-interfaces. There are a few different ways in which TDD is performed, but the preferred way of true TDD is to write your tests first, then write the code to pass the tests. This is done iteratively throughout development.
Personally, I'm unsure of how you would go about performing TDD for UIs; however, the team that I am on performs automated simulation tests of our UIs. Basically, we have a suite of simulations that are run every hour against the most recent build of the application. These tests perform common actions and verify that certain elements, phrases, dialogs, etc., properly occur based on, say, a set of use cases.
Of course, this does have its disadvantages. The downside is that the simulations are locked into the code representing the case. It leaves little room for variance and basically says that the user is expected to follow exactly this behavior with respect to this feature.
Some testing is better than no testing, but it could be better.
Yes, you can use TDD with great effect for GUI testing of web apps.
When testing GUIs you typically use stub/fake data that lets you exercise all the different state changes in your GUI. You must separate your business logic from your GUI, because in this case you will want to mock out the business logic.
This is really neat for catching the things the testers always forget to click on; they get test blindness too!

Resources