So I'm trying to get into TDD with React. Everywhere I read about testing in general, it's always emphasised that we only want to test behaviour, not implementation details. Sounds good, but then when going through some examples, I find stuff like this:
expect(onSubmit).toBeCalledTimes(1)
Is this not an implementation detail? Should I really care if a function called "onSubmit" is being called? What if tomorrow it changes to "onFormSubmit"? Or what if the function is empty? The test would still pass...
Shouldn't we instead test that whatever onSubmit is supposed to do actually gets done? Like checking that a DOM element changes, or something along those lines...
I swear, it's being a real trip trying to grasp the art of writing tests (in general, not just in React).
If you want to avoid testing every implementation detail and only test the things a user cares about, you would need to do an end-to-end test with a tool like Cypress against a real backend and real database. The realism of these tests has a lot of value. But there are downsides as well: they can be slow and less reliable, so it isn't realistic to fully cover all code paths of a large application with them.
As soon as you drop down to component tests, you are already testing some things the user doesn't care about: if you pass props to the component or mock out a backend, the user doesn't care about those props or backend API. But it's a worthwhile tradeoff because it allows you to have faster tests that cover more edge cases.
In component tests I would recommend testing the component's inputs and outputs. If an onSubmit function is defined inside your component's render function, then I would not run an assertion against it; that's definitely an implementation detail. Instead, I would test the result of that function running. But if onSubmit is passed in as a prop, then I would pass in a mock function and check that it was called, yes. If it's a function defined in another module, things get fuzzier: you can decide whether you want to mock that module or test against the real implementation of that function.
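For the prop case, a minimal sketch with Jest and React Testing Library might look like this (ContactForm, its field, and the submitted shape are all hypothetical):

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import ContactForm from './ContactForm';

test('calls onSubmit once with the entered data', () => {
    const onSubmit = jest.fn(); // the mock passed in as a prop
    render(<ContactForm onSubmit={onSubmit} />);

    fireEvent.change(screen.getByLabelText('Name'), { target: { value: 'Ada' } });
    fireEvent.click(screen.getByRole('button', { name: 'Submit' }));

    expect(onSubmit).toHaveBeenCalledTimes(1);
    expect(onSubmit).toHaveBeenCalledWith({ name: 'Ada' });
});

The test only touches what the component exposes (its DOM and its props), so renaming internals, or rewriting the component entirely, won't break it.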
There are always tradeoffs in testing; the above are some of the things you can consider when deciding how to test a specific case.
If I were building a simple function that divided one number by another, I would do the following (sketched in code after the list):
In all cases, call the function. The first test would probably be a happy path, like 10 / 2.
Another test would divide a smaller number by a larger one, resulting in a decimal value.
Some other tests would introduce negative numbers into the mix.
Finally, I would be sure to have one test that divided by zero, to see how that was handled.
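In Jest, those cases might look something like this (a sketch; how division by zero "should" be handled is an assumption, here it throws):

function divide(a, b) {
    if (b === 0) throw new Error('division by zero');
    return a / b;
}

test('happy path', () => expect(divide(10, 2)).toBe(5));
test('smaller divided by larger gives a decimal', () => expect(divide(2, 10)).toBe(0.2));
test('negative numbers', () => expect(divide(-10, 2)).toBe(-5));
test('dividing by zero is handled', () => expect(() => divide(10, 0)).toThrow());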
I have a series of modals in React that I need to test. They have some commonalities:
They receive a set of props.
They act on those props to display a bunch of elements to the user, pre-populating these elements with some combination of the data in the props.
They use the fireEvent.### functions to simulate what the user would do within the modal.
Near the end of each test that involves a POST or a PATCH, a Submit button is pressed.
After pressing the Submit button, I have two expect functions at the end of each test, like in these examples:
expect(spy).toHaveBeenCalledTimes(1);
expect(spy).toHaveBeenCalledWith({
    type: 'SOME_ACTION',
    payload: {
        property1: 'Some value',
        property2: 'Some other value',
        property3: 53.2
    },
});
This approach entirely makes sense to me because, much like a simple math function, the modal has data coming in and data going out. But some colleagues are insisting that we shouldn't be testing with such expect(spy) functions. Instead they're saying that we should test the end result on the page. But there is no "page" because the modal is being tested in isolation, which I also believe to be a best practice.
Another suggestion was to add a success or failure string to a Mock Service Worker (MSW) response, like "You've successfully saved the new data". The reason I don't like this approach is because:
Such a string is just an artificial construct in the test, so what does it really prove?
If there are calculations involved in what is sent by the POST or PATCH, don't we want to know whether the correct data is being sent? For example, if the user was prompted to enter the number of hours they worked each day in the past week, wouldn't we want to compare what was entered into the input elements with what was sent? And if those hours are summed (for some reason) and included in another property, wouldn't we want to confirm that the correct sum was sent?
I do respect the opinions of my colleagues but have yet to hear anything from them that justifies dropping the approach I've employed and adopting one of their alternatives. So I'm seeking insight from the community to better understand whether this is a practice you would follow ... or not.
This is a case-by-case decision with no one right answer, as performance and other variables may play a role.
However, based on the information you've given, both you and your team are right. You would want to validate that the correct data is being sent to the backend, yet you would also want to validate that the user receives a visual response, such as "Successfully saved data".
Still, if I had to choose one, it would be checking the data, as that has the most "code coverage". Checking for the "success" message merely confirms that the submit button was pressed, whereas checking the data assures that most, if not all, of the data states were correctly set.
I prefer to do both.
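One way to get both in a single component test is to have the MSW handler capture what the component actually sent, then assert on it after pressing Submit (a minimal sketch; the endpoint, the payload fields and the sum are assumptions):

import { rest } from 'msw';
import { setupServer } from 'msw/node';

let sentBody; // captured by the handler below
const server = setupServer(
    rest.patch('/api/timesheet', (req, res, ctx) => {
        sentBody = req.body; // MSW parses JSON request bodies
        return res(ctx.json({ message: "You've successfully saved the new data" }));
    })
);
// server.listen() / server.resetHandlers() / server.close() in the usual hooks

// ...then, after fireEvent.click on the Submit button:
// expect(await screen.findByText(/successfully saved/i)).toBeInTheDocument();
// expect(sentBody.totalHours).toBe(40); // the computed sum was sent correctly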
But there is no "page" because the modal is being tested in isolation, which I also believe to be a best practice.
I like doing this because for more complex components, TDD with unit tests keeps me sane. Sometimes I build out the unit test just to make sure everything is working, then delete it once the integration tests are up (because too many unit tests can be a maintenance burden).
Ultimately I prefer integration tests over unit tests because I've experienced situations where my unit tests were passing, but once the component was nested three levels deep, things started to break. Kent Dodds has an excellent article on this.
My favorite takeaway from the article:
It doesn't matter if your component <A /> renders component <B /> with props c and d if component <B /> actually breaks if prop e is not supplied. So while having some unit tests to verify these pieces work in isolation isn't a bad thing, it doesn't do you any good if you don't also verify that they work together properly. And you'll find that by testing that they work together properly, you often don't need to bother testing them in isolation.
Say I have the following AngularJs service:
angular.module("foo")
.service("fooService", function(){
var svc = this;
svc.get = function(id){...};
svc.build = function(id){...};
svc.save = function(thing){...}; //posts, then returns the saved thing
svc.getOrCreate = function(id){
return svc.get(id).then(function(thing){
return thing || svc.build(id).then(function(builtThing){
return svc.save(builtThing);
});
});
}
});
I can unit test the get method by making sure the right API endpoint is being reached, with the right data.
I can test the build method by making sure it pulls data from the proper endpoints/services and builds what it's supposed to.
I can test the save method by making sure the right API endpoint is being reached.
What should I do to test the getOrCreate method? I get two differing opinions on this:
stub the get, build and save methods and verify they're called when appropriate, and with the proper parameters
stub the API endpoints that are being called in get and build, then verify that the endpoints within save are being called with the proper parameters
The first approach is basically saying, "I know these three methods work because they're independently tested. I don't care how they actually work, but I care that they're being called within this method."
The second approach is saying, "I don't care about how this method acts internally, just that the proper API endpoints are being reached"
Which of these approaches is more "correct"? I feel the first approach is less fragile since it's independent of how the get, build and save methods are implemented, but it's not quite right in that it's testing the implementation instead of the behavior. However, option 2 is requiring me to verify the behavior of these other methods in multiple test areas, which seems more fragile, and fragile tests make people hate programming.
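For concreteness, the first approach would look something like this (a rough sketch; the resolved values are invented):

describe("getOrCreate", function(){
    var fooService, $q, $rootScope;

    beforeEach(module("foo"));
    beforeEach(inject(function(_fooService_, _$q_, _$rootScope_){
        fooService = _fooService_; $q = _$q_; $rootScope = _$rootScope_;
    }));

    it("builds and saves when get finds nothing", function(){
        spyOn(fooService, "get").and.returnValue($q.resolve(null));
        spyOn(fooService, "build").and.returnValue($q.resolve({id: 42}));
        spyOn(fooService, "save").and.returnValue($q.resolve({id: 42}));

        fooService.getOrCreate(42);
        $rootScope.$digest(); // flush the promise chain

        expect(fooService.build).toHaveBeenCalledWith(42);
        expect(fooService.save).toHaveBeenCalledWith({id: 42});
    });
});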
This is a common tradeoff I find myself facing quite often with tests... anybody have suggestions on how to handle it?
This is going to come down to a matter of opinion.
If you are unit testing, your tests should exercise very specific functionality.
If you start chasing promises, and you have promise chaining, where does it stop?
Most importantly, as your unit test's scope gets bigger and bigger, there are more things it depends on (services, APIs, etc.), and more ways in which it can break that may have nothing to do with the "unit": the very thing you want to make sure works.
Question: if you have a solid controller that works great with your template, and a unit test that ensures that controller is rock solid, should a twice-detached promise that resolves from the response of a web service API call be able to break your controller test?
On the other hand, the same way you test your API client endpoints by mocking the service, you can test the service with its own tests using something like Angular's $httpBackend service.
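A rough sketch of that style for getOrCreate (the endpoint URLs and payloads are pure assumptions, since the method bodies aren't shown above):

describe("getOrCreate (endpoint level)", function(){
    var fooService, $httpBackend;

    beforeEach(module("foo"));
    beforeEach(inject(function(_fooService_, _$httpBackend_){
        fooService = _fooService_; $httpBackend = _$httpBackend_;
    }));

    it("posts a built thing when none exists", function(){
        $httpBackend.expectGET("/things/42").respond(200, "");                // get finds nothing
        $httpBackend.expectGET("/things/42/template").respond(200, {id: 42}); // build pulls its data
        $httpBackend.expectPOST("/things", {id: 42}).respond(200, {id: 42});  // save hits the endpoint
        fooService.getOrCreate(42);
        $httpBackend.flush(); // also fails the test on any unexpected request
    });
});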
I have seen it done both ways and don't have a strong preference either way. Personally, however, I would consider option 2, where you don't mock the other functions that are tested elsewhere, to be an integration test, because it exercises multiple publicly visible functions together; as a unit test I would therefore prefer option 1.
I'm new to testing, and I understand it's not a good practice to write unit tests that test third-party packages. But I'm curious if the following test would constitute testing the AngularJS framework itself:
describe("TestController", function () {
it("should set up the $scope correctly", function () {
expect($scope.foo).toBe("bar");
});
});
Is it a good idea to test that your $scope got initialized correctly, or since that's AngularJS's job and not yours, is that something you should avoid writing a test for? Sorry if this has an obvious answer, but I want to make sure I'm teaching correct principles when I teach this to my students.
In your example, you are testing the behaviour of TestController, which is a class that you wrote, i.e. you are testing your own code.
You are not testing if Angular can set the state properly if you tell it to (which would indeed be a bit redundant, as it is already covered by Angular tests), you are asserting that your code does the things your application requires it to do (that this involves calling Angular functions is secondary).
So that's a good test to write.
Depending on your application, it may be possible to check the same behaviour in a more "high-level" fashion than asserting the exact value of a given state variable. For some applications, that exact value could be considered an implementation detail, and not the most appropriate thing to validate: you should be testing externally visible behaviour, not internal state. In this case, though, since you are testing a controller, and all a controller does is update the state, it's probably appropriate.
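For illustration, such a higher-level check might compile a tiny template against the controller's scope and assert on the rendered output (a sketch; the inline template and module name are assumptions):

it("renders the value", function () {
    module("app");
    inject(function ($rootScope, $controller, $compile) {
        var scope = $rootScope.$new();
        $controller("TestController", { $scope: scope });
        var el = $compile("<span>{{foo}}</span>")(scope);
        scope.$digest(); // flush the bindings
        expect(el.text()).toBe("bar");
    });
});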
If you find that all you are doing in the controller is unconditionally setting state, without any logic involved, then you may not really need to test at that level of granularity (maybe test bigger units that in combination do something "interesting"). The typical example here is setter/getter methods: yes, there is a chance that you get these one-liners wrong, but they make really boring tests, so you might want to skip them (unless they can be automatically generated).
Now, if this test fails, it could be for three (not mutually exclusive) reasons:
1) Your code is broken (either some state setup is missing, or you are not doing it right). Detecting this is the main purpose of unit testing.
2) Angular is broken (you set the state properly, but somehow Angular lost it). That is unlikely, but if it does happen, you now have a test case to attach to your bug report to Angular. Note that you did not set out to write a test case for Angular, but you got one "by accident".
3) Your code and Angular are both correct, but your test code is wrong. This happens frequently when you update the code under test and the test code also needs adjusting, either because its assumptions were too narrow or because the expected behaviour has changed and the test is now simply outdated.
I have multiple scenarios in which I'd like to test pretty much the same things.
I am testing a backoffice and I have widgets (like autocomplete search for instance). And I want to make sure that the widget is not broken given that:
I just browse an article page
I saved a part of the article, which reloaded the page
1+2 then I played with some other widgets which have possible side effects
...
My first thought was to add some reusable methods to my WidgetPO (testWidgetStillWorksX and the like).
After reading around, there are pros and cons to each approach, as discussed in http://martinfowler.com/bliki/PageObject.html
So how do you handle this, where do you put your reusable tests, and what difficulties or advantages have you had with either method?
Your question is quite broad, with no one answer. The best way to write tests using the PageObject model is to keep assertions out of the PageObject file. To cut it short, here's a small explanation -
Difficulties with assertions in the PageObject -
Assertions are always part of the test case/script, so it's better to put them in the scripts that you write.
Assertions in a PageObject disturb the modularity and reusability of the code.
It becomes difficult to write or extend general functions in the PageObject.
A third person would need to jump from the test script to your PageObject every single time to check your assertions.
Advantages -
You can always add methods/functions to your PageObject that do repetitive tasks and perform some operation (like waiting for an element to load, getting the text of an element, etc.) other than assertions, and return a value.
Call the functions of the PageObject from your tests and use the returned values to perform assertions in your tests (see the sketch after this list).
Assertions in test scripts are easy to read and understand, with no need to worry about the implementation of the page objects.
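As a sketch of the pattern (Protractor-style APIs; the widget and its selectors are hypothetical):

// page object: performs operations and returns values, no assertions
var SearchWidgetPO = function () {
    this.input = element(by.css(".autocomplete input"));
    this.suggestions = element.all(by.css(".autocomplete li"));

    this.search = function (text) {
        this.input.clear();
        this.input.sendKeys(text);
        return this.suggestions.getText(); // promise of the visible suggestions
    };
};

// test script: the assertion lives here
it("autocomplete still works after saving the article", function () {
    var widget = new SearchWidgetPO();
    expect(widget.search("foo")).toContain("foo article");
});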
Here's a good article on page objects. Hope this helps.
I have a header file which contains the declaration of a struct and some functions, and a C file which defines (implements) the struct and the functions. While writing unit test cases, I need to check whether some struct members (which do not have getter functions) are modified. Since the struct's definition is contained in the C file, should the unit test cases be based on the header file or the C file?
Test the interface of the component that is available to other parts of the system (the header, in your case), not the implementation details behind it.
Unit tests make assertions about the behavior of a component but shouldn't depend on how that behavior is implemented. The tests describe what the component does, not how it is done. If you change your implementation but preserve the same behavior your tests should still pass.
If instead your tests depend on a specific implementation they will be brittle. Changing the implementation will require changing the tests. Not only is this extra work but it voids the reassurance the tests should offer. If you could run the existing test against the new implementation you would have some confidence that the new implementation has not changed behavior other components might depend on. Once you have to change the test to be able to run it against a new implementation you must carefully consider if you have changed any of the test's expectations in the process.
It may be important to test behavior not accessible using the public interface of this component. Consider that a good hint that this interface may not be well designed. TDD encourages a "test first" approach and this is one of the reasons why. If you start by defining the assertions you want to make about a component's behavior you must then design an interface which exposes that behavior. That is what makes the process "test driven".
If you must write tests after the component they are testing then at least try to use this opportunity to re-evaluate the design and learn from your test. Either this behavior is an implementation detail and not worth testing or else the interface should be updated to expose it (this may be more complicated than just making it public as it should also be safe and reasonable for other components of the system to access this now public attribute).
I would suggest that all testing, other than ephemeral stuff performed by developers during the development process, should be testing the API rather than the internals. That is, after all, the specification you're required to meet.
So, if you maintain stuff that's not visible to the outside world, that itself is not an aspect that needs direct testing. Instead, assuming something wrong there would affect the outside world, it's that effect you should test.
Encapsulation means being free to totally change the underlying implementation without the outside world being affected and you'll find that, if you code your tests based on the external view, no changes will be needed if you make such a change.
So, for example, if your unit is an address book, the things you should be testing are along the lines of:
can we insert an entry?
can we delete it?
can we retrieve it?
can we insert a hundred entries and query them in random order?
It's not things like:
do the struct's data fields get created properly?
is the linked list internally consistent?