Angular Unit/E2E testing with protractor and jasmine - angularjs

I am writing an Angular application in which my controller calls an API that returns live data, which I then display in my HTML document.
I am using Protractor for my end to end tests, and jasmine for unit testing.
I am mocking my API call, to ensure the API is not called.
My question is whether I should be testing the API call with Protractor, checking whether my HTML document is updated following the GET request, or whether I should test the API call when conducting my unit tests with Jasmine.
I have a feeling that the answer is that I should be testing this API call with both my unit and end to end tests, but am hoping someone on SO can provide clarity.

The main goal of unit testing is to test that your code (be it JavaScript or otherwise) is doing what it should. Each test should be run against data that is static or contrived, and should never be run against an API. Static data gives you the control you need. If your code needs to branch when X equals 7, you can purposely set that value and verify that your code does indeed branch. When you run against an API you do not have that control. Even if you are the one who controls the API, unit testing against it is a bad habit to get into.
End to end testing is completely different. Here we are not testing that the code works on a granular level (we already did that in our unit tests); we are testing that the application works as a whole. When a specific button is clicked in the application, did the expected things happen? Do all of the expected elements appear on the page? You still need to be testing against known data, and doing that is just as crucial as in unit testing, but here you get to see how your app behaves while it is actually running. Did a particular screen take too long to load? Did a button click not give you what you expected? This kind of testing lets you click through your application as a user would (except much faster).
You should run both kinds of tests on your app. Unit tests should be run during the build process, and end to end tests should be run once the build completes.
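As a rough illustration, a Jasmine unit test along these lines might use ngMock's $httpBackend so the real API is never hit (the controller name, endpoint, and response data here are hypothetical):

```javascript
// Hypothetical AngularJS controller test: ngMock's $httpBackend intercepts
// the request, so the real API is never called.
describe('ScoresController', function () {
  var $httpBackend, $controller, vm;

  beforeEach(module('myApp'));

  beforeEach(inject(function (_$httpBackend_, _$controller_) {
    $httpBackend = _$httpBackend_;
    $controller = _$controller_;
  }));

  afterEach(function () {
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });

  it('puts the fetched scores on the view model', function () {
    // Static, contrived data gives full control over the branch being tested.
    $httpBackend.expectGET('/api/scores').respond(200, [{ team: 'A', score: 7 }]);

    vm = $controller('ScoresController');
    $httpBackend.flush();

    expect(vm.scores.length).toBe(1);
    expect(vm.scores[0].score).toBe(7);
  });
});
```

The end-to-end test would then exercise the same flow through the browser with Protractor against a known data set, asserting on what actually appears on the page rather than on the controller's internals.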

Related

Visualization of unit tests

I recently tried the TDD methodology and I really liked it. You can write tests for a specific unit, imitating different behaviors and data and using mock objects, which allows you to check only a small piece of code without needing to run the entire application. But I have some questions about unit visualization.
Suppose we have a simple chat application with homepage, lobby and chat widget components (p. 1).
When you are working on the chat widget component (for example), you can write unit tests for it and not care about the other components. But what if I want to see the widget's rendered output? It is so annoying to run the entire application, go to the lobby page, and switch to the chat widget tab every time I change my code.
Are there any practices for running render unit tests? Does it depend on the technology stack?
My frontend stack: React, Redux, Jest + React testing library.
If a test shows you rendered content, then it is not a unit test. The result of a unit test must be binary (failure or success). If you have to look at the test output to figure out whether it was successful, it is not a unit test.
What you are looking for is not unit tests but UI tests. For the web context, Selenium comes to mind. It is used to define scenarios for poking at your UI and asserting on outcomes. You can also use it to automate the process of
"run entire application, go to lobby page, switch to chat widget tab every time I changed my code".

Unit testing - ignoring module run block

Is there a way to prevent an application run block from executing during unit tests?
My situation is that I have added some session checking logic to the run block, which redirects to a login page should session checks fail.
Now that I have added this run block, all my other tests fail because the login page ends up being requested, since I'm not ensuring the session check returns true before each test.
So is there a way to skip the run block for a unit test, or would it be something like mocking out the module in my tests so it doesn't have the run block included?
I'm probably thinking about this the wrong way, so please enlighten me!
Thanks
No, the run block is part of the Angular app lifecycle.
I would suggest not having this logic in .run, but moving all of the authentication logic into a service. After that, it's easy to mock.
If you can be more specific about the app architecture, I can suggest more improvements.
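A minimal sketch of that refactor, assuming a hypothetical authService holds the session-check logic and the run block only delegates to it, which makes it trivial to stub out in a unit test:

```javascript
// In the app: the run block only delegates; all logic lives in the service.
angular.module('myApp').run(function (authService) {
  authService.redirectToLoginIfSessionInvalid();
});

// In the unit tests: replace the service with a stub before the module loads,
// so the run block becomes a no-op and the other specs are unaffected.
beforeEach(module('myApp', function ($provide) {
  $provide.value('authService', {
    redirectToLoginIfSessionInvalid: angular.noop
  });
}));
```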

How to re-use groups of tests in Protractor

I'm working on a wizard-style application using AngularJS, and using ProtractorJS for E2E testing. I've defined PageObjects representing various screens and objects on the page.
In writing e2e tests for the app, I've noticed that several of the tests move through several wizard pages in the same way, and I'd like to define reusable sets of tests that I could invoke as part of larger test suites. I've attempted to do this by creating functions which have a describe(... it(...); it(...)) sequence inside, but my issue is passing a reference to the new PageObject out of the function, since the test suites are executed after the function has already returned.
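One possible sketch of the kind of reusable test function described above, assuming the page object is handed over via a getter so it is resolved when the spec actually runs rather than when the suite is defined (all page object names are hypothetical):

```javascript
// Reusable step: takes a getter so the page object is resolved at run time,
// after the surrounding beforeEach has created it.
function itCompletesWizardStep(getPage) {
  it('fills in the step and moves on', function () {
    var page = getPage();
    page.fillRequiredFields();
    page.clickNext();
    expect(page.nextStepHeading.isDisplayed()).toBe(true);
  });
}

describe('creating a report via the wizard', function () {
  var detailsPage;

  beforeEach(function () {
    detailsPage = new DetailsPage(); // hypothetical page object
    detailsPage.get();
  });

  itCompletesWizardStep(function () { return detailsPage; });
});
```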

protractorjs e2e test, fake/set time of day

Say I have a time-sensitive app, such as a football betting app where I cannot bet after the game has started. Now I want some AngularJS/Protractor e2e tests for the app that run on something like Jenkins. The issue is that I don't know when the tests will run, since that depends on when people update the repo. So how can I fake or set the time, or am I looking at this wrong?
I have looked all over online but can't see a way of getting Protractor to set the time of day for the tests.
I am not sure if protractor can change the time on your computer or anything, but there are still some ways to work with it.
You should probably be looking into a mock mode version of your website that your protractor tests can hit. Using ngMockE2E (https://docs.angularjs.org/api/ngMockE2E) and other mock tools, you can, for example, mock the service that gets the time for your website. You should have an angular service that checks for the time of day and then feeds that data through to your site to enable or disable betting. You can mock the service and feed it different sets of time, maybe attach those to different routes on your mock mode, and have your protractor tests hit that.
Let me know if this sounds like what you need, and we can get more into it.
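As a rough sketch of that idea, assuming the app reads the current time from a hypothetical timeService, Protractor's browser.addMockModule can register an override module before the page loads (the route and selector are made up):

```javascript
// Registered before browser.get(); the function body runs inside the browser
// and overrides the app's (hypothetical) timeService with a fixed pre-kickoff time.
browser.addMockModule('timeMock', function () {
  angular.module('timeMock', []).factory('timeService', function () {
    return {
      now: function () {
        return new Date('2016-01-01T10:00:00Z'); // well before the game starts
      }
    };
  });
});

describe('betting before kickoff', function () {
  it('still allows a bet to be placed', function () {
    browser.get('/match/42'); // made-up route
    expect(element(by.css('.place-bet')).isEnabled()).toBe(true);
  });
});
```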

Asserting $http request payloads without mocking them

I'm writing Protractor tests to verify the successful creation of reports in our application. A report is created via a series of complex UI interactions within a dialog and saved via an AJAX POST request to a REST API.
I've written tests for the complex UI interactions within the modal, but I'm at a loss for how to test the POST request within the same Protractor tests. Ideally, I'd like to be able to make assertions against the payload of the POST request to verify that the UI is sending the correct data to the API.
It's important to note that I do not want to mock the HTTP call--I need it to go through, since subsequent protractor tests navigate to the report and perform additional checks. My first thought was to somehow hook into the $httpBackend.passThrough() method, but I haven't had any success with that.
Any ideas how to accomplish this?
since subsequent protractor tests navigate to the report and perform additional checks
If you check that the report contains data that matches what was submitted, you are, albeit indirectly, testing that the POST went through successfully. There is a reasonable argument that this is enough for the E2E test: it tests that the application behaves as the user would want. The user doesn't care how it's achieved: POST, websockets, carrier pigeon ;-)
Keep in mind that the usual aim of such tests is for them to fail if something is broken. If the POST isn't done correctly, then the subsequent tests that verify the displayed report would fail.
The downside is that you would have a bit less information about what has gone wrong than if you managed to test the POST as well. However, unit tests can help. If you have a failing unit test that localises the issue, you write a fix that makes it pass. If you don't have a failing unit test, you can investigate the issue by debugging, add a failing test that highlights the issue, and fix the code so it passes.
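A hedged sketch of that indirect verification in Protractor: drive the real UI (the POST passes through untouched), then navigate to the saved report and assert it shows the values that were submitted (the page objects and fields are hypothetical):

```javascript
describe('report creation', function () {
  it('shows the saved report with the values that were submitted', function () {
    var dialog = new ReportDialog();   // hypothetical page objects
    var reportPage = new ReportPage();

    dialog.open();
    dialog.setTitle('Q3 revenue');
    dialog.save(); // real POST to the REST API, nothing mocked

    reportPage.openLatest();
    // If the payload had been wrong, the saved report would not match.
    expect(reportPage.title.getText()).toBe('Q3 revenue');
  });
});
```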
