We are using React JS for the front end and need to write end-to-end tests. After researching online, we came across two options:
1. Selenium WebDriver
2. React Test Utils (https://reactjs.org/docs/test-utils.html)
What I understood is that with React Test Utils you can simulate clicks and check the state of HTML elements using methods like findRenderedDOMComponentWithXXX, and that you can run these tests from the command line, so they will be faster.
Selenium does the same thing, but from within the browser, and it allows you to write tests in a Behavior Driven Development style (making them more readable).
My confusion:
Can we use React Test Utils to test a complete web page (a complex component), or is it better to test only simple, custom-made components?
For example: suppose we have a Tasks component that lets you add tasks, remove tasks, and change their priority, and that is built from smaller components like Input, DropDown, and Toggle.
Is it a good idea to use React Test Utils for the entire Tasks component, or should we use it only for the smaller individual components (Input, DropDown, Toggle) and write end-to-end tests for the complete Tasks component using Selenium?
Some other points:
The simulate method in React Test Utils requires you to pass the event data yourself, which adds a small amount of extra work (see the sketch below).
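For reference, here is a minimal sketch of that point, assuming a hypothetical Input class component that renders an <input>; it follows the pattern shown in the React docs:

```js
import React from 'react';
import TestUtils from 'react-dom/test-utils';
import Input from './Input'; // hypothetical component under test

// Render into a detached DOM node and grab the underlying <input>.
const tree = TestUtils.renderIntoDocument(<Input />);
const node = TestUtils.findRenderedDOMComponentWithTag(tree, 'input');

// Simulate does not fabricate event data for you; you set it up first.
node.value = 'Buy milk';
TestUtils.Simulate.change(node);
```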
It would be great if someone could help me understand the difference between the two.
You can use Jest and Selenium together; don't limit yourself to just one. You will probably go even beyond those two when you do Test Driven Development, or even Behavior Driven Development, based on what specifically you need to test. Jest by itself can't really simulate UAT, in my opinion: you need to open the browser to fully simulate the user experience, but those browser tests could be a very small percentage of all your tests. The reason so many people have moved away from Selenium is the bulkiness of configuring and maintaining it, along with its speed and reliability.
Newer tools for browser-based testing have appeared in recent years:
Puppeteer
https://github.com/GoogleChrome/puppeteer
Nightwatch.js
http://nightwatchjs.org
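To give a taste of the newer style, here is a minimal Puppeteer sketch; the URL and expected title are hypothetical:

```js
const puppeteer = require('puppeteer');

(async () => {
  // Launch a real (headless) Chromium, drive a page, and assert on it.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://localhost:3000');
  const title = await page.title();
  console.log(title === 'Tasks' ? 'PASS' : 'FAIL');
  await browser.close();
})();
```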
Also, your specific programming language offers additional features and enhancements for testing. Many Java developers use Maven and TestNG to build and debug their tests; Ruby developers might use RSpec. Whatever test runner you use, it might incorporate several dependencies on the backend for all kinds of testing tools: linting, HTML proofing, functionality, DevSecOps, database migrations, or even spellcheck. The key is to group all the tests you think you will need into a single test runner (preferably, or as few as possible), then run them all back to back in a CI/CD pipeline, like Jenkins, prior to deployment of changes. If any one fails, your build fails in real time.
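One low-tech way to get that single-runner, fail-fast behavior in a Node project is a small driver script; the npm script names here are hypothetical:

```js
const { execSync } = require('child_process');

// Run every suite back to back; execSync throws on a non-zero exit code,
// which fails the CI step immediately.
['npm run lint', 'npm run test:unit', 'npm run test:e2e'].forEach((cmd) => {
  execSync(cmd, { stdio: 'inherit' });
});
```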
If you want to run several API tests in the pipeline, Newman looks pretty decent for incorporating Postman tests you may already have lying around into a CI/CD pipeline. Alternatively, you could also build HTTP clients, but not everyone has a coding skill set, so the tools you pick first should complement the skill sets you have available as well.
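For illustration, Newman also has a programmatic API, so an exported collection can gate a pipeline step; the collection filename here is hypothetical:

```js
const newman = require('newman');

newman.run({
  collection: require('./tasks-api.postman_collection.json'),
  reporters: 'cli',
}, (err, summary) => {
  // Fail the CI step if the run errored or any assertion failed.
  if (err || summary.run.failures.length > 0) {
    process.exit(1);
  }
});
```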
SmartBear also has several tools now that will run in a CI/CD pipeline for SOAP and UI testing, but those are much more expensive than the open-source alternatives.
Related
I'm using create-react-app to build my app. Do I need to do any custom testing, or is it all handled by create-react-app?
Also, do the errors and prompts I receive in my terminal and console cover all testing, or is there something else I need to do before releasing an app to production?
Thanks
I'm not exactly sure what you mean by testing…
If you mean unit tests, then no. create-react-app can't automatically test your application for you. It has no idea what you're trying to accomplish with your application, so it can't test it.
Or are you talking about build warnings and errors? In that case create-react-app will tell you what it is able to gather, which should be quite a bit, but it can't find possible runtime errors.
In any case, if you want your whole application's functionality covered by tests, you need to write those yourself. If you're not familiar with testing in general, you may want to have a look at one or more React unit-testing articles (https://felixgerschau.com/unit-testing-react-introduction/ could be a starting point), and depending on what you want, looking into e2e tests could also be worthwhile.
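For what it's worth, create-react-app does ship with Jest preconfigured, so writing your own tests is mostly a matter of adding files. A minimal smoke test (essentially CRA's own default example, assuming your root component is <App />) looks like this:

```js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

it('renders without crashing', () => {
  // Mount the whole app into a detached node; any render-time error fails it.
  const div = document.createElement('div');
  ReactDOM.render(<App />, div);
  ReactDOM.unmountComponentAtNode(div);
});
```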
On our web application we are using Protractor to test real user experiences, and while it accurately tests the user flow, the tests can be quite flaky for a multitude of reasons that may be out of our control. As a result, it is hard to rely on the test results, because the failures could be noise.
Is there a way to run just the flaky tests? I've tried to use protractor-flake, but it doesn't seem to work when running in parallel.
Yes, there are ways to re-run flaky tests, but you will need to use a library/plugin outside of Protractor. It doesn't look like this functionality will be available in Protractor itself any time soon.
I use a Node module called protractor-errors. This plugin records when a test fails and allows you to re-run only the failed tests. It supports running sharded tests in parallel. The catch is that it currently only supports tests written in Jasmine.
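For comparison, protractor-flake (the package from the question) has a small programmatic API; its documented usage looks roughly like the sketch below (the config path is hypothetical), though parallel runs are exactly where it falls short:

```js
const flake = require('protractor-flake');

flake({
  maxAttempts: 3,                // re-run failed specs up to 3 times
  protractorArgs: ['./conf.js'], // hypothetical Protractor config path
}, (status, output) => {
  // Exit with Protractor's final status so CI sees the real result.
  process.exit(status);
});
```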
I'm new to the testing area. The regression team I belong to has built GUI tests for some web applications with complex business logic that the development team has produced.
Until now, we have been using Selenium IDE to build regression tests (record, edit, parameterize, debug, and play back). Tests are exported and maintained in HTML format. We used to have a tool to manage tests and iterations (store HTML scripts/test suites, run tests in batch mode, run tests in the background, get detailed test result reports), which is now deprecated because it uses Selenium RC. Additionally, tests run only in Firefox, but our clients are mainly IE users.
So, we have some important and strategic decisions to make. We urgently need to start testing in IE, and we need a new way to do the tasks we were doing.
An attempt was made to change the code of the test manager tool so it would work with Selenium WebDriver. We tried coding the tests in Ruby from scratch, since Selenium IDE's export to Ruby was not satisfactory. We figured out that huge changes to the manager tool, and subsequent testing of it, would be needed. It would also involve programming the methods and testing them.
Our regression team is quite small, and we don't want to focus too much on the programming task itself, but rather on testing our web apps. Additionally, no one on the team has prior experience working with Ruby.
Can you help us with some suggestions about the route we should take?
Is there an integrated solution that is easy to work with (like Selenium IDE) and able to handle the management tasks of our old tool without costing us much time on “hard coding”?
Is there any reliable open source tool that could do it? And a commercial solution?
We currently run and deploy on App Engine, but use GitHub for version control. What is the best way to run a series of tests every time we push to GitHub: both client-side JavaScript tests, using something like PhantomJS, and server-side tests, using something like NoseTests for Python?
I ask because the client-side code is in JavaScript while the server-side code is in Python.
And since we have existing credits, we'd prefer not to go for a third-party hosted solution. App Engine also provides a pipeline for just the nose tests, but this doesn't cover the JavaScript unit tests.
Thanks!
I believe GitHub commit webhooks are what you are looking for. I have not personally set them up, but at my day job we have them automatically run a handful of things, including builds and tests.
https://help.github.com/articles/about-webhooks/
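To illustrate the shape of the thing, here is a minimal receiver for GitHub push webhooks; Express, the port, and the two test commands are all hypothetical stand-ins for your own setup:

```js
const express = require('express');
const { exec } = require('child_process');

const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // GitHub labels each delivery with an event name in this header.
  if (req.headers['x-github-event'] === 'push') {
    // Run both suites; replace with your real PhantomJS/nose commands.
    exec('phantomjs run-js-tests.js && nosetests', (err) => {
      console.log(err ? 'tests failed' : 'tests passed');
    });
  }
  res.sendStatus(202); // acknowledge the delivery immediately
});

app.listen(8080);
```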
There is a Google script to accurately load test the backend. Unfortunately, I don't know of anything for JS.
In the documentation for App Engine, and in presentations we've given at Google I/O, we have mentioned that you should ramp up slowly when load testing an application on App Engine. Ramping up too quickly won't give an accurate picture of how App Engine scales; you have to accommodate our load-balancing code, which determines how many instances of your application to spin up by watching how much traffic is directed to it. That monitoring and adjustment takes time, hence the need to avoid ramping up too quickly.
I've looked at various load-testing tools and in the end wrote my own short script in Python, which I use as a base for all my load testing. This isn't to say that what I have is better for load testing than the available packages; please look at them and judge them against your own criteria. I'm most comfortable with Python, and a skeleton script that can be tweaked for each test scenario is optimal for me.
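The answer's skeleton was Python, but the slow ramp-up idea itself is simple; here is a sketch of the same pattern in Node, with a hypothetical target URL and step rates:

```js
const https = require('https');

const TARGET = 'https://your-app.appspot.com/'; // hypothetical app URL
const STEPS = [1, 2, 5, 10, 20]; // requests per second, one step at a time
const STEP_SECONDS = 60;         // hold each rate long enough to let
                                 // App Engine's scaling catch up

function hit() {
  https.get(TARGET, (res) => res.resume()).on('error', () => {});
}

STEPS.forEach((rps, i) => {
  setTimeout(() => {
    const timer = setInterval(hit, 1000 / rps);
    setTimeout(() => clearInterval(timer), STEP_SECONDS * 1000);
  }, i * STEP_SECONDS * 1000);
});
```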
There are a ton of questions asking how to mock HTTP responses in Protractor tests. How to do this is not the question; whether we should do this is the question.
http://en.wikipedia.org/wiki/Test_fixture#Software
I've been a QA Engineer for over 4 years, and most of my automated-test experience covers both low-level (unit) tests of controllers, models, etc., and high-level (integration) tests of full systems. In my Ruby-world experience, we used Capybara for integration tests along with blueprint and factorygirl (on different projects) to create mock database entries. This was our integration/E2E testing.
I've only recently moved to a JavaScript team using AngularJS. The original built-in testing framework (now deprecated) had a mock backend module which seemed suitable for our needs. Protractor is now the standard. Only after Protractor gained steam did I hear the backlash against using fixtures for E2E testing. Many posts point out that E2E testing should exercise the full stack, so backends should not be mocked and should be accessible.
Should integration tests use fixtures, and why?
There is a vocabulary problem here. What is called "e2e" testing in the Angular world has nothing to do with end-to-end testing of the whole system: it is end-to-end for the UI part only, which means it is not an e2e test at all. It is UI testing.
Gojko Adzic, in his book "Specification by Example", recommends doing functional, fixture-based testing "below the skin of the application", i.e. without the UI part.
To answer your questions:
- Should UI tests use fixtures? No; use mocks or stubs.
- Should backend tests use fixtures? Yes.
You are asking 2 questions - about the e2e tests and the integration tests. :)
An e2e test, at least in Angular's world, exercises your complete application the way a real user would interact with it. This includes testing your backend requests and responses. However, if that runs slowly and ties up resources, it makes perfect sense to switch to a smaller (or even fake) version of your backend for testing.
An integration test covers a part of your code, and a unit test covers individual units. In both cases, some or all dependencies can be mocked to isolate the tests.
So in all cases using fixtures or mocks can be useful.
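To make the mocking idea concrete, here is a minimal Jest-style sketch; the module names and shapes are hypothetical:

```js
// tasks.js (hypothetical) exports loadTasks(), which calls api.fetchTasks().
jest.mock('./api'); // replace the dependency with auto-generated mocks
const api = require('./api');
const { loadTasks } = require('./tasks');

test('loadTasks delegates to the API layer', async () => {
  api.fetchTasks.mockResolvedValue([{ id: 1 }]);
  await expect(loadTasks()).resolves.toEqual([{ id: 1 }]);
});
```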
See my answer here for more detailed discussion of use cases, advantages and limitations of Karma and Protractor.
Yes, we use ngMockE2E to mock the backend. We then expose some helpers on the window object so we can seed various mock-data states. We also use Sinon to force a specific time when testing date-sensitive UI, so all new Date() calls return what you want.
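A minimal sketch of that setup; the module and endpoint names are hypothetical:

```js
// Wrap the real app module with ngMockE2E for the e2e build.
angular.module('appE2E', ['app', 'ngMockE2E'])
  .run(function ($httpBackend) {
    // Seed a canned response for the tasks endpoint...
    $httpBackend.whenGET('/api/tasks').respond([
      { id: 1, title: 'Buy milk', priority: 'high' },
    ]);
    // ...and let everything else (templates, assets) pass through untouched.
    $httpBackend.whenGET(/.*/).passThrough();
  });

// Pin the clock with Sinon so every new Date() in the app is deterministic.
const clock = sinon.useFakeTimers(new Date('2020-06-01T12:00:00Z').getTime());
```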
I'm facing the same issue here with a personal code project. I'm using the MEAN stack, and my solution will be to:
use Grunt to run the tests.
before starting the Node server, use mongoose-fixtures to set up the MongoDB test DB (https://github.com/powmedia/mongoose-fixtures), as sketched below.
start the Node server with a parameter that makes it use the test DB.
You could take a similar approach on a different stack, although Grunt is very helpful as a general job runner.
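A sketch of the seeding step with mongoose-fixtures; the connection string, model name, and data are all hypothetical, and the load signature follows that project's README:

```js
const mongoose = require('mongoose');
const fixtures = require('mongoose-fixtures');

mongoose.connect('mongodb://localhost/myapp-test');

fixtures.load({
  Task: [
    { title: 'Buy milk', priority: 'high' },
  ],
}, (err) => {
  if (err) throw err;
  // DB is seeded; now start the server against the test DB, e.g.:
  //   NODE_ENV=test node server.js
  process.exit(0);
});
```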