Why does `npm test` take so long on only one test?

After npx create-react-app my-app I run npm test and I get the following:
PASS src/App.test.js
✓ renders learn react link (48ms)
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Snapshots: 0 total
Time: 2.555s
Ran all test suites.
Watch Usage: Press w to show more.
Why does it take 2.5s to run the entire suite, but only 48ms to run the test?
How can I speed up this process?
Thanks!
UPDATE: I'm running this computer: [screenshot of system specs]

The answer is pretty simple: if your test takes 48ms and the entire suite takes 2.555 seconds, then there is other code being run that eats up the remaining 2.507 seconds.
In other words, the code that has to run before and after this one test takes significantly longer than the test itself; you just don't see it without looking at Jest's own framework code.
That's why the runtime of the suite (2.5s) is longer than the sum of the tests (48ms).
To make testing simpler (as well as easier) and more automatic, there is a lot of "behind the scenes" code that allows Jest to:
- let you write your tests the way you do, i.e. in a way that is easy to understand and write, but not necessarily how they will actually execute; they get compiled to something else first
- find the files that are needed for each test; it takes some time to gather all the specified files
- compile those files (e.g. using webpack/Babel) according to what you've specified in your build
- build a testing environment to run your code in, as Jest emulates a browser to keep your code close to what it would run on in production
- run each test
- then output the results
An analogy would be a ride-share: there is a minimum base fare just to get in the car, and on top of that a per-minute or per-mile cost. Here, it costs you $2.507 just to get in, plus 4.8 cents for the ride itself.
Once your testing environment has been set up at the beginning, your subsequent tests will not each require 2.5 seconds to run.
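To see the fixed cost in action, add a second trivial test: the suite total barely moves, because the ~2.5s of setup is paid once per run, not once per test. A minimal sketch, assuming the default create-react-app test file:

```js
// src/App.test.js: the stock CRA test plus one trivial extra test.
import { render, screen } from '@testing-library/react';
import App from './App';

test('renders learn react link', () => {
  render(<App />);
  const linkElement = screen.getByText(/learn react/i);
  expect(linkElement).toBeInTheDocument();
});

// This test finishes in about a millisecond, yet the suite total stays
// around 2.5s: the startup cost dominates, not the tests themselves.
test('adds numbers', () => {
  expect(1 + 1).toBe(2);
});
```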
Useful links for Jest related to timing:
- Setup and Teardown for Tests
- Jest-Webpack setup
- Caching Issues with Jest
- Jest Architecture
- Memory Leakage
Ways that you can speed up your tests are to:
- make sure your webpack/Babel configuration works properly with Jest,
- use setup and teardown in an intelligent manner,
- (optional/selective) run your tests in Node rather than Jest's default jsdom environment; the downside is that your React tests will fail there, so this only works for Node-only (server-side) tests (see the sketch after this list),
- look up how to write efficient tests, as optimized test code makes a difference; here is one such article that provides some insight.
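For the Node-environment option above, Jest supports a per-file docblock pragma, so individual non-DOM suites can opt out of jsdom without changing the global config. A minimal sketch (the test itself is just a placeholder):

```js
/**
 * Run only this file in the lighter Node environment instead of jsdom.
 * Safe only for tests that never touch the DOM or React rendering.
 * @jest-environment node
 */
test('pure logic needs no browser emulation', () => {
  expect([1, 2, 3].reduce((sum, n) => sum + n, 0)).toBe(6);
});
```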
Another non-code option is to use a better/faster CPU or a more efficient operating system.
Your 2.555s with the default create-react-app is many times faster than my 6.17s on Windows 10 (Asus GL502VMK).
Running it on an Ubuntu 20 desktop VM (VirtualBox on the same Windows 10 machine mentioned above, with 8 of the host's 12GB of RAM and 4 of its 8 CPUs), it takes 2.105s.
You can find a lot of articles complaining about Jest's execution time, so this doesn't seem to be just you: parts of Jest are simply not fast.

Related

Selenium WebDriver without a Test Runner?

I'm not sure if this question will be closed for being too novice, but I thought I'd give it a shot anyway.
I am currently working on a Selenium automation framework which, though seemingly well built, runs its code by spawning threads. (The framework is proprietary, so I'm unable to share the code.)
Instead of using a test framework like JUnit or TestNG to run "tests", this framework takes a threaded approach: the methods that read the datasheet, instantiate and execute the drivers, report the results, and so on are executed by starting a thread whose class is instantiated at various places in the code at runtime.
My concern is: though it runs fine locally, producing reports and so on, because it does not operate through a test runner it is unable to pass or fail a "test".
Therefore, in a build pipeline, no "tests" would be executed, since there are no tests to speak of, which makes it lose its value in CI/CD as far as reporting build success or failure is concerned.
Am I justified or unjustified in my concerns? Why? And is there a workaround for this? At what ROI?
Resources or links shall be welcomed and beer shall be owed!! :-)
Cheers,
Danesh

Why do Selenium tests that pass locally fail on Browserstack specifying exact same browser?

I got a test that opens a webpage and does scraping.
It works. There's no question on that:
- Works on PhantomJS/Chrome/Firefox every time when run on my machine.
However, when run on BrowserStack, the test SOMETIMES passes and SOMETIMES fails with different errors. (Eventually I want to cover the 5 most popular browsers, several OSes, and even mobile devices; for the moment I specify the exact same browser and platform as on my machine, to first ensure the test runs properly on BrowserStack.) The errors include:
- Stale element
- No such element in cache
- Page fails to load after a submit
- etc
And it is almost never the same element or submit that fails.
This makes me wonder whether BrowserStack has some inherent instability I'm not aware of. Has anyone seen this happen on BrowserStack?
Welcome to BrowserStack. You get such errors because the environments on BrowserStack do lag a lot; they don't give their VMs many resources, so you will have to deal with it, or add a lot of thread sleeps and special waits suited to your needs (a sketch follows).
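If you go the waits route, explicit waits are generally more reliable than fixed sleeps on a laggy remote grid. A hedged sketch using the selenium-webdriver Node bindings (the question doesn't say which language bindings are in use, so treat this as illustrative):

```js
const { By, until } = require('selenium-webdriver');

// Wait for the element to exist and become visible before interacting,
// instead of assuming the remote browser has caught up. The 10s timeout
// is arbitrary; tune it to how much your BrowserStack sessions lag.
async function clickWhenReady(driver, cssSelector) {
  const el = await driver.wait(until.elementLocated(By.css(cssSelector)), 10000);
  await driver.wait(until.elementIsVisible(el), 10000);
  await el.click();
  return el;
}
```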

profiling memory leaks in karma-runner/jasmine

I have an AngularJS application with about 2000 unit tests which consume a lot of memory. Once launched, the tests run pretty fast (in Chrome) until memory consumption hits 1.5GB, at which point every test starts to take about 3 seconds.
Now, I'm pretty damn sure this is not related to Why are my AngularJS, Karma / Jasmine tests running so slowly?.
At this point I have no idea if it's the tests that are leaking or the application itself. I would like to profile the test execution.
I've read unit-tests karma-runner/jasmine profiling and am trying to do the following:
You can use localhost:9876/debug.html and profile the memory. Check the memory before executing (after Jasmine executed all the describe() blocks and collected the tests) and then after executing the tests - it should be the same.
But how can this be done?
I don't really understand how it is possible to check before and after. Can I somehow pause the test execution? Is Jasmine able to tell me when it has "collected the tests" and then wait for me to do the profiling?
Or is there any other approach?
This is not a full answer but just "thinking out loud"...
I would start by isolating a single suite first.
Then I'd have a look at the Chrome Console API, so focus on one browser only for the moment.
Now, in each beforeEach or afterEach, start and stop the profiler, using a suite + test name as the label for each profile: see the console.profile([label]) and console.profileEnd() calls.
At this point you don't need to pause anything to run the profiling; at the end of the test run you'll have all the results (with labels).
Once you've found the place where the memory goes up, you can focus on that area and probably start debugging in a more specific way...
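A minimal sketch of that idea with Jasmine hooks and Chrome's Console API; the label here is just a counter, since how to get the current spec name depends on your Jasmine version, so treat the naming as an assumption:

```js
// Start one profile per spec so the DevTools results map back to
// individual tests. Run the suite via Karma's debug page in Chrome.
let specIndex = 0;

beforeEach(function () {
  specIndex += 1;
  console.profile('spec-' + specIndex);
});

afterEach(function () {
  // With no argument, this ends the most recently started profile.
  console.profileEnd();
});
```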
We are seeing similar issues in recent Chrome, though we are using Mocha. Interestingly, you can set a debugger statement and the memory still climbs... This makes me think it's not our code or even JS heap size; it seems like a browser bug?

Lighthouse (Silverlight Unit Test Runner) hangs then performs no tests -- why?

We are using Lighthouse to run unit tests on Silverlight 4 .xap files.
Regularly, but seemingly randomly, on our build server it does the following:
10:18:08 C:\Program Files (x86)\Jenkins\jobs\******\workspace>Lighthouse.exe "******\Bin\Release\******.xap" "TestResults\******.xml"
10:18:10 Test Results file name: TestResults\******.xml
10:18:10 Sending signal to Lighthouse Test Executor to start executing tests.
10:21:54 Lighthouse v1.01 (c) 2011 - Remote Unit Test Run Started.
10:21:54 Total Test Assemblies: 1 Total Test Methods: 61.
10:21:55 Testing results saved to file: TestResults\******.xml
10:21:55 Total Tests: 61 | Tests Passed: 0. | Tests Failed: 0
10:21:55 Exiting (-1) because no Unit Tests were executed - this can't be right, right?
So it hangs for about 4 minutes, says the run has started, then runs no tests and immediately stops.
I cannot find any clue as to what is going wrong; this also occurs when no other build is running in parallel, and on developers' machines the tests execute fine.
Update: After a reboot of our build server, the first Lighthouse test failed, and from then on all others seem to succeed. This feeds my suspicion that some process is holding on to a resource that Lighthouse needs.
Update: For completeness: without making any changes to the code or the tests, Lighthouse sometimes succeeds and sometimes fails for me. As can be seen from the console output, it is very likely that Lighthouse does not even start any tests: "Tests Passed" and "Tests Failed" are both 0.
Does anyone have any clue where to start looking for a possible cause?
Thanks!!
(I'm not tagging this question with lighthouse to prevent confusion with more well-known tools of the same name.)
To determine whether it is an environmental issue or a code issue, check out your source code from last month and run Lighthouse multiple times to see how often the failure occurs.
Perhaps some faulty unit test logic has been checked in?
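One way to quantify that, sketched as a small Node script; the executable path, .xap path, and run count are all illustrative, not taken from the build output above:

```js
// Run Lighthouse repeatedly against a known-good (last month's) build
// and count non-zero exits, to separate a flaky environment from code.
const { execFileSync } = require('child_process');

const runs = 20; // arbitrary sample size
let failures = 0;

for (let i = 0; i < runs; i++) {
  try {
    execFileSync('Lighthouse.exe',
      ['Bin\\Release\\App.xap', `TestResults\\run-${i}.xml`],
      { stdio: 'inherit' });
  } catch (err) {
    failures += 1; // Lighthouse exits -1 when no tests were executed
  }
}
console.log(`${failures}/${runs} runs failed`);
```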

command line support for KIF for running tests on real device

I am using KIF to test my application and want to start my tests from the command line. I looked into the WaxSim tool, but it appears to be for running tests on a simulator. Is there a way to use KIF tests in continuous integration with a real device? It would be helpful if I could invoke the tests from the command line and have them run on a real device.
I know it is possible to do this with Apple's UI Automation on the iOS 5 beta, but let me know if there is a way to do this on iOS 4.
Your help will be much appreciated.
-Teja
From the KIF Google group:
Right now, no, there isn't. Are there any particular device-only needs you have, or is it just on general principle? We're looking in to a way of doing device tests in CI, but it's a tough nut to crack. All of the frameworks for controlling devices are private.
