tape/jest not recognizing test case - reactjs

Created an app using create-react-app.
I set up a test file with the suggested convention, but when I run npm test it seems my test case is not recognized.
FAIL src\_tests_\saga.test.js
● Test suite failed to run
Your test suite must contain at least one test.
at onResult (node_modules\jest\node_modules\jest-cli\build\TestRunner.js:189:18)
at process._tickCallback (internal\process\next_tick.js:103:7)
Test Suites: 1 failed, 1 total
Tests: 0 total
Snapshots: 0 total
Time: 1.941s
Ran all test suites.
Watch Usage
> Press o to only run tests related to changed files.
> Press p to filter by a filename regex pattern.
> Press q to quit watch mode.
> Press Enter to trigger a test run.
TAP version 13
# testflow Saga test
ok 1 take testflow
ok 2 test state should be false
ok 3 Must call bye
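For context (an editorial note, not part of the original question): Create React App runs tests with Jest, and Jest only registers tests declared through its own test()/it() globals. tape's test() calls emit the TAP output above but register nothing with Jest, which is why Jest reports "Your test suite must contain at least one test." One hedged workaround, assuming tape is installed as a devDependency, is to keep tape specs out of Jest's default *.test.js pattern and run them with tape's own runner:

```shell
# Sketch (assumes tape is a devDependency; file names are from the question):
mv src/_tests_/saga.test.js src/_tests_/saga.tape.js   # Jest no longer matches it
npx tape "src/_tests_/saga.tape.js"                    # tape prints the TAP results itself
```

Alternatively, the tape assertions can be rewritten as Jest tests so a single runner handles everything.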

Related

Unit testing with Jest keeps on running in Looper CI

Running unit tests with Jest in Looper CI for a React project. The unit-test job is triggered in Looper, all the unit tests run, and coverage info appears in the logs. I get the message "Ran all test suites", but Looper still does not exit the unit-test job; it keeps running. How do I exit this job and kick off the next one, say Sonar coverage?
Please find the logs from CI (looper)
14:59:03 Test Suites: 10 failed, 23 passed, 33 total
14:59:03 Tests: 29 failed, 90 passed, 119 total
14:59:03 Snapshots: 2 failed, 2 total
14:59:03 Time: 39.887s
14:59:03 Ran all test suites.
and it gets stuck here and times out.
I tried --forceExit, --detectOpenHandles, etc.; nothing worked.
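For reference (an editorial sketch, not from the original post): Jest's flag for diagnosing a hanging run is --detectOpenHandles, and --forceExit makes the process exit once the suites finish even if handles remain open. A typical CI invocation combining them might look like:

```shell
# Sketch of a CI-friendly Jest invocation (these are real Jest CLI options):
CI=true npx jest --ci --runInBand --detectOpenHandles --forceExit
# --detectOpenHandles reports what is keeping the Node event loop alive
# (open sockets, timers, DB connections), so the leak can be closed properly;
# --forceExit is the blunt fallback that terminates the process regardless.
```

If --forceExit alone did not help, the job runner may be waiting on a child process or an unflushed log stream rather than on Jest itself.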

How do I fix this package testing error in Julia?

When I test the NTFk package with the command Pkg.test("NTFk"), I get the error below.
ERROR: LoadError: Some tests did not pass: 1 passed, 1 failed, 0 errored, 0 broken.
in expression starting at C:\Users\lff19\.julia\packages\NTFk\bvyOe\test\runtests.jl:17
ERROR: Package NTFk errored during testing
For Test.jl, Pkg looks for PACKAGENAME/test/runtests.jl within the package directory.
A "passed" test is self-explanatory.
A "failed" test means a test produced an unexpected value.
An "errored" test means the test could not be executed at all; it raised an error instead.
A "broken" test refers to a known failing test. Marking a test as "broken" (with @test_broken) means its expected failure is not counted as a "fail".
So, the 1 failing test is just a single failure in the package's runtests.jl file. It is not a problem with your Pkg.test("NTFk") command; it is a problem within the package's source code. It should be relatively simple to figure out which test fails from the error in your console's output.
Realistically, it is the developer's responsibility to fix the test case. However, you could just as well "dev" the package (] dev PACKAGENAME), effectively making yourself the maintainer of your local copy, and go into runtests.jl and fix it yourself. Note that "dev"ing a package moves it to ~/.julia/dev.
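The "dev" workflow can be sketched from the shell (an illustration only, assuming Julia is installed; NTFk is the package from the question):

```shell
# Sketch: take over a local copy of the package and re-run its tests.
julia -e 'using Pkg; Pkg.develop("NTFk")'   # clones the package into ~/.julia/dev/NTFk
# ...edit ~/.julia/dev/NTFk/test/runtests.jl to fix the failing @test...
julia -e 'using Pkg; Pkg.test("NTFk")'      # tests now run against your dev copy
```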

Jest should only run on changed files for pre-commit hook

I have a pre-commit hook set up using Jest and the --onlyChanged flag. However, sometimes my entire test suite (800 tests!) still runs even if I made a change in a single file.
I looked into some other jest flags like
--lastCommit Run all tests affected by file changes in
the last commit made. Behaves similarly to
`--onlyChanged`.
--findRelatedTests Find related tests for a list of source
files that were passed in as arguments.
Useful for pre-commit hook integration to
run the minimal amount of tests necessary.
--changedSince Runs tests related to the changes since the
provided branch. If the current branch has
diverged from the given branch, then only
changes made locally will be tested. Behaves
similarly to `--onlyChanged`. [string]
Yet they all have the same problem. When doing some digging, I learned that
under the hood "If the found file is a test file, Jest runs it, simple enough. If the found file is a source file, call it found-file.js, then any test files that import found-file.js and the test files that import any of the source files that themselves import found-file.js will be run."
I'm working on a project that's relatively new to me. I'm wondering if it's possible for me to get my pre-commit hook to ONLY run the edited test, not all affected tests, or if there is a way for me to track down this tree of "transitive inverse dependencies" and try to solve the problem with different imports or something.
Here is an example of some output from trying --findRelatedTests:
Test Suites: 2 failed, 309 passed, 311 total
Tests: 2 failed, 803 passed, 805 total
Snapshots: 308 passed, 308 total
Time: 102.366 s
Ran all test suites related to files matching /\/Users\/me\/repo\/project\/src\/modules\/dogs\/components\/spots\/SpotsSpotter.tsx/i.
> @dogsapp/project@1.0.0 test:staged
> jest --findRelatedTests --passWithNoTests "/Users/me/repo/project/src/modules/dogs/components/spots/SpotsSpotter.tsx"
ERROR: "husky:lint-staged" exited with 1.
husky - pre-commit hook exited with code 1 (error)
It's taking WAY too long when I just made a simple change in one file. Anyone know how I can track down why this is happening?
It seems like something similar was addressed here for the --watch flag: https://www.gitmemory.com/issue/facebook/jest/8276/483695303
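One way to get the behavior asked for here (a sketch, not from the thread; it assumes a git repo, Jest, and the usual *.test.ts naming) is to bypass Jest's related-test expansion entirely and pass only the staged test files themselves:

```shell
# Pre-commit sketch: run only test files that are themselves staged,
# skipping Jest's transitive "inverse dependency" lookup.
STAGED_TESTS=$(git diff --cached --name-only -- '*.test.ts' '*.test.tsx')
if [ -n "$STAGED_TESTS" ]; then
  npx jest --bail $STAGED_TESTS
else
  echo "no staged test files; skipping jest"
fi
```

The trade-off is real: a change in a source file will no longer run the tests that cover it, so this is only safe if CI still runs the full suite.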

Even though I have installed cucumber-tag-expressions 3.0.0, behave commands with the "and" and "or" operators are not working

Resources:
Repository: https://github.com/anton-lin/tag-expressions-python
Behave documentation for tags: https://behave.readthedocs.io/en/latest/tag_expressions.html
Package name: https://pypi.org/project/cucumber-tag-expressions/
My Feature File
#regression
Feature: showing off behave

  #slow
  Scenario: run a slow test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!

  #wip
  Scenario: run a wip test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!

  #wip #slow
  Scenario: run a wip and slow test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!
I tried the commands below; none of them work, and zero scenarios are run:
behave --tags="not #slow" .\features\tutorial.feature
behave --tags="#slow and #wip" .\features\tutorial.feature
behave --tags="#slow or #wip" .\features\tutorial.feature
A command with a single tag works fine and executes only the scenarios with that specific tag.
I get the outcome below with all three commands:
#regression
Feature: showing off behave # features/tutorial.feature:2
#slow
Scenario: run a slow test # features/tutorial.feature:5
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
#wip
Scenario: run a wip test # features/tutorial.feature:11
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
#wip #slow
Scenario: run a wip and slow test # features/tutorial.feature:17
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
0 features passed, 0 failed, 1 skipped
0 scenarios passed, 0 failed, 3 skipped
0 steps passed, 0 failed, 9 skipped, 0 undefined
Took 0m0.000s
'and'/'or' usage as you have attempted is not available in python-behave 1.2.6.
For 1.2.6 try instead:
OR example:
behave --tags #slow,#wip
AND example:
behave --tags #slow --tags #wip
Negative tag example:
behave --tags ~#slow
See further docs with
behave --tags-help
The reason for the confusion is that when you install behave, by default you pull the most recent stable version from places like pypi.org (so 1.2.6), while the behave docs refer to the "latest" version (1.2.7, which has been in development for quite some time now).
The same issue was raised in GitHub and closed a while ago.
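To recap the 1.2.6-compatible commands for the three cases in the question, plus one route to the newer syntax (a sketch; the GitHub install is a common suggestion, not verified here). Also worth double-checking: in Gherkin, tag lines start with @ (e.g. @slow), while lines starting with # are comments, so the #slow lines in the feature file may not register as tags at all.

```shell
# behave 1.2.6 tag filtering (comma = OR, repeated --tags = AND, ~ = NOT):
behave --tags=@slow,@wip features/tutorial.feature        # slow OR wip
behave --tags=@slow --tags=@wip features/tutorial.feature # slow AND wip
behave --tags=~@slow features/tutorial.feature            # NOT slow
# The 'and'/'or' expression syntax in the docs targets the in-development
# 1.2.7 line; one option is installing it from source:
# pip install git+https://github.com/behave/behave
```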

How to get a code coverage report with react-create-app?

If I run
npm test --coverage
the tests pass, but coverage does not run.
When I change package.json to use
react-scripts test --coverage
and then run npm test, the tests run along with a coverage report, but the report shows all zeroes:
PASS src/components/Header/SubHeader.test.js
The <SubHeader /> component
✓ should render (4ms)
✓ should render with a goback button (1ms)
✓ should render
----------|----------|----------|----------|----------|-------------------|
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
----------|----------|----------|----------|----------|-------------------|
All files | 0 | 0 | 0 | 0 | |
----------|----------|----------|----------|----------|-------------------|
Test Suites: 1 passed, 1 total
Tests: 3 passed, 3 total
Snapshots: 3 passed, 3 total
Time: 1.077s
Ran all test suites.
Finally, I realized that I can do
npm run test .
Question: Why don't the other methods work?
After revisiting this question, I figured out that if you run the command below, you will trigger coverage and get full results.
SOLUTION 1
npm run test -- --coverage .
When you pass arguments to an npm script, you need to add -- before arguments like --coverage. This is just how npm works. See this answer for passing arguments to npm.
SOLUTION 2
yarn test --coverage .
EDIT:
I asked a question regarding . being passed down to the command, and the answer is pretty simple: . is passed through because it is a positional parameter, in contrast to --coverage, which is an option of the npm test command: Why is . passed as argument to npm without -- delimiter?
---- Problem analysis ----
PROBLEM 1
With npm test --coverage, you are not passing the --coverage argument to the test script.
In the other case, when you edit the script to be react-scripts test --coverage and then run npm test, you do add the --coverage argument, but inside the script itself, not from the npm command.
PROBLEM 2
When you do npm run test ., you are passing ., which means you want to include all files in this command.
The . is passed through to the test script because it is a positional argument, while --coverage is an option, which npm consumes itself.
Yarn, on the other hand, passes all arguments through without needing --.
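The npm behavior described above can be illustrated with a stand-in shell function ('printargs' is hypothetical; it plays the role of the test script). Everything after -- reaches the script verbatim, while option-style arguments before it are consumed by npm itself:

```shell
# 'printargs' stands in for an npm test script and just echoes its arguments.
printargs() { echo "script got: $*"; }

# npm test --coverage          -> npm consumes --coverage; script sees nothing
# npm run test -- --coverage . -> npm forwards both; script sees them as below
printargs --coverage .
# prints: script got: --coverage .
```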
