I've recently come across an interesting/peculiar side effect and wanted to get the opinion of the experienced members here.
When I run the command npm run test -- --testPathPattern="filePath" --coverage, I get coverage info like the following:
Statements : 37.85% ( 5810/15350 )
Branches : 7.2% ( 547/7596 )
Functions : 10.66% ( 309/2898 )
Lines : 42.1% ( 5751/13660 )
================================================================================
5810 lines being run and 547 branches being tested is huge. I dug into this and realized, to an extent, why it is happening.
All files in the import tree are being run!
This could potentially be why CI takes so long and testing gets heavy.
Can I get any pointers on fixing this, please?
It's not "all files in the import tree", it's all files period. The --coverage options collects the coverage stats of your codebase with tests, also generates a coverage report, that you probably don't need.
If this is a concern in the CI pipeline don't use the coverage option, just run the tests.
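If you do want coverage numbers but only for the code you are actually exercising, Jest's collectCoverageFrom option limits which files get instrumented. A minimal sketch, with a placeholder glob you would replace with your own paths:

// jest.config.js -- sketch only; the glob below is a placeholder for your source layout
module.exports = {
  collectCoverage: false, // keep coverage off by default; enable per run with --coverage
  collectCoverageFrom: ['src/feature-under-test/**/*.js'], // only instrument these files
  coveragePathIgnorePatterns: ['/node_modules/'],
};

With that in place, a coverage run only reports on the matched files instead of the entire codebase.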
When I test the NTFk package with the command Pkg.test("NTFK"), I'm getting the error below.
ERROR: LoadError: Some tests did not pass: 1 passed, 1 failed, 0
errored, 0 broken. in expression starting at
C:\Users\lff19.julia\packages\NTFk\bvyOe\test\runtests.jl:17 ERROR:
Package NTFk errored during testing
For Test.jl, Pkg.test looks for PACKAGENAME/test/runtests.jl inside the package directory.
A "passed" test is self-explanatory.
A "failed" test means a test resulted in an unexpected value.
An "errored" test means the test could not be executed at all; it threw an error instead.
A "broken" test refers to a known failing test. Marking a test as "broken" means its "fail" status is expected and ignored.
So, the 1 failing test is just a single failure in the project's runtests.jl file. It is not a problem with your Pkg.test("NTFK") command; it is a problem within the package's source code. It should be relatively simple to figure out which test fails from the error in your console output.
Realistically, it is the developer's responsibility to fix the test case. However, you could just as well "dev" the package (] dev PACKAGENAME), effectively making yourself the maintainer of your local copy, and go into runtests.jl and fix it yourself. Note that "dev"ing a package will place it in ~/.julia/dev.
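For reference, here is a minimal, generic runtests.jl sketch (not NTFk's actual tests) showing how each of those statuses is produced:

using Test

@testset "status examples" begin
    @test 1 + 1 == 2            # passed: the expression is true
    @test 1 + 1 == 3            # failed: the expression evaluates to an unexpected value
    @test error("boom") == 2    # errored: the expression throws instead of returning a value
    @test_broken 1 + 1 == 3     # broken: a known failure, reported but not counted as a fail
end

Running Pkg.test on a package containing something like this would report one of each status in the summary line.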
I have a pre-commit hook set up using Jest and the --onlyChanged flag. However, sometimes my entire test suite will still run (800 tests!) even if I made a change in a single file.
I looked into some other Jest flags, like:
--lastCommit Run all tests affected by file changes in
the last commit made. Behaves similarly to
`--onlyChanged`.
--findRelatedTests Find related tests for a list of source
files that were passed in as arguments.
Useful for pre-commit hook integration to
run the minimal amount of tests necessary.
--changedSince Runs tests related to the changes since the
provided branch. If the current branch has
diverged from the given branch, then only
changes made locally will be tested. Behaves
similarly to `--onlyChanged`. [string]
Yet they all have the same problem. After doing some digging, I learned that
under the hood, "If the found file is a test file, Jest runs it, simple enough. If the found file is a source file, call it found-file.js, then any test files that import found-file.js and the test files that import any of the source files that themselves import found-file.js will be run."
I'm working on a project that's relatively new to me. I'm wondering whether it's possible to get my pre-commit hook to ONLY run the edited test, not all affected tests, or whether there is a way for me to track down this tree of "transitive inverse dependencies" and try to solve the problem with different imports or something.
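To illustrate the kind of chain I mean (completely made-up files, not my actual code), a single edited component can drag in tests that never import it directly:

// spots/SpotsSpotter.tsx -- the file I actually edited
export const spotCount = 3;

// dogs/index.ts -- a barrel file that re-exports it
export { spotCount } from './spots/SpotsSpotter';

// kennel/Kennel.test.tsx -- never imports SpotsSpotter directly,
// but reaches it through the barrel, so --findRelatedTests runs it too
import { spotCount } from '../dogs';
test('kennel fits the dogs', () => expect(spotCount).toBeLessThan(10));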
Here is an example of some output from trying --findRelatedTests:
Test Suites: 2 failed, 309 passed, 311 total
Tests: 2 failed, 803 passed, 805 total
Snapshots: 308 passed, 308 total
Time: 102.366 s
Ran all test suites related to files matching /\/Users\/me\/repo\/project\/src\/modules\/dogs\/components\/spots\/SpotsSpotter.tsx/i.
> #dogsapp/project#1.0.0 test:staged
> jest --findRelatedTests --passWithNoTests "/Users/me/repo/project/src/modules/dogs/components/spots/SpotsSpotter.tsx"
ERROR: "husky:lint-staged" exited with 1.
husky - pre-commit hook exited with code 1 (error)
It's taking WAY too long when I just made a simple change in one file. Anyone know how I can track down why this is happening?
It seems like something similar was addressed here for the --watch flag: https://www.gitmemory.com/issue/facebook/jest/8276/483695303
Resources:
Repository:
https://github.com/anton-lin/tag-expressions-python
Behave Documentation for tags:
https://behave.readthedocs.io/en/latest/tag_expressions.html
Package name:
https://pypi.org/project/cucumber-tag-expressions/
My Feature File
@regression
Feature: showing off behave
@slow
Scenario: run a slow test
Given we have behave installed
When we implement a test
Then behave will test it for us!
@wip
Scenario: run a wip test
Given we have behave installed
When we implement a test
Then behave will test it for us!
@wip @slow
Scenario: run a wip and slow test
Given we have behave installed
When we implement a test
Then behave will test it for us!
I tried the following commands, but none of them work: the result is that zero scenarios are run.
behave --tags="not @slow" .\features\tutorial.feature
behave --tags="@slow and @wip" .\features\tutorial.feature
behave --tags="@slow or @wip" .\features\tutorial.feature
A command with a single tag works fine and executes only the scenarios with that specific tag.
I get the outcome below with all three commands:
@regression
Feature: showing off behave # features/tutorial.feature:2
@slow
Scenario: run a slow test # features/tutorial.feature:5
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
@wip
Scenario: run a wip test # features/tutorial.feature:11
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
@wip @slow
Scenario: run a wip and slow test # features/tutorial.feature:17
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
0 features passed, 0 failed, 1 skipped
0 scenarios passed, 0 failed, 3 skipped
0 steps passed, 0 failed, 9 skipped, 0 undefined
Took 0m0.000s
The 'and'/'or' tag-expression syntax you have attempted is not available in behave 1.2.6.
For 1.2.6 try instead:
OR example:
behave --tags @slow,@wip
AND example:
behave --tags @slow --tags @wip
Negative tag example:
behave --tags ~@slow
See further docs with
behave --tags-help
The reason for the confusion is that when you install behave, by default you pull the most recent stable version from places like pypi.org (so 1.2.6), while the behave docs refer to the "latest" version (1.2.7, which has been in development for quite some time now).
The same issue was raised and closed a while ago on GitHub.
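If you are unsure which version you actually have installed (and therefore which docs apply), checking it directly should settle it:

behave --version
pip show behave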
Tests ignored: 14, passed: 0
Whenever I run any test class I get this type of message:
Test ignored.
Test method AccountAddressHelperTest.testInvalidBillingCountry was
never reported as completed. Trigger.AllOppLineItemTriggers: line 188,
column 31: Method does not exist or incorrect signature: void
updateMISROnOpportunity(Map<Id,OpportunityLineItem>,
Map<Id,OpportunityLineItem>) from the type OppLineItemHelper
You have compilation failures. What happens if you hit Setup -> Classes -> Compile all?
You or a colleague edited something, probably in OppLineItemHelper or some other file it depends on. Now not everything compiles and dependencies aren't met, so the best thing Salesforce can do is skip these tests.
You can make edits like that in sandboxes; SF will not always block you. But compile errors will prevent deployment to production even before any tests are run.
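Based purely on the compile error (the method name and parameter types come from the error message; the body and the static modifier are hypothetical), the trigger expects something like this to exist in OppLineItemHelper:

public class OppLineItemHelper {
    // Signature reconstructed from the compile error; the implementation is a placeholder.
    // Declared static here; adjust if the trigger calls it on an instance.
    public static void updateMISROnOpportunity(
            Map<Id, OpportunityLineItem> newItems,
            Map<Id, OpportunityLineItem> oldItems) {
        // ... restore the original logic that was removed or renamed
    }
}

Restoring a method with that exact signature, or fixing the call site in Trigger.AllOppLineItemTriggers, should let everything compile again so the tests actually run.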
Running chef-solo (Installing Chef Omnibus (12.3)) on CentOS 6.6.
My recipe has the following simple code:
package 'cloud-init' do
action :install
end
log 'rpm-qi' do
message `rpm -qi cloud-init`
level :warn
end
log 'yum list' do
message `yum list cloud-init`
level :warn
end
But it outputs the following:
- install version 0.7.5-10.el6.centos.2 of package cloud-init
* log[rpm-qi] action write[2015-07-16T16:46:35+00:00] WARN: package cloud-init is not installed
[2015-07-16T16:46:35+00:00] WARN: Loaded plugins: fastestmirror, presto
Available Packages
cloud-init.x86_64 0.7.5-10.el6.centos.2 extras
I am at a loss as to why rpm/yum (and rpmquery, for that matter) don't see the package as installed.
EDIT: To clarify, I am specifically looking for the following file path after the package is installed, so that I can then apply a change to that file (I understand this is not a very Chef-like way to do things; I am happy to accept suggestions):
rpmquery -l cloud-init | grep 'distros/__init__.py$'
I have found that by using the following:
install_report = shell_out('yum install -y cloud-init').stdout
cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
I can then get the file I am looking for and perform:
Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
The file's location varies by distribution, but I need to edit that specific file in place.
Untested code, just to give the idea:
package 'cloud-init' do
  action :install
  notifies :run, "ruby_block[update_cloud_init]"
end
ruby_block 'update_cloud_init' do
  action :nothing  # only runs when notified by the package resource
  block do
    cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
    rc = Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
    rc.search_file_replace_line(/^what to find$/,
      "replacement data for the line")
    rc.write_file
  end
end
ruby_block example taken and adapted from here
I would rather use a template to manage the whole file; what I don't understand is why you don't know where it will be in the first place...
Previous answer
I assume it's a compile vs. converge problem: at the time the message is stored (and so your command is executed), the package is not yet installed.
Chef runs in two phases: compile, then converge.
At compile time it builds a collection of resources, and at converge time it executes code for each resource to get it into the described state.
When your log resource is compiled, the ugly backticks are evaluated; at that point there is a package resource in the collection, but it has not been executed yet, so the output you see is correct.
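If you really do want to log the command output, a minimal untested sketch using lazy attribute evaluation would defer the backticks from compile time to converge time, after the package resource has actually run:

package 'cloud-init' do
  action :install
end

log 'rpm-qi' do
  # lazy {} delays evaluating the message until the log resource converges,
  # so rpm sees the package that was just installed
  message lazy { `rpm -qi cloud-init` }
  level :warn
end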
I don't understand what you want to achieve with those log resources at all.
If you want to test your node's state after the Chef run, use a handler, perhaps calling ServerSpec as in Test-Kitchen.