How do I fix this package testing error in Julia?

When I test the NTFk package with the command Pkg.test("NTFk"), I get the error below.
ERROR: LoadError: Some tests did not pass: 1 passed, 1 failed, 0
errored, 0 broken. in expression starting at
C:\Users\lff19\.julia\packages\NTFk\bvyOe\test\runtests.jl:17 ERROR:
Package NTFk errored during testing

With Test.jl, Pkg.test runs the package's test/runtests.jl file, in this case the ...\NTFk\bvyOe\test\runtests.jl path shown in the error.
A "passed" test is self-explanatory: the expression evaluated to the expected value.
A "failed" test means the test expression evaluated to an unexpected value.
An "errored" test means the test could not be evaluated at all; it threw an error instead.
A "broken" test refers to a known failing test. Marking a test as broken (with @test_broken) means its failure is expected and is reported as "broken" instead of "failed".
So, the one failing test is just a single failure in the project's runtests.jl file. It is not a problem with your Pkg.test("NTFk") command; it is a problem within the package's own code. It should be relatively simple to figure out which test fails from the error output in your console.
Realistically, it is the developer's responsibility to fix the test case. That said, you could just as well "dev" the package (] dev PACKAGENAME), effectively making yourself the maintainer of your local copy, and go into runtests.jl and fix it yourself. Note that "dev"ing a package will move it to ~/.julia/dev.
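As a minimal sketch (using made-up tests, not NTFk's actual suite), this is how the four statuses come about in a runtests.jl:

using Test

@testset "status examples" begin
    @test 1 + 1 == 2            # passed: evaluates to true
    @test 1 + 1 == 3            # failed: evaluates, but to an unexpected value
    @test sqrt(-1) == 0         # errored: throws a DomainError, so it never evaluates
    @test_broken 1 + 1 == 3     # broken: a known failure, reported as broken rather than failed
end

Running this file reports 1 passed, 1 failed, 1 errored and 1 broken, mirroring the summary format in your error message.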

Related

Jest should only run on changed files for pre-commit hook

I have a pre-commit hook set up using jest and the --only-changed flag. However, sometimes my entire test suite will still run (800 tests!) even if I made a change in a single file.
I looked into some other jest flags like
--lastCommit        Run all tests affected by file changes in the last commit made. Behaves similarly to `--onlyChanged`.
--findRelatedTests  Find related tests for a list of source files that were passed in as arguments. Useful for pre-commit hook integration to run the minimal amount of tests necessary.
--changedSince      Runs tests related to the changes since the provided branch. If the current branch has diverged from the given branch, then only changes made locally will be tested. Behaves similarly to `--onlyChanged`. [string]
Yet they all have the same problem. When doing some digging, I learned that
under the hood "If the found file is a test file, Jest runs it, simple enough. If the found file is a source file, call it found-file.js, then any test files that import found-file.js and the test files that import any of the source files that themselves import found-file.js will be run."
I'm working on a project that's relatively new to me. I'm wondering if it's possible for me to get my pre-commit hook to ONLY run the edited test, not all affected tests, or if there is a way for me to track down this tree of "transitive inverse dependencies" and try to solve the problem with different imports or something.
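To make that quoted rule concrete, here is a hypothetical import chain (the file names are invented, not taken from the project above); editing found-file.ts re-runs not just its own test but every test whose module transitively imports it:

// found-file.ts – the file you actually edited
export const spot = (name: string): string => `spot: ${name}`;

// helper.ts – imports the edited file
import { spot } from "./found-file";
export const label = (name: string): string => spot(name).toUpperCase();

// helper.test.ts – never mentions found-file.ts, yet Jest still selects it,
// because helper.ts depends on found-file.ts (a "transitive inverse dependency")
import { label } from "./helper";

test("label uppercases the spot name", () => {
  expect(label("rex")).toBe("SPOT: REX");
});

With hundreds of such chains, one edited source file can easily pull in hundreds of test files.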
Here is an example of some output from trying --findRelatedTests:
Test Suites: 2 failed, 309 passed, 311 total
Tests: 2 failed, 803 passed, 805 total
Snapshots: 308 passed, 308 total
Time: 102.366 s
Ran all test suites related to files matching /\/Users\/me\/repo\/project\/src\/modules\/dogs\/components\/spots\/SpotsSpotter.tsx/i.
> @dogsapp/project@1.0.0 test:staged
> jest --findRelatedTests --passWithNoTests "/Users/me/repo/project/src/modules/dogs/components/spots/SpotsSpotter.tsx"
ERROR: "husky:lint-staged" exited with 1.
husky - pre-commit hook exited with code 1 (error)
It's taking WAY too long when I just made a simple change in one file. Anyone know how I can track down why this is happening?
It seems like something similar was addressed here for the --watch flag: https://www.gitmemory.com/issue/facebook/jest/8276/483695303
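For reference, a typical husky/lint-staged setup that produces the command shown in the log might look like the package.json sketch below; the script name matches the log, but the glob and the exact wiring are assumptions about this project:

{
  "scripts": {
    "test:staged": "jest --findRelatedTests --passWithNoTests"
  },
  "lint-staged": {
    "*.{ts,tsx}": "npm run test:staged --"
  }
}

lint-staged appends the staged file paths to the configured command, which is why the jest invocation in the log ends with the full path to SpotsSpotter.tsx.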

Even though I have installed cucumber-tag-expressions 3.0.0, the behave command with the "and" and "or" operators is not working

Resources:
Repository: https://github.com/anton-lin/tag-expressions-python
Behave documentation for tags: https://behave.readthedocs.io/en/latest/tag_expressions.html
Package: https://pypi.org/project/cucumber-tag-expressions/
My feature file:
#regression
Feature: showing off behave

    #slow
    Scenario: run a slow test
        Given we have behave installed
        When we implement a test
        Then behave will test it for us!

    #wip
    Scenario: run a wip test
        Given we have behave installed
        When we implement a test
        Then behave will test it for us!

    #wip #slow
    Scenario: run a wip and slow test
        Given we have behave installed
        When we implement a test
        Then behave will test it for us!
I tried the commands below and none of them work: zero scenarios are run.
behave --tags="not #slow" .\features\tutorial.feature
behave --tags="#slow and #wip" .\features\tutorial.feature
behave --tags="#slow or #wip" .\features\tutorial.feature
A command with a single tag works fine and executes only the scenarios with that specific tag.
I get the outcome below with all three commands:
#regression
Feature: showing off behave # features/tutorial.feature:2
#slow
Scenario: run a slow test # features/tutorial.feature:5
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
#wip
Scenario: run a wip test # features/tutorial.feature:11
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
#wip #slow
Scenario: run a wip and slow test # features/tutorial.feature:17
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
0 features passed, 0 failed, 1 skipped
0 scenarios passed, 0 failed, 3 skipped
0 steps passed, 0 failed, 9 skipped, 0 undefined
Took 0m0.000s
'and'/'or' usage as you have attempted is not available in python-behave 1.2.6.
For 1.2.6 try instead:
OR example:
behave --tags #slow,#wip
AND example:
behave --tags #slow --tags #wip
Negative tag example:
behave --tags ~#slow
See further docs with
behave --tags-help
The reason for the confusion is that when you install behave, by default you pull the most recent stable version from places like pypi.org (so 1.2.6), while the behave docs refer to the "latest" version (1.2.7, which has been in development for quite some time now).
The same issue was raised and closed a while ago on GitHub.
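If you are unsure which syntax applies to your environment, check the installed versions first; assuming behave was installed with pip, something like this will tell you:

behave --version
pip show behave cucumber-tag-expressions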

How to abort a MacPorts Portfile on an error condition?

I am working on a version bump of the cc65 port and encountered a problem with linuxdoc-tools. Since I can't fix linuxdoc-tools and a simple workaround is possible, I decided to add an if statement that informs the user about the workaround:
if {! [file exists ${prefix}/bin/perl] } {
    ui_error "
«${prefix}/bin/perl» is missing but the linuxdoc-tools depends on it.
Please create an appropriate symbolic link for linuxdoc-tools to work.
"
    exit 1
}
Crude, but it is the best I can do since I'm neither the perl5 nor the linuxdoc-tools maintainer and I don't want to spend too much time on a version bump.
However, MacPorts doesn't understand exit 1, and ui_error won't stop execution on its own.
How do I stop execution so as not to waste the user's time on a build that will otherwise fail right at the end?
Use return -code error "error message", or the shorthand for the same thing, error "error message".
Note that you should use ui_error before that to print a human-readable message for the user – while the error message is also being printed, it can sometimes get lost in the output.
Additionally, note that $prefix/bin/perl is a build dependency of linuxdoc-tools. If it is also needed at runtime, you should submit a pull request that adds depends_run path:bin/perl:perl5 to the port rather than attempting to fix this bug in your port.
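Putting this together, a sketch of such a guard inside a Portfile phase could look like the following; the pre-configure placement and the exact wording are assumptions, not the real cc65 Portfile:

pre-configure {
    if {![file exists ${prefix}/bin/perl]} {
        # Human-readable explanation first, so it is not lost in the log.
        ui_error "${prefix}/bin/perl is missing but linuxdoc-tools depends on it."
        ui_error "Please create an appropriate symbolic link for linuxdoc-tools to work."
        # 'error' (the shorthand for 'return -code error') aborts the phase and fails the build.
        error "missing ${prefix}/bin/perl"
    }
}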

What may be the reason I get a "Test ignored." message when running any test class in Apex?

Tests ignored: 14, passed: 0
Whenever I run any test class I get this type of messages:
Test ignored.
Test method AccountAddressHelperTest.testInvalidBillingCountry was
never reported as completed. Trigger.AllOppLineItemTriggers: line 188,
column 31: Method does not exist or incorrect signature: void
updateMISROnOpportunity(Map<Id,OpportunityLineItem>,
Map<Id,OpportunityLineItem>) from the type OppLineItemHelper
You have compilation failures. What happens if you hit Setup -> Classes -> Compile all?
You or a colleague edited something, probably in OppLineItemHelper or in some other file it reuses, and now not everything compiles; dependencies aren't met. So the best thing Salesforce can do is skip these tests.
You can edit like that in sandboxes; Salesforce will not always block you. But compile errors will prevent deployment to production even before any tests are run.
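Reading the error message, the trigger expects a method with this signature in OppLineItemHelper. A hedged sketch of the missing declaration is below; the static modifier, the parameter names, and the empty body are assumptions, only the signature comes from the error:

public class OppLineItemHelper {
    // Signature that Trigger.AllOppLineItemTriggers (line 188) tries to call.
    // Restore the original logic here; the empty body is only a placeholder.
    public static void updateMISROnOpportunity(
            Map<Id, OpportunityLineItem> newLineItems,
            Map<Id, OpportunityLineItem> oldLineItems) {
        // TODO: reinstate the implementation that was removed or renamed.
    }
}

Once the class compiles again (Setup -> Classes -> Compile all), the tests should stop being ignored.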

More than 2 notebooks with R code in Zeppelin failing with error "sparkr interpreter not responding"

I have run into a strange issue running R notebooks in Zeppelin (0.7.2).
The Spark interpreter is in per-note scoped mode, the Spark version is 1.6.2, and SPARK_HOME is set.
Please find the steps below to reproduce the issue:
Create a notebook (Note1) and run any R code in a paragraph. I ran the following code.
%r
rdf <- data.frame(c(1,2,3,4))
colnames(rdf) <- c("myCol")
sdf <- createDataFrame(sqlContext, rdf)
withColumn(sdf, "newCol", sdf$myCol * 2.0)
Create another notebook (Note2) and run any R code in a paragraph. I ran the same code as above.
Up to this point everything works fine.
Create a third notebook (Note3) and run any R code in a paragraph. I ran the same code. This notebook fails with the error
org.apache.zeppelin.interpreter.InterpreterException: sparkr is not
responding
What I understood from my analysis is that the process created for the SparkR interpreter is not getting killed properly, which makes every third notebook throw an error while executing. The process is killed when the SparkR interpreter is restarted, after which another two notebooks can be executed successfully; i.e., the error is thrown for every third notebook run using the SparkR interpreter.
Please help me fix the problem.
You need to set spark.r.numRBackendThreads to a value larger than 2. By default it is 2, which means you can only have 2 threads for RBackend. Since you are using per-note scoped mode, each note consumes one RBackend thread, so you can only run 2 notes.
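For example, you can raise the value in the Spark interpreter's properties in the Zeppelin interpreter settings, or in SPARK_HOME/conf/spark-defaults.conf; the value 6 below is just an illustrative choice, not a recommendation from the original answer:

# conf/spark-defaults.conf (or add the same property to the spark interpreter in Zeppelin)
spark.r.numRBackendThreads   6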
