Even though I have installed "cucumber-tag-expressions 3.0.0", the behave command with the "and" and "or" operators is not working - python-behave

Resources:
Repository:
https://github.com/anton-lin/tag-expressions-python
Behave Documentation for tags:
https://behave.readthedocs.io/en/latest/tag_expressions.html
Package name:
https://pypi.org/project/cucumber-tag-expressions/
My Feature File
@regression
Feature: showing off behave

  @slow
  Scenario: run a slow test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!

  @wip
  Scenario: run a wip test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!

  @wip @slow
  Scenario: run a wip and slow test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!
I tried the commands below and none of them are working: the result is that zero scenarios are run.
behave --tags="not @slow" .\features\tutorial.feature
behave --tags="@slow and @wip" .\features\tutorial.feature
behave --tags="@slow or @wip" .\features\tutorial.feature
Meanwhile, a command with a single tag works fine and executes only the scenarios with that specific tag.
I get the outcome below with all three commands:
@regression
Feature: showing off behave # features/tutorial.feature:2
@slow
Scenario: run a slow test # features/tutorial.feature:5
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
@wip
Scenario: run a wip test # features/tutorial.feature:11
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
@wip @slow
Scenario: run a wip and slow test # features/tutorial.feature:17
Given we have behave installed # None
When we implement a test # None
Then behave will test it for us! # None
0 features passed, 0 failed, 1 skipped
0 scenarios passed, 0 failed, 3 skipped
0 steps passed, 0 failed, 9 skipped, 0 undefined
Took 0m0.000s

The 'and'/'or' usage you have attempted is not available in python-behave 1.2.6.
For 1.2.6, try instead:
OR example:
behave --tags @slow,@wip
AND example:
behave --tags @slow --tags @wip
Negative tag example:
behave --tags ~@slow
See further docs with
behave --tags-help
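Mapped onto the three commands from the question (a sketch, keeping the quoting and feature-file path used above), the 1.2.6-style equivalents would be roughly:
behave --tags="~@slow" .\features\tutorial.feature
behave --tags="@slow" --tags="@wip" .\features\tutorial.feature
behave --tags="@slow,@wip" .\features\tutorial.feature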
The reason for the confusion is that when you install behave, by default you pull the most recent stable version from places like pypi.org (so 1.2.6), while the behave docs refer to the "latest" version (1.2.7, which has been in development for quite some time now).
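If the and/or/not syntax from the docs is really wanted instead, one option (assuming a development version of behave is acceptable for your project) would be to install it straight from its repository rather than from PyPI, along the lines of:
pip install git+https://github.com/behave/behave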
The same issue was raised on GitHub a while ago and closed.

Related

How do I fix the package testing error in Julia?

When I test the NTFk package with the command Pkg.test("NTFK"), I'm getting the error below.
ERROR: LoadError: Some tests did not pass: 1 passed, 1 failed, 0 errored, 0 broken.
in expression starting at C:\Users\lff19\.julia\packages\NTFk\bvyOe\test\runtests.jl:17
ERROR: Package NTFk errored during testing
For Test.jl, the test runner looks for PACKAGENAME/test/runtests.jl inside the package directory.
A "passed" test is self-explanatory.
A "failed" test means a test resulted in an unexpected value.
An "errored" test means the test was not able to be executed, it errored instead.
A "broken" test refers to a known failing test. Setting the test to "broken" means it will ignore the "fail" status.
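As a minimal sketch (not taken from the NTFk test suite) of how each of the four statuses arises, running the following ends with a summary very much like the one quoted above:
using Test

@testset "status examples" begin
    @test 1 + 1 == 2          # passed: the expression evaluates to true
    @test 1 + 1 == 3          # failed: the expression evaluates to an unexpected value
    @test sqrt(-1.0) > 0      # errored: evaluating the expression throws a DomainError
    @test_broken 1 + 1 == 3   # broken: a known failure, reported as Broken instead of Fail
end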
So, the 1 failing test is just a single failure in the project's runtests.jl file. It is not a problem with your Pkg.test("NTFK") command; it is a problem within the source code. It should be relatively simple to figure out which test fails from the error in your console's output.
Realistically, it should be the developer's responsibility to fix the test case. However, you could just as well "dev" the package (] dev PACKAGENAME), effectively making yourself the maintainer of your local copy, and go into runtests.jl and fix it yourself. Note that "dev"ing a package will move it to ~/.julia/dev.
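Concretely, that workflow would look something like this (a sketch; package and path names as in the question):
julia> ]                 # enter the Pkg REPL
pkg> dev NTFk            # clone the package into ~/.julia/dev/NTFk so it is editable
pkg> test NTFk           # re-run the suite after editing ~/.julia/dev/NTFk/test/runtests.jl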

Jest should only run on changed files for pre-commit hook

I have a pre-commit hook set up using jest and the --only-changed flag. However, sometimes my entire test suite will still run (800 tests!) even if I made a change in a single file.
I looked into some other jest flags like
--lastCommit Run all tests affected by file changes in
the last commit made. Behaves similarly to
`--onlyChanged`.
--findRelatedTests Find related tests for a list of source
files that were passed in as arguments.
Useful for pre-commit hook integration to
run the minimal amount of tests necessary.
--changedSince Runs tests related to the changes since the
provided branch. If the current branch has
diverged from the given branch, then only
changes made locally will be tested. Behaves
similarly to `--onlyChanged`. [string]
Yet they all have the same problem. When doing some digging, I learned that
under the hood "If the found file is a test file, Jest runs it, simple enough. If the found file is a source file, call it found-file.js, then any test files that import found-file.js and the test files that import any of the source files that themselves import found-file.js will be run."
I'm working on a project that's relatively new to me. I'm wondering if it's possible for me to get my pre-commit hook to ONLY run the edited test, not all affected tests, or if there is a way for me to track down this tree of "transitive inverse dependencies" and try to solve the problem with different imports or something.
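To make the quoted behaviour concrete, here is a hypothetical chain (DogProfile and its test are invented names, not files from the real project). Editing the component at the bottom of the chain is enough for Jest to pick up the test at the top, even though the test never imports the edited file directly:
// SpotsSpotter.tsx (the file actually being edited)
export const spotSpots = (): string => "spots";

// DogProfile.tsx (a source file that imports the edited file)
import { spotSpots } from "./SpotsSpotter";
export const describeDog = (): string => `a dog with ${spotSpots()}`;

// DogProfile.test.tsx imports DogProfile, which imports SpotsSpotter, so
// --findRelatedTests / --onlyChanged will run it for a change to SpotsSpotter.tsx
import { describeDog } from "./DogProfile";
test("describes a dog", () => {
  expect(describeDog()).toContain("spots");
});
Every test suite connected to the edited file through a chain like this gets included, which is how a one-line change can fan out to hundreds of suites.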
Here is an example of some output from trying --find-related-tests
Test Suites: 2 failed, 309 passed, 311 total
Tests: 2 failed, 803 passed, 805 total
Snapshots: 308 passed, 308 total
Time: 102.366 s
Ran all test suites related to files matching /\/Users\/me\/repo\/project\/src\/modules\/dogs\/components\/spots\/SpotsSpotter.tsx/i.
> @dogsapp/project@1.0.0 test:staged
> jest --findRelatedTests --passWithNoTests "/Users/me/repo/project/src/modules/dogs/components/spots/SpotsSpotter.tsx"
ERROR: "husky:lint-staged" exited with 1.
husky - pre-commit hook exited with code 1 (error)
It's taking WAY too long when I just made a simple change in one file. Anyone know how I can track down why this is happening?
It seems like something similar was addressed here for the --watch flag: https://www.gitmemory.com/issue/facebook/jest/8276/483695303

UIOP does not recognize local-nicknames keyword

I'm attempting to make a Lisp package with uiop/package:define-package. I'm using SBCL, and have confirmed that package-local nicknaming ought to be supported:
* *features*
(:QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-UNIX
:NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT :ANSI-CL
:COMMON-LISP :ELF :IEEE-FLOATING-POINT :LINUX :LITTLE-ENDIAN
:PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS
:SB-THREAD :SB-UNICODE :SBCL :UNIX)
* (uiop:featurep :package-local-nicknames)
T
Nevertheless, when I try to define a package that has local nicknames, it doesn't work:
(uiop/package:define-package #:foo
(:use #:cl)
(:local-nicknames (#:b #:binparse)))
debugger invoked on a SIMPLE-ERROR in thread
#<THREAD "main thread" RUNNING {1001878103}>:
unrecognized define-package keyword :LOCAL-NICKNAMES
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [ABORT] Exit debugger, returning to top level.
(UIOP/PACKAGE:PARSE-DEFINE-PACKAGE-FORM #:FOO ((:USE #:CL) (:LOCAL-NICKNAMES (#:B #:BINPARSE))))
source: (ERROR "unrecognized define-package keyword ~S" KW)
0] 0
(binparse being another package I've made, which worked fine, but which did not happen to use local nicknaming).
What I've found of the uiop/package source seems to indicate that this shouldn't happen? Going by that, it should either work, or give a specific error message saying that local nicknames are not supported (if somehow uiop:featurep is inaccurate or changing), but it shouldn't give a generic unknown-keyword error. At this point I'm not sure what I could be getting wrong.
The version of asdf that's included in releases of sbcl is based on asdf version 3.3.1 (November 2017), except bundled into only two (larger) lisp files (one for asdf and one for uiop) rather than breaking them up by purpose as is done in official releases of asdf. asdf added #+sbcl support for package-local nicknames in 3.3.3.2 (August 2019), and switched to the more general #+package-local-nicknames in 3.3.4.1 (April 2020) (the latest release version is 3.3.4, though, so that wouldn't be in yet anyway). So it's "just" a delay in pulling from upstream. Following the instructions on upgrading ASDF did the trick – extract the latest release tarball into ~/common-lisp/asdf and run (load (compile-file #P"~/common-lisp/asdf/build/asdf.lisp")) once, and future shells will use the updated version.
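For reference, a quick way to confirm which ASDF the running image actually uses, before and after the upgrade (a sketch; the exact version string will differ):
(asdf:asdf-version)
;; => "3.3.1" with the version bundled in SBCL releases; once it reports
;; 3.3.3.2 or newer, the :local-nicknames clause above is accepted on SBCL.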

More than 2 notebooks with R code in Zeppelin failing with error "sparkr interpreter not responding"

I have met with a strange issue running R notebooks in Zeppelin (0.7.2).
The Spark interpreter is in per-note scoped mode, the Spark version is 1.6.2, and SPARK_HOME is set.
Please find the steps below to reproduce the issue:
Create a notebook (Note1) and run any R code in a paragraph. I ran the following code:
%r
rdf <- data.frame(c(1,2,3,4))
colnames(rdf) <- c("myCol")
sdf <- createDataFrame(sqlContext, rdf)
withColumn(sdf, "newCol", sdf$myCol * 2.0)
Create another notebook (Note2) and run any R code in a paragraph. I ran the same code as above.
Till now everything works fine.
Create a third notebook (Note3) and run any R code in a paragraph. I ran the same code. This notebook fails with the error:
org.apache.zeppelin.interpreter.InterpreterException: sparkr is not
responding
What I understood from my analysis is that the process created for the SparkR interpreter is not getting killed properly, and this makes every third notebook throw an error while executing. The process is killed on restarting the SparkR interpreter, and another 2 notebooks can then be executed successfully. That is, the error is thrown for every third notebook run using the SparkR interpreter.
Help me to fix the problem.
You need to set spark.r.numRBackendThreads to a value larger than 2. By default it is 2, which means you can only have 2 threads for RBackend. Since you are using scoped mode per note, each note will consume one thread for RBackend, so you can only run 2 notes.
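For example, in the Zeppelin Spark interpreter properties (or wherever your Spark configuration is managed; the value below is just a guess at how many concurrent notes are needed):
spark.r.numRBackendThreads    10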

Output from scalatest is not synchronized with output from the tests

I have a suite of scalatest tests that output information to the console using println as they run.
When I run the suite using the Eclipse Scala plug-in (using Run As ... / 3 ScalaTest - File from the context menu) there is additional output to the console about which tests pass and which fail. I guess this output is from the runner.
The problem is that the lines from my code and the lines from the runner are not interleaved sensibly. It's as if they are being printed from two different threads that aren't synchronized.
For example, here is the output from a run:
>>>>>>>>>>>>>Starting The parser should warn when the interface name at the end does not match >>>>>>>>>>>>>>>>>>
(interface Fred
interface Bob)
-----------------------------
File: <unknown> line: 2 column: 11 Name does not match.
----The AST after parsing-------------
[ IntfDeclNd( (), (), () ) ]
---------------------------------------
<<<<<<<<<<<<<Finished The parser should warn when the interface name at the end does not match <<<<<<<<<<<<<<<<<
>>>>>>>>>>>>>Starting The parser should parse a class with generic args >>>>>>>>>>>>>>>>>>
(class Fred{type a, type b extends B}() class)
- should parse multiline comment at end of file *** FAILED ***
Expected 0, but got 1 (TestsBase.scala:103)
- should fail on incomplete multiline comment
- should parse single line comments
- should allow a class name to be repeated at the end
- should warn when the class name at the end does not match
- should allow an interface name to be repeated at the end
- should warn when the interface name at the end does not match
----The AST after parsing-------------
The lines starting with "- should" or "Expected" come from the runner, and you can see that a bunch of them are plunked in the middle of the output from one of my tests. Other output from the runner appears elsewhere; this isn't all of it.
My questions: Why is this happening? Is there some way to get the runner's output to coordinate with my output?
Most likely, the suites are running in parallel.
http://www.scalatest.org/user_guide/using_the_runner#executingSuitesInParallel
With the proliferation of multi-core architectures, and the often
parallelizable nature of tests, it is useful to be able to run tests
in parallel. [...]
The -P option may optionally be appended with a number (e.g. "-P10" --
no intervening space) to specify the number of threads to be created
in the thread pool. If no number (or 0) is specified, the number of
threads will be decided based on the number of processors available.
So basically, pass -P1 to the runner. For Eclipse, the place would possibly be the Arguments tab.
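If you ever run the suites outside Eclipse, the same flag goes on the Runner command line, along the lines of (a sketch; the classpath, runpath and suite name are placeholders):
java -cp "<scalatest jars>:target/test-classes" org.scalatest.tools.Runner -R target/test-classes -o -P1 -s parser.ParserTests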
