Gradle command syntax for executing TestNG tests as a group - selenium-webdriver

I have some tests as below:
@Test(groups = {"smoke"})
public void Test1() { ... }

@Test(groups = {"smoke", "regression"})
public void Test2() { ... }

@Test(groups = {"regression"})
public void Test3() { ... }
In the build.gradle file I have the following:
task smoketests(type: Test) {
    useTestNG() {
        suites "src/test/resources/testng.xml"
        includeGroups "smoke"
    }
}
I need the Gradle command-line syntax to run only the smoke or regression tests.
I have tried this:
./gradlew clean test -P testGroups="smoke"
When I run that, the build is successful:
:clean
:compileJava
:processResources UP-TO-DATE
:classes
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test
BUILD SUCCESSFUL
Total time: 42.253 secs
But it never executes the actual tests. Need help!

You have created a custom test task, so you need to use that instead of test in your command line:
gradlew clean smoketests
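If you also want the group to be selectable from the command line (as your -P testGroups attempt suggests), here is a minimal sketch, assuming a Groovy build.gradle; the task name and property name are illustrative:
task groupTests(type: Test) {
    useTestNG() {
        suites "src/test/resources/testng.xml"
        // Fall back to "smoke" when the property is not supplied, e.g.
        //   ./gradlew clean groupTests -PtestGroups=regression
        includeGroups(project.hasProperty('testGroups') ? project.property('testGroups') : 'smoke')
    }
}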

Related

Jest running all files in import Tree

I've recently come across an interesting/peculiar side effect and wanted to get the opinion of the experienced members here.
When I run the command npm run test -- --testPathPattern="filePath" --coverage, I get the coverage info as follows:
Statements : 37.85% ( 5810/15350 )
Branches : 7.2% ( 547/7596 )
Functions : 10.66% ( 309/2898 )
Lines : 42.1% ( 5751/13660 )
================================================================================
5810 statements being run and 547 branches being tested is huge. I went debugging into this and realized to an extent why this is happening.
All files in the import tree are being run!
This could potentially be why CI takes so long and testing gets heavy.
Can I get any pointers on fixing this, please?
It's not "all files in the import tree", it's all files period. The --coverage options collects the coverage stats of your codebase with tests, also generates a coverage report, that you probably don't need.
If this is a concern in the CI pipeline don't use the coverage option, just run the tests.
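If you do need coverage but want it scoped to specific files, a minimal sketch, assuming a jest.config.js; the glob patterns are illustrative and should be adjusted to the folders you actually care about:
// jest.config.js
module.exports = {
  // Only instrument and report coverage for matching files,
  // instead of every file reachable through the import tree.
  collectCoverageFrom: [
    'src/**/*.{js,jsx}',
    '!**/node_modules/**',
  ],
};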

How to generate an allure report dynamically

I am new to Allure reports. I am using TestNG and Java 8. Every time I run the tests, I need to run "allure serve allure-results". Is there a way for the results to be updated automatically instead of having to start the command every time?
Step 1: Add the AllureReportBuilder dependency from the Maven repository.
Step 2: Add the code below to generate the Allure report. This will generate the Allure report folder:
new AllureReportBuilder("1.5.4", new File("target/allure-report")).unpackFace();
new AllureReportBuilder("1.5.4", new File("target/allure-report")).processResults(new File("target/allure-results"));
Note: the code above belongs to Allure 1.
I faced the same issue in Python. What I came up with is running the terminal command from a Python hook in pytest's conftest.py:
import subprocess

def pytest_sessionfinish(session, exitstatus):
    """
    Set the allure path and generate the allure report after the test run is over.
    """
    # Running pytest can result in six different exit codes:
    #   0: all tests were collected and passed successfully
    #   1: tests were collected and run but some of the tests failed
    #   2: test execution was interrupted by the user
    #   3: internal error happened while executing tests
    #   4: pytest command line usage error
    #   5: no tests were collected
    print('\nrun status code:', exitstatus)
    # Only generate the report when the run wasn't interrupted or broken
    if exitstatus not in (2, 3, 4, 5):
        # allure_report_dir is assumed to be defined elsewhere in conftest.py
        command_to_export_allure_path = 'export PATH=$PATH:/usr/local/bin:/usr/local/bin/allure-commandline/allure-2.7.0/bin/'
        command_generate_allure_report = 'allure generate --clean -o %s/Allure/ %s' % (allure_report_dir, allure_report_dir)
        print(command_to_export_allure_path)
        print(command_generate_allure_report)
        # Run both in one shell so the PATH export applies to the allure call
        subprocess.call(command_to_export_allure_path + ' && ' + command_generate_allure_report, shell=True)
I am sure there is a way to run a terminal command from Java code as well.
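For completeness, a minimal Java sketch using ProcessBuilder, assuming the allure CLI is on the PATH and that the target/ paths match your project layout:
import java.io.IOException;

public class AllureReportGenerator {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Equivalent of running:
        //   allure generate --clean -o target/allure-report target/allure-results
        Process process = new ProcessBuilder(
                "allure", "generate", "--clean",
                "-o", "target/allure-report", "target/allure-results")
            .inheritIO() // forward the CLI's output to this process's console
            .start();
        int exitCode = process.waitFor();
        System.out.println("allure generate exited with code " + exitCode);
    }
}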

Fail the Jenkins build if unit test execution time exceeds limit

I would like to fail my builds if ANY particular unit test's execution time (not the total test run time) exceeds a certain reasonable limit, say two seconds. I am using MSTest.
Thanks!
Use the timeout block to create a timeout failure. Here is an example from the Jenkins CI Jenkinsfile:
// We're wrapping this in a timeout - if it takes more than 180 minutes, kill it.
timeout(time: 180, unit: 'MINUTES') {
    // See below for what this method does - we're passing an arbitrary environment
    // variable to it so that JAVA_OPTS and MAVEN_OPTS are set correctly.
    withMavenEnv(["JAVA_OPTS=-Xmx1536m -Xms512m -XX:MaxPermSize=1024m",
                  "MAVEN_OPTS=-Xmx1536m -Xms512m -XX:MaxPermSize=1024m"]) {
        // Actually run Maven!
        // The -Dmaven.repo.local=${pwd()}/.repository means that Maven will create a
        // .repository directory at the root of the build (which it gets from the
        // pwd() Workflow call) and use that for the local Maven repository.
        sh "mvn -Pdebug -U clean install ${runTests ? '-Dmaven.test.failure.ignore=true -Dconcurrency=1' : '-DskipTests'} -V -B -Dmaven.repo.local=${pwd()}/.repository"
    }
}
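Applied to MSTest, a minimal sketch; note this bounds the whole test step rather than any individual test, and the DLL path and limit are placeholders:
// Fail the build if the MSTest step takes longer than two minutes overall.
timeout(time: 2, unit: 'MINUTES') {
    bat 'mstest /testcontainer:UnitTests\\bin\\Release\\UnitTests.dll'
}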

rpm and Yum don't believe a package is installed after Chef installs

Running chef-solo (Chef Omnibus install, 12.3) on CentOS 6.6.
My recipe has the following simple code:
package 'cloud-init' do
  action :install
end

log 'rpm-qi' do
  message `rpm -qi cloud-init`
  level :warn
end

log 'yum list' do
  message `yum list cloud-init`
  level :warn
end
But it outputs the following:
- install version 0.7.5-10.el6.centos.2 of package cloud-init
* log[rpm-qi] action write[2015-07-16T16:46:35+00:00] WARN: package cloud-init is not installed
[2015-07-16T16:46:35+00:00] WARN: Loaded plugins: fastestmirror, presto
Available Packages
cloud-init.x86_64 0.7.5-10.el6.centos.2 extras
I am at a loss as to why rpm/yum and actually rpmquery don't see the package as installed.
EDIT: To clarify, after the package install I am specifically looking for the file matched by the following, so I can then apply a change to it (I understand this is not a very Chef-like way to do things; I am happy to accept suggestions):
rpmquery -l cloud-init | grep 'distros/__init__.py$'
I have found that by using the following:
install_report = shell_out('yum install -y cloud-init').stdout
cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
I can get the file I am looking for and then perform:
Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
The file's location varies by distribution, but I need to edit that specific file with in-place changes.
Untested code, just to give the idea:
package 'cloud-init' do
  action :install
  notifies :run, 'ruby_block[update_cloud_init]'
end

ruby_block 'update_cloud_init' do
  block do
    cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
    rc = Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
    rc.search_file_replace_line(/^what to find$/,
                                'replacement data for the line')
    rc.write_file
  end
  action :nothing # run only when notified by the package resource
end
The ruby_block example was taken and adapted from here.
I would rather use a template to manage the whole file; what I don't understand is why you don't know where it will be in the first place...
Previous answer
I assume it's a compile vs. converge problem: at the time the message is stored (and so your command is executed), the package is not yet installed.
Chef runs in two phases, compile then converge.
At compile time it builds a collection of resources, and at converge time it executes code for each resource to get it into the described state.
When your log resource is compiled, the ugly back-ticks are evaluated; at this time there's a package resource in the collection, but that resource has not been executed, so the output is correct.
I don't understand what you want to achieve with those log resources at all.
If you want to test your node's state after the Chef run, use a handler, maybe calling ServerSpec as in Test Kitchen.
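A minimal sketch of deferring that back-tick command to converge time with Chef's lazy property evaluation (untested, same caveat as the answer above):
log 'rpm-qi' do
  # lazy {} delays evaluation until this resource converges,
  # i.e. after the package resource has actually installed cloud-init.
  message lazy { `rpm -qi cloud-init` }
  level :warn
end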

How can I save protractor test results

Is there a way to output protractor test results to a file to be viewed outside of the command line after a test is run, including seeing detailed failures?
I found a nice, clean way of saving the test results in an orderly fashion using Jasmine reporters.
How to install and configure jasmine-reporters:
Install jasmine-reporters:
npm install -g jasmine-reporters
Add the following to the protractor-config.js file:
onPrepare: function() {
    var jasmineReporters = require('jasmine-reporters');
    jasmine.getEnv().addReporter(
        new jasmineReporters.JUnitXmlReporter('outputxmldir', true, true));
}
Create the outputxmldir folder (This is where all the test outputs will be placed).
Run protractor and now the results will be exported to an XML file in the outputxmldir folder.
Is just the test output enough? Then redirect it to a file:
protractor conf.js > test.log
Cheers.
You can also set the resultJsonOutputFile option in the config file:
exports.config = {
    (...)
    // If set, protractor will save the test output in json format at this path.
    // The path is relative to the location of this config.
    resultJsonOutputFile: './result.json',
    (...)
}
More details about the config file can be found at:
https://raw.githubusercontent.com/angular/protractor/master/docs/referenceConf.js
