How to generate an Allure report dynamically

I am new to Allure reporting. I am using TestNG and Java 8. Every time I run the tests, I need to run "allure serve allure-results". Is there a way for the report to be updated automatically instead of running the command every time?

Step 1: Add the AllureReportBuilder dependency from the Maven repository (a sketch of the dependency follows this answer).
Step 2: Add the code below to generate the Allure report.
This will generate the Allure report folder.
// unpack the report's static resources, then turn the raw results into the report
new AllureReportBuilder("1.5.4", new File("target/allure-report")).unpackFace();
new AllureReportBuilder("1.5.4", new File("target/allure-report")).processResults(new File("target/allure-results"));
Note: the code above is for Allure 1.
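A sketch of the Step 1 dependency, assuming the ru.yandex.qatools.allure:allure-report-builder coordinates; verify the coordinates and version against Maven Central:
<dependency>
    <groupId>ru.yandex.qatools.allure</groupId>
    <artifactId>allure-report-builder</artifactId>
    <!-- version is an assumption; check Maven Central for the latest -->
    <version>2.2</version>
</dependency>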

I faced the same issue in Python. What I came up with was running the terminal command from pytest's conftest.py after the session finishes.
import os
import subprocess

# assumption: directory that holds the allure results; adjust to your project
allure_report_dir = os.path.join(os.getcwd(), 'allure-results')

def pytest_sessionfinish(session, exitstatus):
    """
    Generate the Allure report after the test run is over.
    """
    # Running pytest can result in six different exit codes:
    #   0: all tests were collected and passed successfully
    #   1: tests were collected and run, but some of them failed
    #   2: test execution was interrupted by the user
    #   3: an internal error happened while executing tests
    #   4: pytest command line usage error
    #   5: no tests were collected
    print('\nrun status code:', exitstatus)
    if exitstatus not in (2, 3, 4, 5):
        # the PATH export and the allure call must run in the same shell;
        # two separate subprocess.call() invocations would not share it
        command = (
            'export PATH=$PATH:/usr/local/bin:/usr/local/bin/allure-commandline/allure-2.7.0/bin/'
            ' && allure generate --clean -o %s/Allure/ %s' % (allure_report_dir, allure_report_dir)
        )
        print(command)
        subprocess.call(command, shell=True)
I am sure there is a way to run a terminal command from Java code as well; a sketch follows.
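For instance, a minimal Java sketch of that idea, shelling out to the Allure CLI after the run. It assumes the allure binary is on PATH, and the result/report paths are placeholders, not from the original answer:
import java.io.IOException;

public class AllureReportGenerator {
    public static void main(String[] args) throws IOException, InterruptedException {
        // assumes the allure CLI is on PATH; paths below are placeholders for your project
        ProcessBuilder pb = new ProcessBuilder(
                "allure", "generate", "--clean",
                "-o", "target/allure-report",
                "target/allure-results");
        pb.inheritIO(); // stream allure's output to this process's console
        int exitCode = pb.start().waitFor();
        System.out.println("allure generate exited with code " + exitCode);
    }
}
In a TestNG setup this logic could live in an @AfterSuite method instead of main, so the report regenerates after every run.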

Related

AOSP Build TARGET_PRODUCT fails

I'm trying to build an external tool with AOSP. My OS is Linux (Arch Linux with i3wm), but to compile AOSP I use Ubuntu in Docker (https://android.googlesource.com/platform/build/+/master/tools/docker).
First step:
# init repo
repo init -u https://android.googlesource.com/platform/manifest -b android-8.0.0_r36 --depth=1
repo sync
. build/envsetup.sh # set up environment
lunch aosp_arm-eng # select target to build
Second step: select the tool and build it
cd external/selinux
mma -j48
Output:
ninja: error: unknown target 'MODULES-IN-'
15:41:55 ninja failed with: exit status 1
make: *** [run_soong_ui] Error 1
make: Leaving directory `/home/user/aosp'
#### make failed to build some targets (6 seconds) ###
Another tool
cd external/wpa_supplicant_8
mma -j48
Output:
ninja: error: unknown target 'MODULES-IN-external-wpa_supplicant_8'
15:41:55 ninja failed with: exit status 1
make: *** [run_soong_ui] Error 1
make: Leaving directory `/home/user/aosp'
#### make failed to build some targets (2 seconds) ###
This happens with any generic AOSP target:
Lunch menu... pick a combo:
1. aosp_arm-eng # fails
2. aosp_arm64-eng # fails
3. aosp_mips-eng # fails
4. aosp_mips64-eng # fails
5. aosp_x86-eng # fails
6. aosp_x86_64-eng # fails
7. full_fugu-userdebug # works
8. aosp_fugu-userdebug # works
9. car_emu_arm64-userdebug # fails
10. car_emu_arm-userdebug # fails
11. car_emu_x86_64-userdebug # fails
12. car_emu_x86-userdebug # fails
13. mini_emulator_arm64-userdebug # fails
14. m_e_arm-userdebug # fails
15. m_e_mips64-eng # fails
16. m_e_mips-userdebug # fails
17. mini_emulator_x86_64-userdebug # fails
18. mini_emulator_x86-userdebug # fails
19. aosp_dragon-userdebug # works
20. aosp_dragon-eng # works
21. aosp_marlin-userdebug # works
22. aosp_marlin_svelte-userdebug # works
23. aosp_sailfish-userdebug # works
24. aosp_angler-userdebug # works
25. aosp_bullhead-userdebug # works
26. aosp_bullhead_svelte-userdebug # works
27. hikey-userdebug # works
I want to compile some binary tools for every arch: arm, arm64, x86 and x86_64.
Why does aosp_arm-eng not work? Or how can I change the CPU architecture of a non-generic AOSP target?
Using tapas does not work either.
You probably need to make a full AOSP build before trying to use the mm... shortcuts; it looks like some build script files have not been generated yet.
Clear all the binaries from the out/ directory with the make clean command and then try a full build.
Actually, there is no need to do a full build. Just open the Android.mk or Android.bp of the module you want to build and look for the module name (the LOCAL_MODULE variable in Android.mk, or name in Android.bp).
Then, from the root of the project, run
mm $MODULE_NAME
It should build all the dependencies you need first.
I suggest using the mmm command for building. Also make sure that the directory you are pointing to contains an Android.bp or Android.mk:
mmm external/selinux
I also suggest cleaning the outputs by removing the out directory, or simply running:
make clean
If you still have the problem, remove the --depth=1 argument from repo init and sync again; that argument limits how much history is fetched from the remote branch (see the sketch below).
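For example, re-initializing with full history, using the same manifest URL and branch as in the question:
repo init -u https://android.googlesource.com/platform/manifest -b android-8.0.0_r36
repo sync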
Use the commands below to compile a module from the root directory:
cd <root dir>
make clean
source build/envsetup.sh
lunch   # select a target from the menu
make <module name> -j8

How to set up the 'chromedriver.exe' executable in PATH on GitLab CI

from selenium import webdriver

class Application(object):
    def __init__(self):
        self.driver = webdriver.Chrome(executable_path='C:/Users/Admin/365_python_test/chromedriver.exe')
This is my initialization of the webdriver. It works fine on my machine, but when I push the code to GitLab CI I get the error 'chromedriver.exe' executable needs to be in PATH. As I understand it, I must set up the PATH in the before_script section of .gitlab-ci.yml, but I don't know how to do it. I tried different scripts I found here, but they don't work. I also tried:
self.driver = webdriver.Chrome(ChromeDriverManager().install())  # requires: from webdriver_manager.chrome import ChromeDriverManager
which doesn't work on CI either:
venv/lib/python3.7/site-packages/selenium/webdriver/common/service.py:111: WebDriverException
---------------------------- Captured stdout setup ----------------------------
Checking for linux64 chromedriver:2.46 in cache
There is no cached driver. Downloading new one...
Trying to download new driver from
http://chromedriver.storage.googleapis.com/2.46/chromedriver_linux64.zip
Unpack archive /root/.wdm/chromedriver/2.46/linux64/chromedriver.zip
=========================== 1 error in 1.30 seconds ===========================
ERROR: Job failed: exit code 1
Help, please!
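No answer is shown here, but a minimal sketch of a .gitlab-ci.yml before_script that installs a driver onto PATH, assuming a Debian-based Python image — the image tag, package names, and chromedriver version are all assumptions, not from the original question:
test:
  image: python:3.7
  before_script:
    - apt-get update && apt-get install -y chromium unzip
    # driver version is an assumption; match it to the installed browser
    - wget -q http://chromedriver.storage.googleapis.com/2.46/chromedriver_linux64.zip
    - unzip chromedriver_linux64.zip -d /usr/local/bin/
    - chmod +x /usr/local/bin/chromedriver  # /usr/local/bin is already on PATH
  script:
    - pip install -r requirements.txt
    - pytest
The hard-coded Windows path then has to go: with the driver on PATH, plain webdriver.Chrome() finds it, and on CI Chrome typically also needs the headless option.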

Gradle command syntax for executing TestNG tests as a group

I have some tests as below:
#Test(groups={"smoke"})
public void Test1(){...}
#Test(groups={"smoke", "regression"})
public void Test2(){...}
#Test(groups={"regression"})
public void Test3(){...}
In my build.gradle file I have the following:
task smoketests(type: Test) {
    useTestNG() {
        suites "src/test/resources/testng.xml"
        includeGroups "smoke"
    }
}
I need the Gradle syntax to run only the smoke/regression tests from the command line.
I have tried this:
./gradlew clean test -P testGroups="smoke"
If I run that, the build is successful, as below:
:clean
:compileJava
:processResources UP-TO-DATE
:classes
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test
BUILD SUCCESSFUL
Total time: 42.253 secs
But it never executes the actual tests. I need help.
You have created a custom test task, so you need to use that instead of test in your command line:
gradlew clean smoketests
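If you instead want the -P testGroups approach from the question to work, the built-in test task can read that property; a sketch, assuming this wiring (it is not part of the original answer):
test {
    useTestNG() {
        suites "src/test/resources/testng.xml"
        // only filter when the property is passed on the command line
        if (project.hasProperty("testGroups")) {
            includeGroups project.property("testGroups").toString()
        }
    }
}
With that in place, ./gradlew clean test -PtestGroups=smoke would run only the smoke group.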

More than 2 notebooks with R code in Zeppelin failing with error "sparkr interpreter not responding"

I have run into a strange issue running R notebooks in Zeppelin (0.7.2).
The Spark interpreter is in per-note scoped mode, the Spark version is 1.6.2, and SPARK_HOME is set.
Please find the steps below to reproduce the issue:
Create a notebook (Note1) and run any R code in a paragraph. I ran the following code.
%r
rdf <- data.frame(c(1,2,3,4))
colnames(rdf) <- c("myCol")
sdf <- createDataFrame(sqlContext, rdf)
withColumn(sdf, "newCol", sdf$myCol * 2.0)
Create another notebook (Note2) and run any R code in a paragraph. I ran the same code as above.
Up to this point everything works fine.
Create a third notebook (Note3) and run any R code in a paragraph. I ran the same code. This notebook fails with the error
org.apache.zeppelin.interpreter.InterpreterException: sparkr is not
responding
What I understood from my analysis is that the process created for the sparkr interpreter is not being killed properly, which makes every third notebook throw an error while executing. The process is killed when the sparkr interpreter is restarted, after which another two notebooks can be executed successfully. That is, every third notebook run using the sparkr interpreter throws the error.
Please help me fix the problem.
You need to set spark.r.numRBackendThreads larger than 2. By default it is 2, which means you can only have 2 threads for RBackend. Since you are using per-note scoped mode, each note consumes one thread for RBackend, so you can only run 2 notes.
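For example, in the Zeppelin UI under Interpreter -> spark -> edit, add the property below and restart the interpreter (the value 10 is an arbitrary assumption; anything above the number of concurrently used notes works):
spark.r.numRBackendThreads   10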

Output from ScalaTest is not synchronized with output from the tests

I have a suite of scalatest tests that output information to the console using println as they run.
When I run the suite using the Eclipse Scala plug-in (using Run As ... / 3 ScalaTest - File from the context menu) there is additional output to the console about which tests pass and which fail. I guess this output is from the runner.
The problem is that the lines from my code and the lines from the runner are not interleaved sensibly. It's as if they are being printed from two different threads that aren't synchronized.
For example here is the output from a run
>>>>>>>>>>>>>Starting The parser should warn when the interface name at the end does not match >>>>>>>>>>>>>>>>>>
(interface Fred
interface Bob)
-----------------------------
File: <unknown> line: 2 column: 11 Name does not match.
----The AST after parsing-------------
[ IntfDeclNd( (), (), () ) ]
---------------------------------------
<<<<<<<<<<<<<Finished The parser should warn when the interface name at the end does not match <<<<<<<<<<<<<<<<<
>>>>>>>>>>>>>Starting The parser should parse a class with generic args >>>>>>>>>>>>>>>>>>
(class Fred{type a, type b extends B}() class)
- should parse multiline comment at end of file *** FAILED ***
Expected 0, but got 1 (TestsBase.scala:103)
- should fail on incomplete multiline comment
- should parse single line comments
- should allow a class name to be repeated at the end
- should warn when the class name at the end does not match
- should allow an interface name to be repeated at the end
- should warn when the interface name at the end does not match
----The AST after parsing-------------
The lines starting with "- should" or "Expected" come from the runner, and you can see that a bunch of them are plunked in the middle of the output from one of my tests. Other output from the runner appears elsewhere; this isn't all of it.
My questions: Why is this happening? Is there some way to get the runner's output to coordinate with my output?
Most likely, the suites are running in parallel.
http://www.scalatest.org/user_guide/using_the_runner#executingSuitesInParallel
With the proliferation of multi-core architectures, and the often
parallelizable nature of tests, it is useful to be able to run tests
in parallel. [...]
The -P option may optionally be appended with a number (e.g. "-P10" --
no intervening space) to specify the number of threads to be created
in the thread pool. If no number (or 0) is specified, the number of
threads will be decided based on the number of processors available.
So basically, pass -P1 to the runner. In Eclipse, the place to add it is probably the Arguments tab. A command-line example follows.
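For reference, the same flag on a direct command-line invocation of the runner; the classpath and runpath here are illustrative placeholders, not from the original answer:
scala -cp <scalatest jar and test classes> org.scalatest.tools.Runner -P1 -R target/test-classes -o
-P1 forces a single-threaded pool, -R sets the runpath, and -o selects the standard-output reporter, so test output and runner output interleave in order.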
