Why is the failure message also duplicated inside the step in the HTML report? - Allure

I have an Allure XML file that looks something like this:
<test-case start="1456329345978" stop="1456329352078" status="failed">
  <name>SYSTEM_ASSERT_ERRORS</name>
  <title>Assert that there are no critical errors</title>
  <description type="text">Assert that there are no critical errors</description>
  <failure>
    <message>AssertionError: FAIL: 3 out of 4 test steps failed</message>
    <stack-trace>java.lang.AssertionError: FAIL: 3 out of 4 test steps failed: Not true that <3> is equal to <0>
    </stack-trace>
  </failure>
  <steps>
    <step start="1456329346636" stop="1456329351963" status="passed">
      <name>verifyLogs[JSchException]</name>
      <title>verifyLogs: [JSchException]</title>
    </step>
    <step start="1456329346636" stop="1456329351965" status="failed">
      <name>verifyLogs["Error code"]</name>
      <title>verifyLogs: ["Error code"]</title>
    </step>
  </steps>
  <attachments>
    <attachment title="log.txt" source="1be56864-2031-4e82-bfdd-ce5152a5bbc2-attachment.txt" type="text/plain"/>
  </attachments>
</test-case>
On converting this to HTML, the failure is shown at the top of the test case pane and also just below the first failed step. I don't see a reason for displaying it again there. Is there a reason?

The reason might be to make the failed test step visible. Also, if you have many test steps, you can check the stack trace of the failed step without scrolling back to the very top.
If you catch exceptions inside your test or map them, the real exception thrown by a test step could differ from the one reported by Allure. That is considered bad practice (as is managing threads inside your test) and is not supported by Allure, AFAIK.
So, I don't see an issue here for now.

Related

Unable to push Vespa metrics to CloudWatch

Basically, I need to monitor Vespa metrics, and for that I am trying to push metrics to CloudWatch.
This is the document that I am referring to: https://docs.vespa.ai/documentation/monitoring.html
I have added the credentials file and the putMetricData permission in the attached IAM role. The services.xml file that I am using looks like this:
<admin version="2.0">
  <adminserver hostalias="admin0"/>
  <configservers>
    <configserver hostalias="admin0"/>
  </configservers>
  <monitoring>
  </monitoring>
  <metrics>
    <consumer id="my-cloudwatch">
      <metric-set id="vespa" />
      <cloudwatch region="ap-south-1" namespace="vespa">
        <shared-credentials file="~/.aws/credentials" profile="default" />
      </cloudwatch>
    </consumer>
  </metrics>
</admin>
I have deployed the code using vespa-deploy prepare application.zip && vespa-deploy activate, but I am still not seeing any metrics in CloudWatch.
Also, I have tried to add:
<monitoring>
  <interval>1</interval>
  <systemname>vespa</systemname>
</monitoring>
But getting this error when deploying:
Request failed. HTTP status code: 400
Invalid application package: default.default: Error loading model: XML error in services.xml: element "interval" not allowed here; expected the element end-tag [9:16], input:
How can I fix this issue, or at least debug it?
I suggest using an absolute path to the credentials file, as the ~ may not resolve to the directory you intended at runtime.
A couple more things:
I recommend using the default metric set, as vespa contains a lot of metrics, which will drive your CloudWatch costs up. If you need additional metrics, you can add them with the metric tag inside the consumer element.
The monitoring element doesn't do anything useful in this context, so you should just drop it.
If you still don't see any metrics, check for warnings or errors in the Vespa log (use vespa-logfmt) and in the Telegraf log file: /opt/vespa/logs/telegraf/telegraf.log. (Vespa uses Telegraf internally to emit metrics to CloudWatch.)
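Putting those suggestions together, the admin section would look something like this (a sketch based on the config in the question; the credentials path is a placeholder for your actual absolute path):
<admin version="2.0">
  <adminserver hostalias="admin0"/>
  <configservers>
    <configserver hostalias="admin0"/>
  </configservers>
  <metrics>
    <consumer id="my-cloudwatch">
      <!-- "default" is the leaner built-in set; "vespa" emits far more metrics -->
      <metric-set id="default" />
      <cloudwatch region="ap-south-1" namespace="vespa">
        <!-- absolute path instead of ~, so it resolves the same way at runtime -->
        <shared-credentials file="/absolute/path/to/credentials" profile="default" />
      </cloudwatch>
    </consumer>
  </metrics>
</admin>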

Android Manifest merger error in Codename One

In a bare bones project, I added these build hints:
android.gradleDep=compile 'com.erikagtierrez.multiple_media_picker:multiple-media-picker:1.0.5'
android.min_sdk_version=23
I would like to import the following Android library to make a CN1Lib (that requires at least Android SDK 23):
https://github.com/erikagtierrez/multiple-media-picker
In short: I spent a day trying to import it, and I also experimented with Android Studio and with suggestions found on Stack Overflow (trying to make a custom .aar), without success.
Could you help me import that library? There is a manifest merger error.
In fact, the issue reported by the build server is:
* What went wrong:
Execution failed for task ':processReleaseManifest'.
> Manifest merger failed : Attribute application#label value=(BareBones) from AndroidManifest.xml:15:17-42
is also present at [com.erikagtierrez.multiple_media_picker:multiple-media-picker:1.0.5] AndroidManifest.xml:23:9-41 value=(#string/app_name).
Suggestion: add 'tools:replace="android:label"' to <application> element at AndroidManifest.xml:15:3-43:103 to override.
I also tried to add the build hint:
android.xapplication_attr=tools:replace="android:label"
as suggested by the previous error, without success.
In that case, I get:
Merging result: ERROR
/tmp/build1659178556337293135xxx/Test/src/main/AndroidManifest.xml:15:3-43:103 Error:
tools:replace specified at line:15 for attribute android:label, but no new value specified
/tmp/build1659178556337293135xxx/Test/src/main/AndroidManifest.xml Error:
Validation failed, exiting
-- Merging decision tree log ---
The last full log is here: https://gist.github.com/jsfan3/dd6c23f86a2ac949f996910c8cece62b
Thank you
This is happening because our code thinks you injected android:label on your own and doesn't inject it, to avoid a collision...
Change the build hint to this:
android.xapplication_attr=tools:replace="android:label" android:label="App Name"
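The reason the first attempt failed is that tools:replace alone announces an intent to replace the attribute but supplies no replacement, hence the "no new value specified" error; passing android:label in the same hint provides that value. The resulting <application> element in the generated manifest should look roughly like this (illustrative; "App Name" stands for your actual label, and only the relevant attributes are shown):
<application android:label="App Name"
             tools:replace="android:label"
             ... >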

How to retry a failed scenario in Behave using Python

Can someone please tell me how I can run a failed test again in Behave using Python?
I want to re-run failed test cases automatically.
The behave library actually has a RerunFormatter which can help you rerun the failing scenarios of your previous test run. It creates a text file listing all your failing scenarios, like:
# -- file:rerun.features
# RERUN: Failing scenarios during last test run.
features/auth.feature:10
features/auth.feature:42
features/notifications.feature:67
To use the RerunFormatter, all you need to do is put it in your behave configuration file (behave.ini):
# -- file:behave.ini
[behave]
format = rerun
outfiles = rerun_failing.features
To rerun the failing scenarios, use this command:
behave @rerun_failing.features
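A typical cycle then looks like this (the file name matches the outfiles setting above):
# -- first run: the RerunFormatter records failures into rerun_failing.features
behave
# -- second run: execute only the scenarios recorded in that file
behave @rerun_failing.features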
I know this is a late answer, but it could help others.
There is another approach that could also help: implement it in the environment.py file, where you can retry by a specific tag.
Provides support functionality to retry scenarios a number of times
before their failure is accepted. This functionality can be helpful
when you use behave tests in an unreliable server/network
infrastructure.
For example, I am running the tag "@smoke_test" on CI, so I chose this tag to patch with the retry condition.
First, on your environment.py import the following:
# -- file: environment.py
from behave.contrib.scenario_autoretry import patch_scenario_with_autoretry
Then add the method:
# -- file: environment.py
def before_feature(context, feature):
    for scenario in feature.scenarios:
        if "smoke_test" in scenario.effective_tags:
            patch_scenario_with_autoretry(scenario, max_attempts=3)
*max_attempts defaults to 3; I wrote it out here just to make explicit that you can set how many retries you want.
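With that in place, a CI invocation that selects the tag will retry each matching scenario transparently, e.g.:
behave --tags=smoke_test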

Camel pollEnrich and XML 'prettyPrint'

I am attempting to use Camel's pollEnrich feature, but it is not behaving as I would like... I'm not saying it's broken, but I wonder if there is a way to get the behavior I want. I have an XML (blueprint) defined route that goes something like this:
<route>
  <from uri="direct:a" />
  <pollEnrich uri="http://www.somewebsite.com?format=application/xml" />
  <to uri="log:com.acme?level=WARN&amp;showStreams=true" />
</route>
Now, the response normally comes back just fine (e.g., in a web browser). The problem seems to be that it is not all on one line, and for some reason Camel reads each line sequentially into the same buffer, starting at character zero... so what we end up with is one messy line in the output from pollEnrich. That is, the to uri="log..." line prints messages like:
2015-05-26 13:55:26,379 | WARN | a.distr.topic.B] | contentEnrich |
? ? | 142 - org.apache.camel.camel-core - 2.12.0.redhat-610379 |
Exchange[ExchangePattern: InOnly, BodyType:
org.apache.camel.converter.stream.InputStreamCache, Body:
<?xml versi</ElementStatus> ]pe></Status>nd>gin>ys for this element.</Reason>>ame>
(last line vertically offset for emphasis)
I cannot seem to find a way to tell Camel that the result will be in 'prettyPrint' format... Does anyone know how? The documentation seems to suggest that this option does not exist--in which case, I'd consider this a bug... though I suppose a person could argue that a custom aggregation strategy should be used (and I'd disagree with that person, citing the simplicity of this case). :)
UPDATE #1: even using org.apache.camel.processor.aggregate.UseLatestAggregationStrategy produces the same effect (i.e., used as below):
<bean id="latestStrat"
      class="org.apache.camel.processor.aggregate.UseLatestAggregationStrategy" />

<route>
  <from uri="direct:a" />
  <pollEnrich uri="http://www.somewebsite.com?format=application/xml" strategyRef="latestStrat" />
  <to uri="log:com.acme?level=WARN&amp;showStreams=true" />
</route>
...going to cross my fingers and try org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy, but I am guessing there is a configuration limitation, with Camel always treating EOL characters as message delimiters.
UPDATE #2 - additional information:
The REST (GET) response received (tested with wget) has blank lines and null fields, but no carriage returns (^M). I've tried both the http and http4 components; same result. There is a leading <?xml version="1.0" encoding="UTF-8"?>, but no namespace/style info. I also just noticed that tab characters have been used for the pretty-ish indents. In sum, the response looks like:
<?xml version="1.0" encoding="UTF-8"?><ElementStatus>
	<Flag>false</Flag>
	<CODE>XYZ</CODE>
	<Locale>Western</Locale>
	...
(again, where the whitespace indenting has been done with tabs--AND the blank lines have a few tabs too)
...so the "answer" is that this is an apparent limitation of (or bug within) the log component's showStreams logic. I implemented a Processor in a <bean>, routed the Exchange output from my pollEnrich to that <bean>, and logged the contents there instead; that output matches exactly the output from wget.
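For reference, the workaround looks roughly like this (a minimal sketch; the class name is mine, and it assumes SLF4J logging is available, as it usually is in a Camel/blueprint deployment):
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Minimal sketch of the workaround: materialize the streamed body once and
// log it verbatim, instead of relying on the log component's showStreams.
public class BodyLoggingProcessor implements Processor {
    private static final Logger LOG = LoggerFactory.getLogger(BodyLoggingProcessor.class);

    @Override
    public void process(Exchange exchange) throws Exception {
        // Converting to String reads the InputStreamCache in one piece, so
        // the multi-line XML is logged exactly as it arrived over HTTP.
        String body = exchange.getIn().getBody(String.class);
        LOG.warn("pollEnrich response:\n{}", body);
        // Put the String back so downstream processors still see the payload.
        exchange.getIn().setBody(body);
    }
}
The bean is declared in the blueprint with <bean id="bodyLogger" class="..."/> and invoked from the route via <process ref="bodyLogger"/> in place of the log endpoint.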
FYI: this is camel-paxlogging (2.12.0.redhat-610379). I am not sure what underlying version of Camel that corresponds to, as I don't seem to have a jboss-parent-2.12.0 POM in my Maven repo (which is strange, since I have other jboss-parent POMs), and the Red Hat documentation doesn't seem to get into version composition.
FYI #2: on a related note, when I use GroupedExchangeAggregationStrategy it does produce a List<Exchange>, BUT it behaves effectively the same as UseLatestAggregationStrategy: 'grouped' produces a one-item List<Exchange> containing only the pollEnrich result, while 'latest' produces a standalone Exchange object that has only the pollEnrich result. Seems like an error in either GroupedExchangeAggregationStrategy or pollEnrich... but this will likely be the topic of my next Stack post.

How can I handle errors I get from the liquibase updateDatabase ant task

I'm currently working on some Ant scripting for applying Liquibase changes to databases.
I'd like to be able to handle errors that I get in Ant from the Liquibase updateDatabase task. Here is what I have right now in my build file (bear in mind that what I have now works fine; I just need to be able to handle errors I might get from running Liquibase):
<target name="update_db" depends="prepare">
  <taskdef resource="liquibasetasks.properties">
    <classpath refid="classpath"/>
  </taskdef>
  <updateDatabase
      changeLogFile="${db.changelog.file}"
      driver="${database.driver}"
      url="jdbc:mysql://localhost/${db.name}"
      username="${user}"
      password="${password}"
      promptOnNonLocalDatabase="not local database"
      dropFirst="false"
      classpathref="classpath"
  />
</target>
Currently when I get an error I get something similar to this (from a situation I created to demonstrate):
BUILD FAILED
MYPATH\build.xml:15: The following error occurred while executing this line:
MYPATH\\build.xml:117: liquibase.exception.MigrationFailedException: Migration failed
for change set PATH/2.20.9/tables.xml::FFP-1384::AUSER:
Reason: liquibase.exception.DatabaseException: Error executing SQL ALTER TABLE
test.widget ADD full_screen BIT(1) DEFAULT 0: Duplicate column name 'full_screen'
.............. and the wall of text continues
Ideally, I would like to get the return code (rather than this block of text) from Liquibase into Ant and then, based on that, do something such as:
<echo message="this failed because ${reason}"/>
but not limited to that.
Is there some way for me to obtain the return code from Liquibase? My best guess is that, similar to the Ant exec task, the return code is ignored by default, and I'm hoping there is some way to get at it. Any suggestions welcome.
edit: Vaguely similar question https://stackoverflow.com/questions/17856564/liquibase-3-0-2-logging-to-error-console
The ant-contrib trycatch task has enabled me to handle the error, which turned out to be a suitable fix, since the stack trace is actually useful to see anyway.
http://ant-contrib.sourceforge.net/tasks/tasks/trycatch.html
<trycatch property="foo" reference="bar">
  <try>
    <fail>Tada!</fail>
  </try>
  <catch>
    <echo>In &lt;catch&gt;.</echo>
  </catch>
  <finally>
    <echo>In &lt;finally&gt;.</echo>
  </finally>
</trycatch>
You need to download the ant-contrib JAR; if you don't want to place it in your ANT_HOME, you can use:
<taskdef resource="net/sf/antcontrib/antcontrib.properties">
  <classpath>
    <pathelement location="PATH TO JAR"/>
  </classpath>
</taskdef>
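Applied to the update_db target from the question, the pieces fit together roughly like this (a sketch; the property name liquibase.error is my own choice, and per the trycatch docs the property receives the message of the caught exception):
<target name="update_db" depends="prepare">
  <taskdef resource="liquibasetasks.properties">
    <classpath refid="classpath"/>
  </taskdef>
  <trycatch property="liquibase.error">
    <try>
      <updateDatabase
          changeLogFile="${db.changelog.file}"
          driver="${database.driver}"
          url="jdbc:mysql://localhost/${db.name}"
          username="${user}"
          password="${password}"
          promptOnNonLocalDatabase="not local database"
          dropFirst="false"
          classpathref="classpath"
      />
    </try>
    <catch>
      <!-- the migration failure text is now available as a property -->
      <echo message="this failed because ${liquibase.error}"/>
    </catch>
  </trycatch>
</target>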
