Jenkins Performance Report Plugin does not fail step - jenkins-plugins

I am using Maven to run JMeter, and the Jenkins Performance Report plugin in a pipeline script to generate the report. I have set a threshold in the plugin, but the report step does not fail (even when it shows that the threshold is reached); it only fails the build at the end. This is a problem because all the steps after the perf report still execute, which I don't want.
How do I make the build fail at the perf report step, so that I can fail it then and there? I only want the build to proceed if the perf report passes, but the perf step always passes and only the final status of the build is set to failed.

If you're using Maven, you should be able to fail the build during the Maven task execution stage by adding the following execution to the jmeter-maven-plugin stanza in your pom.xml:
<execution>
    <id>jmeter-check-results</id>
    <goals>
        <goal>results</goal>
    </goals>
</execution>
For the time being you can only manipulate the error percentage (it defaults to 0), so if your thresholds are more complex you might want to switch from Maven to Taurus, which has a quite powerful Pass/Fail Criteria subsystem.
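For context, here is how that execution sits in a fuller plugin stanza. This is a sketch assuming the com.lazerycode.jmeter jmeter-maven-plugin; the version and the threshold value of 5 are illustrative, not taken from the question:

```xml
<plugin>
    <groupId>com.lazerycode.jmeter</groupId>
    <artifactId>jmeter-maven-plugin</artifactId>
    <version>2.9.0</version>
    <executions>
        <!-- runs the JMeter test plans -->
        <execution>
            <id>jmeter-tests</id>
            <goals>
                <goal>jmeter</goal>
            </goals>
        </execution>
        <!-- scans the results and fails the build (and hence the pipeline step)
             when the error rate threshold is exceeded -->
        <execution>
            <id>jmeter-check-results</id>
            <goals>
                <goal>results</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- maximum percentage of failed samples tolerated; defaults to 0 -->
        <errorRateThresholdInPercent>5</errorRateThresholdInPercent>
    </configuration>
</plugin>
```

Because the results goal fails the `mvn` invocation itself, a pipeline `sh`/`bat` step running it will fail immediately, before later stages execute.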


How to enable usage of Flight Recorder in Surefire plugin

I have added the below properties in pom.xml for the Surefire plugin, but it is still failing with the below error:
<argLine>-XX:MaxPermSize=512m</argLine>
<argLine>-XX:+UnlockCommercialFeatures</argLine>
<argLine>-XX:+FlightRecorder</argLine>
Error: To use FlightRecorder, first unlock it using UnlockCommercialFeatures.
Any suggestions, with a sample pom.xml configuration?
My observation is that when I run the Maven build, this feature is enabled for the main thread but not for the forked JVM created by the Surefire plugin.
You cannot use argLine repeatedly; only one <argLine> element is honored. You need to use something along the lines of
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <argLine>@{argLine} -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=1000s,filename=surefire.jfr</argLine>
    </configuration>
</plugin>
instead. A similar setup works for failsafe.
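One note on the `@{argLine}` placeholder: it is Surefire's late-replacement syntax (available since Surefire 2.17) and expects an `argLine` property to exist, either set by you or appended to by another plugin such as JaCoCo. If nothing defines it, a sketch like the following declares an empty default so the placeholder always resolves:

```xml
<properties>
    <!-- empty default so @{argLine} in the Surefire configuration always resolves;
         plugins like JaCoCo append their own JVM flags to this property -->
    <argLine></argLine>
</properties>
```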

upgraded flink from 1.10 to 1.11, met error 'No ExecutorFactory found to execute the application'

java.lang.IllegalStateException: No ExecutorFactory found to execute the application.
at org.apache.flink.core.execution.DefaultExecutorServiceLoader.getExecutorFactory(DefaultExecutorServiceLoader.java:84)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1803)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1713)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1699)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1681)
at com.cep.StaticAlarmGenerationEntryTest.main(StaticAlarmGenerationEntryTest.java:149)
This is the error I got after upgrading Flink from 1.10 to 1.11; my IDE is Eclipse.
I tried adding the artifact flink-clients_${scala.binary.version}, but it still failed. If anybody has already met and solved this issue, please let me know. Thanks a lot.
See the 1.11 release notes: you now have to add an explicit dependency on flink-clients.
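Concretely, that means adding something like the following to your pom.xml. This is a sketch: `${flink.version}` and `${scala.binary.version}` are assumed to be properties your pom already defines (the quickstart archetypes set them up):

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <!-- Scala-suffixed artifact; the suffix must match your other Flink dependencies -->
    <artifactId>flink-clients_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
```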
I solved my problem this way:
1. Use Java 8; apparently Flink has some sort of problem with Java 11 and 15.
2. Change all the scopes to "compile". You can change scopes under Project Structure → Modules → Dependencies; one of the table's columns is named Scope.
I found out why the error happened even though I had added the flink-clients dependency. When I upgraded Flink from 1.10 to 1.11, I only edited the Flink version, not the Scala version; the Scala version should also be changed, to 2.12. The project had been generated from the 1.10 archetype with Scala 2.11, so every build kept using the 2.11 artifacts.
So the fastest way to solve this issue is:
1. Generate a new project with mvn archetype:generate -DarchetypeGroupId=org.apache.flink -DarchetypeArtifactId=flink-quickstart-java -DarchetypeVersion=1.11.0
2. Copy all your old code into this new project. You will find that flink-clients is already in the pom.xml.
I had this problem when packaging the Flink job into a shaded jar. When shading, if multiple jars contain a file with the same name, the file is overwritten as each jar is unpacked into the new shaded jar.
Flink uses the file META-INF/services/org.apache.flink.core.execution.PipelineExecutorFactory to discover different executor factories and this file is present in multiple jars, each with different contents.
To fix this, I had to tell the maven-shade plugin to combine these files together as it came across them, and this solved the problem for me.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.1.0</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <shadedClassifierName>job</shadedClassifierName>
                <transformers>
                    <!-- add this to combine the PipelineExecutorFactory files into one -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>META-INF/services/org.apache.flink.core.execution.PipelineExecutorFactory</resource>
                    </transformer>
                </transformers>
                ...
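An alternative, if you would rather not list each service file individually, is the shade plugin's ServicesResourceTransformer, which merges every META-INF/services file it encounters. A sketch, swapped in for the transformers element above:

```xml
<transformers>
    <!-- merges all META-INF/services files found across the input jars,
         so every ServiceLoader registration survives shading -->
    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
</transformers>
```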

Unable to deploy profile to JBoss Fuse 6.1 using fabric8:deploy

I am trying to deploy a simple Camel route to my local instance of JBoss Fuse 6.1 (GA release). I am trying to use the fabric8-maven-plugin to do so, but every time I run fabric8:deploy I receive the following error:
Failed to execute goal io.fabric8:fabric8-maven-plugin:1.0.0.redhat-379:deploy (default-cli) on project filemover: Error executing: IO-Error while contacting the server: org.apache.http.NoHttpResponseException: The target server failed to respond
Here is my current plugin definition from my pom file:
<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>fabric8-maven-plugin</artifactId>
    <version>1.0.0.redhat-379</version>
    <configuration>
        <profile>sample-filemover</profile>
        <parentProfiles>feature-camel</parentProfiles>
        <features>mq-fabric-camel</features>
    </configuration>
</plugin>
My ~/.m2/user/settings.xml file contains the following server definition
<server>
    <id>fabric8.upload.repo</id>
    <username>admin</username>
    <password>admin</password>
</server>
And I am executing the following mvn command
mvn fabric8:deploy -Dmaven.test.skip=true
(I realize I am skipping the tests, but I am trying to just deploy a profile at this time)
I can log onto the management console just fine and can see the root container no problem. Have I missed something in the configuration of Fuse to enable this?
I just spent some hours on the same problem.
Change the version to 1.1.0.CR5 and you can deploy using mvn fabric8:deploy
Best Regards
Are you sure you are deploying to the correct server? Just set the fabric URL of the server you want to deploy to. I'm using that plugin version and it works correctly.
I know it may be too late, but just so you know, you can deploy to any server with
mvn clean fabric8:deploy -Dfabric8.jolokiaUrl=http://localhost:8181/jolokia
Cheers
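If you don't want to pass the URL on every invocation, the same value can presumably be set as a project property in the pom. This is an assumption based on the -Dfabric8.jolokiaUrl flag above, mirroring the system property name:

```xml
<properties>
    <!-- assumed property name, mirroring -Dfabric8.jolokiaUrl on the command line;
         the URL is the Jolokia endpoint of the Fuse instance to deploy to -->
    <fabric8.jolokiaUrl>http://localhost:8181/jolokia</fabric8.jolokiaUrl>
</properties>
```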

Maven surefire report's filename has redundant spaces when the tests are written using cucumber

I am using maven-surefire-plugin version 2.16 along with tests written using Cucumber. The .xml and .txt reports do come up fine in the surefire-reports folder, but the text within the .xml and .txt files, including the names, has a lot of extra spaces, proportional to the cumulative number of step definitions executed. The filename also has a number of spaces proportional to the number of steps executed. If I run a lot of tests, the file simply does not save, with the following exception:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project **:
Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test failed: org.apache.maven.surefire.util.NestedRuntimeException: null;
nested exception is org.apache.maven.surefire.report.ReporterException:
Unable to create file for report: /Users/kgupta2/git/$$$$$$$$/target/surefire-reports/Scenario: Using $$$$$ .txt (File name too long);
nested exception is java.io.FileNotFoundException: /Users/kgupta2/git/$$$$$$$$/target/surefire-reports/Scenario: Using $$$$$$$$ .txt (File name too long)
-> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
Clearly the filename becomes too long, and I have verified that its length is proportional to the number of steps executed. I am running Cucumber via JUnit.
Here is my configuration for maven-surefire-plugin
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.16</version>
    <configuration>
        <testSourceDirectory>${project.basedir}/src/test/java</testSourceDirectory>
        <includes>
            <include>**/CucumberJunitRun.java</include>
        </includes>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.surefire</groupId>
            <artifactId>surefire-junit47</artifactId>
            <version>2.16</version>
        </dependency>
    </dependencies>
</plugin>
I am unable to understand why these additional spaces pop up.
I got this to work; it was an issue with incorrect plugin configuration. Since I was using JUnit to run my Cucumber tests, I removed all references to TestNG from my pom, and that made it work, almost magically. Some TestNG references were coming in from my parent pom; my effective pom did have a TestNG dependency. I don't have a good explanation for why this caused the spaces, but the corrected configuration also fixed a bunch of other errors that were creeping in.

How to disable maven release plugin check local modifications?

I use the maven release plugin. My pom contains an Ant task that automatically fixes up some properties files with additional information. These fixes should not go into SCM.
But Maven does not finish successfully; it fails with the error:
Cannot prepare the release because you have local modifications
Is it possible to set some parameter so that it does not check for local modifications?
Thanks.
I'm not very familiar with the maven-release-plugin, but I can see that there is a checkModificationExcludes property that you can use for this purpose. The config should look somewhat like this:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-release-plugin</artifactId>
    <version>2.2.2</version>
    <configuration>
        ...
        <checkModificationExcludes>
            <checkModificationExclude>file_1</checkModificationExclude>
            <checkModificationExclude>dir_1/file_2</checkModificationExclude>
        </checkModificationExcludes>
    </configuration>
</plugin>
We were trying to run the release from Jenkins, but it always failed with the same message...
Cannot prepare the release because you have local modifications
...which was weird, because we use Jenkins to check out and build the latest sources before releasing.
We finally figured out that the problem was that we were building on a Windows node and some file paths were too long, which caused the maven-release-plugin to complain about local modifications. Switching to a Linux node solved the problem.
Removing my project's target folder from source control fixed this issue for me.
I would suggest fixing your build process so that it does not 'fix up' files that are under SCM. There are several ways to do this; the simplest is to copy the properties files in question to some directory under ${project.build.outputDirectory} and run your Ant script on those copies rather than the originals.
@AndrewLogvinov's answer is halfway there. The other half is mentioned in this:
I discovered that the comparison in org.apache.maven.shared.release.phase.ScmCheckModificationsPhase.execute()
strips the path and compares file names only. So I changed my pom to ignore
application.properties instead of ${thewholepath}/application.properties.
Lo and behold, success.
For some weird reason, you can't include paths in this tag. You can only specify file names.
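Given that filename-only comparison, an exclude entry should look like the following sketch, where application.properties stands in for whatever file your Ant task rewrites:

```xml
<checkModificationExcludes>
    <!-- bare file name only: the plugin strips paths before comparing,
         so a path-qualified entry here would never match -->
    <checkModificationExclude>application.properties</checkModificationExclude>
</checkModificationExcludes>
```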
In my case on Windows, it worked after renaming the job to a shorter name (about 20 characters). Both the Jenkins job name and the SVN branch name were long (about 40 characters).
If file paths are too long, the maven-release-plugin complains about local modifications. Jenkins creates a local workspace named after the job, so with a path like the one below, a Jenkins release job will start complaining about local modifications:
D:\dev.env\data\jenkins\jobs\<LongerJobName>\workspace\<LongerBranchName>\testCommonJar\src\main\java\com.example.webservice.service.TestServiceImpl.java
