How can I run a job in Jenkins n-times? - batch-file

Is it possible in Jenkins to create a job that will run n times?
I would like to write a script in the configuration (Windows batch command / Groovy) which allows me to do it. In this script, I would like to have an array with parameters and then run the job with each parameter in a loop. It should look something like this:
def paramArray = ["a", "b", "c"]
for (int i = 0; i < paramArray.size(); i++) {
    // Here I want to run this job with each parameter
    job.run(paramArray[i])
}
Please help me with this issue.

I found the answer!
We need to create two pipelines in Jenkins: a downstream and an upstream job.
1. The downstream job is parameterized and takes one string parameter in its 'General' section. In its 'Pipeline' section it simply prints the chosen parameter.
2. The upstream job holds an array with all possible parameters for the downstream job, and in a loop it triggers the downstream job once with each parameter from the array.
As a result, the upstream job runs the downstream job three times, once per parameter. A sketch of both jobs is shown below.
:)
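A minimal sketch of both jobs, assuming the downstream job is named 'downstream-job' and its parameter is named 'PARAM' (both names are assumptions):

// Downstream job: parameterized, just prints the value it receives
pipeline {
    agent any
    parameters {
        string(name: 'PARAM', defaultValue: '', description: 'Value passed in by the upstream job')
    }
    stages {
        stage('Print') {
            steps {
                echo "Received parameter: ${params.PARAM}"
            }
        }
    }
}

// Upstream job: triggers the downstream job once per element of the array
node {
    def paramArray = ['a', 'b', 'c']
    for (int i = 0; i < paramArray.size(); i++) {
        build job: 'downstream-job', parameters: [string(name: 'PARAM', value: paramArray[i])]
    }
}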

I think you can't run a Jenkins job with code like the above, but you can configure a cron-style trigger in Jenkins using "Build periodically" to run the job periodically.
Go to the Jenkins job > Configure > tick "Build periodically" in Build Triggers, enter a cron-style schedule, and save.
For example, the job can run every 15 minutes, or you can set a specific time in the schedule.
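As a sketch, an every-15-minutes schedule in the 'Schedule' field looks like this (the H spreads the exact start minute to avoid load spikes):

H/15 * * * *

The same trigger can also be declared in a Jenkinsfile with triggers { cron('H/15 * * * *') }.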

Please see the example from https://jenkins.io/doc/book/pipeline/jenkinsfile/ in the "Handling parameters" section. With a Jenkinsfile like this (example copied from that doc), you can launch "Build with parameters" and supply the params. Since you want multiple params, you can delimit them with , or ; or something else based on your data; you just need to parse the input params to get the values using the delimiter you chose.
pipeline {
    agent any
    parameters {
        string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params.Greeting} World!"
            }
        }
    }
}
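As a sketch of that parsing idea (the parameter name Greetings, the ; delimiter, and the default values are assumptions, not from the Jenkins doc), a single string parameter can be split inside a script block and each value handled in a loop:

pipeline {
    agent any
    parameters {
        // One string parameter carrying several values, e.g. "Hello;Hi;Hey"
        string(name: 'Greetings', defaultValue: 'Hello;Hi;Hey', description: 'Delimited list of greetings')
    }
    stages {
        stage('Example') {
            steps {
                script {
                    // Split on the chosen delimiter and act on each value
                    params.Greetings.split(';').each { greeting ->
                        echo "${greeting} World!"
                    }
                }
            }
        }
    }
}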

Related

Jenkins Pipeline: How to access pipeline parameters and insert them into batch file argument

I am trying to use the parameterized build and I am getting lost in some sources right now. I thought it would be possible to access the build parameters with a "params.myParameter" call, or a ${env.myParameter}, but neither appears to be working. This does not cause syntax errors currently, but the parameter is being read as "" for the if statement and the param access is being used as a literal string for the batch call.
What I have is the following:
pipeline {
    agent {
        node {
            label 'dummy_label'
        }
    }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Setup') {
            steps {
                // For debug purposes, state which user we are running as
                bat 'whoami'
                // Now do the checkout
                timeout(time: 60, unit: 'MINUTES') {
                    retry(10) {
                        // Sometimes this fails because Integrity is terrible.
                        checkout scm
                        sleep 5
                        // Check to see if "VerifyCheckout.bat" exists. Sometimes Integrity doesn't resync the files... just the folders. Again, Integrity is terrible.
                        bat ".\\HWIOAPPL\\Test\\Jenkins_scripts\\VerifyCheckout.bat"
                    }
                }
                dir("${env.PROJECT_ROOT}\\HWIOAPPL\\Test\\Jenkins_scripts") {
                    bat ".\\QAC_CLI_Setup.bat"
                    script {
                        if (params.Release_Tag != "mainline") {
                            bat ".\\ZIP_Software.bat 'params.Release_Tag'.zip"
                        }
                    }
                }
            }
        } //More stages and stuff after this
You need to specify your parameters in a parameters block at the top of the file - between the options and stages blocks should do (https://www.jenkins.io/doc/book/pipeline/syntax/#parameters).
ex.
parameters {
    string(name: 'Release_Tag', defaultValue: '1.0', description: 'Version of the Release')
}
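Once the parameter is declared, its value can be interpolated into the bat call with a double-quoted Groovy string; a sketch against the asker's (hypothetical) ZIP_Software.bat step:

script {
    if (params.Release_Tag != "mainline") {
        // Double quotes are required for Groovy string interpolation
        bat ".\\ZIP_Software.bat ${params.Release_Tag}.zip"
    }
}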

Map subtask_id to TaskManager in Flink

I have an operator with parallelism=256 running on 128 task managers. Each time when I get a checkpoint failure, it happens at the same subtask of this operator, for example it's always subtask 129 that gets stuck and blocks the checkpointing. I want to understand what happened to this subtask by examining logs of the task manager that subtask 129 is running on. Is there a way in Flink to map subtask id to the corresponding Task Manager?
The taskmanager.log files contain the names of the deployed tasks including their sub task index. You could simply search for the TASK_NAME (129/256) in all taskmanager.log files.
I was able to find a non-trivial but working solution to get the required map at runtime programmatically.
The main idea is that the Rest Endpoint /jobs/:jobid/vertices/:vertexid provides the necessary information for a specific vertex in format
{
    "id": "804e...",
    "name": "Map -> Sink",
    ...
    "subtasks": [
        {
            "subtask": 0,
            "host": "ip-10-xx-yy-zz:36ddd"
        },
        ...
    ]
}
The main difficulty was getting the web interface URL programmatically. I was able to get it this way (there is probably a more elegant solution):
// FieldUtils is org.apache.commons.lang3.reflect.FieldUtils (Apache Commons Lang);
// it reflectively reads the task's RuntimeEnvironment out of the StreamingRuntimeContext.
val env = FieldUtils
  .readField(getRuntimeContext.asInstanceOf[StreamingRuntimeContext], "taskEnvironment", true)
  .asInstanceOf[RuntimeEnvironment]
try {
  println("trying to get cluster client...")
  val client = new RestClusterClient[String](env.getTaskManagerInfo.getConfiguration, "rest")
  return client.getWebInterfaceURL
} catch {
  case e: Exception =>
    println("Failed to get cluster client : ")
    e.printStackTrace()
}
Given the web interface url, I simply made an http call to it and constructed the map.

End Gatling simulation when scenario fails BUT generate a report

I have code which currently will not run my scenario if it fails:
//Defined outside of the scenario scope
var simulationHealthy = true
//defined within the scenario
.exec((session: io.gatling.core.session.Session) => {
  if (session.status == KO) {
    simulationHealthy = false
  }
  session
})
However, my simulation keeps running until the duration set for the simulation is over, even though the scenario stops executing.
What I would like to do is to have a scenario fail under conditions I define (similar to assertions) and for the entire simulation to fail at that point as well, and also generate a report.
Thanks
Edit: I am running these tests within the IntelliJ IDE. Ending the simulation programmatically is required.
You might run the test itself without a report and produce the report with a second call that only does report generation from the simulation.log.
Run Simulation w/o report (-nr flag), i.e.
gatling.sh -nr -s YourSimulationClass
Generate Report (-ro flag):
gatling.sh -ro yoursimulation
(yoursimulation is the path underneath the results folder, which can be specified with -rf, that contains the simulation.log file)
In IntelliJ you can define another LaunchConfiguration to be executed before: one configuration for executing the Gatling test (with the -nr flag) and another for report generation (with the -ro flag) that runs the Gatling test configuration before it.
Alternatively you could use the gatling-maven-plugin and define two executions (run, report) with the same flags.
Edit
According to this group thread you could execute your steps conditionally or mute them. The condition could be the presence of an error, but anything else works as well. If the condition depends on global state, i.e. a global variable, it mutes all users (unlike exitHereIfFailed, which only affects the current virtual user).
For example:
val continue = new AtomicBoolean(true)
val scn = scenario("MyTest")
  .exec(
    doIf(session => continue.get) {
      exec(http("request_0").get("/home").check(status.is(200)))
        .exec((session: io.gatling.core.session.Session) => {
          if (session.status == KO) {
            continue.set(false)
          }
          session
        })
    })
As said, this only stops sending requests to the SUT. Seems there is no other option at the moment (apart from System.exit(0))
You can use exitHereIfFailed in ScenarioBuilder returned by exec().
.exec(http("login")
  .post("/serviceapp/api/auth/login")
  ...
  .check(status.is(200))))
.exitHereIfFailed
.pause(1)
.exec(http("getProfileDetails")
  .get("/serviceapp/api/user/get_profile")
  .headers(authHeader("${token}"))
  .check(status.is(200)))
Thanks to @GeraldMücke's suggestion of using System.exit, I've come up with a workaround. It is still nowhere close to ideal, but it does the job.
The problems are:
- I still have to manually generate the report from the log that is created when Gatling is run.
- The user has to constantly manage how long the scenario lasts for both items, as I don't know a way to have a scenario last the length of the simulation.
- This is obviously a "proof of concept"; it has nothing in the code to define failure over thresholds etc. like the asserts and checks available in Gatling itself.
Here's the code. I've nested simulations within the setUp function because it fits the criteria of the work I am doing currently, allowing me to run multiple simulations within the main simulation.
FailOverSimulation and ScenarioFailOver are the classes that need to be added to the list; obviously this only adds value when you are running something that loops within the setUp.
import java.util.concurrent.atomic.AtomicBoolean

import io.gatling.commons.stats.KO
import io.gatling.core.Predef._
import io.gatling.core.scenario.Simulation
import io.gatling.http.Predef._

import scala.concurrent.duration._

object ScenarioTest {
  val get = scenario("Test Scenario")
    .exec(http("Test Scenario")
      .get("https://.co.uk/")
    )
    .exec((session: io.gatling.core.session.Session) => {
      if (session.status == KO) {
        ScenarioFailOver.exitFlag.set(true)
      }
      session
    })
}

object TestSimulation {
  val fullScenario = List(
    ScenarioTest.get.inject(constantUsersPerSec(1).during(10.seconds))
  )
}

object ScenarioFailOver {
  var exitFlag = new AtomicBoolean(false)
  val get = scenario("Fail Over")
    .doIf(session => exitFlag.get()) {
      exec(s => {
        java.lang.System.exit(0)
        s
      })
    }
}

object FailOverSimulation {
  val fullScenario = List(
    ScenarioFailOver.get.inject(constantUsersPerSec(1).during(10.seconds))
  )
}

class SimulateTestEnding extends Simulation {
  setUp(
    FailOverSimulation.fullScenario
      ::: TestSimulation.fullScenario
  ).protocols(
  )
}

How to verify contents of config file before running build in TeamCity

I would like to add a step to our TeamCity configuration that checks the contents of a web.config file.
If a key value isn't found, that means someone's checked it in with the wrong value and we shouldn't proceed with the build.
(TeamCity is running on a Windows server.)
I'm able to add a command line runner that executes the appropriate FIND command, but I can't capture the output from the FIND and use it within a subsequent IF statement.
Attempts to embed the FIND within a FOR statement have been unsuccessful.
Any suggestions?
You can use the PowerShell runner:
$key = 'your-key'
[xml] $config = Get-Content path\to\web.config
$value = $config.SelectSingleNode("/configuration/appSettings/add[@key='$key']/@value")
if ($value.Value -ne 'your expected value') {
    exit 1
}
You could create a simple NAnt script that uses the xmlpeek task to check the value.

TFS2010 - capture unittest execution time in team build

Is it possible to 'capture' or persist the time it takes per unit test when running a team build on TFS 2010? Ideally saving it to a database (like a load test can save its results to a result store).
Thanks in advance!
If you run Visual Studio unit tests during the build, you can choose to publish the test results to the server; later you can query the test run and results to find out the duration of each test result.
The code to query the test results per build looks like this:
var tcmService = TeamProjectCollection.GetService<ITestManagementService>();
var tcmProject = tcmService.GetTeamProject(TeamProjectName);
ITestRun testRun = tcmProject.TestRuns.ByBuild(BuildUri).First();
ITestCaseResultCollection results = testRun.QueryResults();
foreach (ITestResult result in results) { Console.WriteLine(result.Duration); }
You will need to obtain the team project collection, know the team project name and the build uri. This code assumes that your build has only one published test run, though that sometimes is not true because you can publish other test runs to the same build after it is completed.
Hope this helps.
