Jenkins Pipeline: How to access pipeline parameters and insert them into batch file arguments

I am trying to use a parameterized build and I am getting lost in the sources right now. I thought it would be possible to access the build parameters with a "params.myParameter" call, or with ${env.myParameter}, but neither appears to work. This does not cause syntax errors currently, but the parameter reads as "" in the if statement, and the parameter access is treated as a literal string in the batch call.
What I have is the following:
pipeline {
    agent {
        node {
            label 'dummy_label'
        }
    }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Setup') {
            steps {
                // For debug purposes, state which user we are running as
                bat 'whoami'
                // Now do the checkout
                timeout(time: 60, unit: 'MINUTES') {
                    retry(10) {
                        // Sometimes this fails because Integrity is terrible.
                        checkout scm
                        sleep 5
                        // Check to see if "VerifyCheckout.bat" exists. Sometimes Integrity doesn't resync the files... just the folders. Again, Integrity is terrible.
                        bat ".\\HWIOAPPL\\Test\\Jenkins_scripts\\VerifyCheckout.bat"
                    }
                }
                dir("${env.PROJECT_ROOT}\\HWIOAPPL\\Test\\Jenkins_scripts") {
                    bat ".\\QAC_CLI_Setup.bat"
                    script {
                        if (params.Release_Tag != "mainline") {
                            bat ".\\ZIP_Software.bat 'params.Release_Tag'.zip"
                        }
                    }
                }
            }
        } //More stages and stuff after this

You need to declare your parameters in a parameters block at the top of the file; between the options and stages blocks will do (https://www.jenkins.io/doc/book/pipeline/syntax/#parameters). For example:
parameters {
    string(name: 'Release_Tag', defaultValue: '1.0', description: 'Version of the Release')
}
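Once the parameter is declared, the batch call from the question also needs Groovy string interpolation; the single-quoted 'params.Release_Tag' inside the bat string is passed to the batch file literally. A sketch of the corrected step:

```groovy
// Sketch: use ${params.Release_Tag} inside a double-quoted Groovy string
// so the parameter value is interpolated before the batch call runs.
script {
    if (params.Release_Tag != "mainline") {
        bat ".\\ZIP_Software.bat ${params.Release_Tag}.zip"
    }
}
```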


Apollo codegen:generate gives error "Generating query files with 'typescript' target"

I have a backend on Express with Apollo GraphQL.
In my client React app I run the following:
apollo codegen:generate --excludes=node_modules/* --includes=**/*.tsx --target typescript --tagName gql --outputFlat generate
I expect the generated folder to appear, but the command gives me this error:
Error: There are multiple definitions for the herewas operation.
All operations in a project must have unique names.
If generating types, only the types for the first definition
found will be generated.at GraphQLClientProject.checkForDuplicateOperations
(....\node_modules\apollo\node_modules\apollo-language-server\lib\project\base.js:129:31)
.........
.........
Generating query files with 'typescript' target
> Apollo does not support anonymous operations
I also have this apollo.config.js:
module.exports = {
  client: {
    service: {
      url: "http://localhost:4000/graphql"
    }
  }
}
I do not understand where to dig; the code was taken from a Google search.
Apollo does not support anonymous operations
You should give your query/mutation operation a name:
This will work:
query SOME_NAME {
  users {
    age
  }
}
This will NOT work (note that the query name is missing):
query {
  users {
    age
  }
}
Also you can check this thread:
https://github.com/apollographql/apollo-tooling/issues/184
An operation name is required. Something like the query below needs a name added, for example:
query Users {
  users {
    name
  }
}
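The other error in the question's output ("There are multiple definitions for the herewas operation") is a separate problem: two operations in the project share the same name, and all operation names must be unique. Renaming one of them fixes it; a sketch with hypothetical names:

```graphql
# Before: two operations both named "herewas" collide across files.
# After: give each a unique, descriptive name.
query HerewasUsers {
  users {
    age
  }
}

query HerewasUserNames {
  users {
    name
  }
}
```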

Hierarchical logging with console.log

This is my first React.js project and I added logging via npm debug following this tutorial.
My goal is have a human readable log of everything the program is doing, so that when I hit a bug, it's quick to diagnose. The app has a lot of tree logic that's determined by content in a CMS (the logic is also in the CMS), so the debugging needs to be clear as the content changes and gets more complicated.
My problem is that after logging everything, the stream is too flat. It's hard to know when big tasks are done (and which are the small tasks that need to be done in order for the big task to be done). My log looks like this right now:
My logging output right now.
Is there another logging system I should use? Or is there some kind of DevOps or Analytics tool out there that will automatically provide some better organization of my logs depending on whether or not they are nested functions?
Thanks in advance!
I was able to solve this problem by using console.group() and console.groupEnd() to start and end groups in the console. I ended up following the tutorial here.
I now have this component:
const COLOURS = {
  trace: '#aaa',
  info: 'blue',
  warn: 'pink',
  error: 'red'
};

class Log {
  generateMessage(level, message, source, group) {
    var textColor = COLOURS[level];
    if (!group) {
      if (typeof message === "object") {
        console.log("This is an object")
      } else {
        console.log("%c" + source + " || %c" + message, "color:#000;", "color:" + textColor + ";")
      }
    } else if (group === "start") {
      console.group("%c" + source + " || %c" + message, "color:#000;", "color:" + textColor + ";")
    } else if (group === "end") {
      console.groupEnd();
    }
  }

  trace(message, source, group) {
    return this.generateMessage('trace', message, source, group);
  }
  info(message, source, group) {
    return this.generateMessage('info', message, source, group);
  }
  warn(message, source, group) {
    return this.generateMessage('warn', message, source, group);
  }
  error(message, source, group) {
    return this.generateMessage('error', message, source, group);
  }
}

export default new Log();
and then if I want to start a new grouping of a log, I can use (for example):
Log.trace("Lesson data successfully retrieved from Contentful.", "Lesson.js", "start")
and if I want to end the group, I can use:
Log.trace(null,null,"end")
This results in my console now looking like this: my console now with grouped messages
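The start/end convention above can also be wrapped in a small helper so a group is always closed, even when the wrapped task throws. A minimal sketch (the withGroup name is mine, not part of the Log component):

```javascript
// Hypothetical helper: run a task inside a console group so any logging
// the task does is indented under its label. The finally block guarantees
// the group is closed even if the task throws.
function withGroup(label, fn) {
  console.group(label);
  try {
    return fn();
  } finally {
    console.groupEnd();
  }
}

// Usage: nested calls produce nested indentation in the console.
const result = withGroup('Lesson.js', () => {
  console.log('Lesson data successfully retrieved from Contentful.');
  return 42;
});
```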

How can I run a job in Jenkins n-times?

Is it possible in Jenkins to create a job, that will run n-times?
I would like to write a script in the configuration (Windows batch command / Groovy) that allows me to do this. In the script, I would like to have an array of parameters and then run the job with each parameter in a loop. It should look like this:
paramArray[] = ["a", "b", "c"];
for (int i = 0; i < paramArray.length; i++)
{
    // Here I want to run this job with each parameter
    job.run(paramArray[i]);
}
Please help me with this issue.
I found the answer!
We need to create two pipelines in Jenkins: a downstream and an upstream job.
1. The downstream job is parameterized and takes one string parameter in its 'General' section. It then simply prints the chosen parameter in its 'Pipeline' section.
2. The upstream job holds an array of all possible parameters for the downstream job, and in a loop it runs the downstream job with each parameter from the array.
As a result, the upstream job runs the downstream job three times, once with each parameter.
:)
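The original answer's screenshots are not reproduced here; a minimal sketch of what the upstream pipeline might look like (the job and parameter names are hypothetical):

```groovy
// Hypothetical upstream Jenkinsfile: trigger the parameterized
// downstream job once per entry in the array.
def paramArray = ['a', 'b', 'c']

pipeline {
    agent any
    stages {
        stage('Run downstream') {
            steps {
                script {
                    for (p in paramArray) {
                        build job: 'downstream-job',
                              parameters: [string(name: 'MyParam', value: p)]
                    }
                }
            }
        }
    }
}
```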
I don't think you can run a Jenkins job that way with the code above, but you can run a job periodically by configuring a cron trigger via "Build periodically":
Go to the Jenkins job > Configure > tick "Build periodically" under Build Triggers, put a cron schedule in like the image below, and save.
For example, a schedule of H/15 * * * * runs the job every 15 minutes; you can also set a specific time in the schedule.
Please see the example in the "Handling parameters" section of https://jenkins.io/doc/book/pipeline/jenkinsfile/. With a Jenkinsfile like the one below (copied from that doc), you can launch "Build with Parameters" and supply values. Since you want multiple parameters, you can delimit them with , or ; or something else suited to your data; you then just need to parse the input to split out the values using the delimiter you chose.
pipeline {
    agent any
    parameters {
        string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params.Greeting} World!"
            }
        }
    }
}
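Building on the delimiter idea above, a minimal sketch of splitting one string parameter into several values inside the pipeline (the parameter name and delimiter are my choices, not from the original answer):

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'Greetings', defaultValue: 'Hello,Hi,Hey',
               description: 'Comma-separated list of greetings')
    }
    stages {
        stage('Example') {
            steps {
                script {
                    // Split the single string parameter on the chosen
                    // delimiter and act on each value in turn.
                    params.Greetings.tokenize(',').each { g ->
                        echo "${g} World!"
                    }
                }
            }
        }
    }
}
```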

End Gatling simulation when scenario fails BUT generate a report

I have code which currently will not run my scenario if it fails:
// Defined outside of the scenario scope
var simulationHealthy = true

// Defined within the scenario
.exec((session: io.gatling.core.session.Session) => {
  if (session.status == KO) {
    simulationHealthy = false
  }
  session
})
However, my simulation keeps running until the duration set for the simulation is over, even though the scenario stops executing.
What I would like to do is to have a scenario fail under conditions I define (similar to assertions) and for the entire simulation to fail at that point as well, and also generate a report.
Thanks
Edit: I am running these tests within the IntelliJ IDE. Ending the simulation programmatically is required.
You can run the test itself without a report and then produce the report with a second call that generates just the report from the simulation.log.
Run the simulation without a report (-nr flag):
gatling.sh -nr -s YourSimulationClass
Generate the report (-ro flag):
gatling.sh -ro yoursimulation
(yoursimulation is the path underneath the results folder, which can be specified with -rf, and contains the simulation.log file)
In IntelliJ you can define another launch configuration to be executed before: one run action for the Gatling test (with the -nr flag) and another configuration for report generation (with the -ro flag) that executes the Gatling test run action beforehand.
Alternatively you could use the gatling-maven-plugin and define two executions (run, report) with the same flags.
Edit
According to this group thread, you could execute your steps conditionally or mute them. The condition could be the presence of an error, but anything else works as well. If the condition depends on global state, i.e. a global variable, it mutes all users (unlike exitHereIfFailed).
For example:
val continue = new AtomicBoolean(true)

val scn = scenario("MyTest")
  .exec(
    doIf(session => continue.get) {
      exec(http("request_0").get("/home").check(status.is(200)))
        .exec((session: io.gatling.core.session.Session) => {
          if (session.status == KO) {
            continue.set(false)
          }
          session
        })
    })
As said, this only stops sending requests to the SUT. It seems there is no other option at the moment (apart from System.exit(0)).
You can use exitHereIfFailed in the ScenarioBuilder returned by exec():
.exec(http("login")
  .post("/serviceapp/api/auth/login")
  ...
  .check(status.is(200))))
.exitHereIfFailed
.pause(1)
.exec(http("getProfileDetails")
  .get("/serviceapp/api/user/get_profile")
  .headers(authHeader("${token}"))
  .check(status.is(200)))
Thanks to @GeraldMücke's suggestion of using System.exit, I've come up with a workaround. It is still nowhere close to ideal, but it does the job.
The problems are:
- I still have to manually generate the report from the log that is created when Gatling is run.
- The user has to constantly manage how long the scenario lasts for both items, as I don't know a way to have a scenario last the length of a simulation.
- This is obviously a proof of concept; it has nothing in the code to define failure over thresholds etc. like the asserts and checks available in Gatling itself.
Here's the code. I've nested simulations within the setUp function because it fits the criteria of the work I am doing currently, allowing me to run multiple simulations within the main simulation.
FailOverSimulation and ScenarioFailOver are the classes that need to be added to the list; obviously this only adds value when you are running something that loops within the setUp.
import java.util.concurrent.atomic.AtomicBoolean

import io.gatling.commons.stats.KO
import io.gatling.core.Predef._
import io.gatling.core.scenario.Simulation
import io.gatling.http.Predef._

import scala.concurrent.duration._

object ScenarioTest {
  val get = scenario("Test Scenario")
    .exec(http("Test Scenario")
      .get("https://.co.uk/")
    )
    .exec((session: io.gatling.core.session.Session) => {
      if (session.status == KO) {
        ScenarioFailOver.exitFlag.set(true)
      }
      session
    })
}

object TestSimulation {
  val fullScenario = List(
    ScenarioTest.get.inject(constantUsersPerSec(1).during(10.seconds))
  )
}

object ScenarioFailOver {
  var exitFlag = new AtomicBoolean(false)

  val get = scenario("Fail Over")
    .doIf(session => exitFlag.get()) {
      exec(s => {
        java.lang.System.exit(0)
        s
      })
    }
}

object FailOverSimulation {
  val fullScenario = List(
    ScenarioFailOver.get.inject(constantUsersPerSec(1).during(10.seconds))
  )
}

class SimulateTestEnding extends Simulation {
  setUp(
    FailOverSimulation.fullScenario
      ::: TestSimulation.fullScenario
  ).protocols(
  )
}

How to test file permissions using node.js?

How can I check to see the permissions (read/write/execute) that a running node.js process has on a given file?
I was hoping that the fs.Stats object had some information about permissions but I don't see any. Is there some built-in function that will allow me to do such checks? For example:
var filename = '/path/to/some/file';
if (fs.canRead(filename)) // OK...
if (fs.canWrite(filename)) // OK...
if (fs.canExecute(filename)) // OK...
Surely I don't have to attempt to open the file in each of those modes and handle an error as the negative affirmation, right? There's got to be a simpler way...
I am late, but I was looking for the same thing and learnt about this.
fs.access is the one you need. It is available from Node v0.11.15.
var fs = require('fs');

function canWrite(path, callback) {
  // fs.W_OK was moved to fs.constants.W_OK in later Node versions
  fs.access(path, fs.constants.W_OK, function(err) {
    callback(null, !err);
  });
}

canWrite('/some/file/or/folder', function(err, isWritable) {
  console.log(isWritable); // true or false
});
There is also fs.accessSync(path[, mode]), nicely documented:
Synchronously tests a user's permissions for the file or directory specified by path. The mode argument is an optional integer that specifies the accessibility checks to be performed. Check File Access Constants for possible values of mode. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.W_OK | fs.constants.R_OK).
If any of the accessibility checks fail, an Error will be thrown. Otherwise, the method will return undefined.
Embedded example:
try {
  fs.accessSync('etc/passwd', fs.constants.R_OK | fs.constants.W_OK);
  console.log('can read/write');
} catch (err) {
  console.error('no access!');
}
Checking readability is not as straightforward as languages like PHP make it look by abstracting it into a single library function. A file might be readable to everyone, or only to its group, or only to its owner; if it is not readable to everybody, you will need to check whether you are actually a member of the group, or whether you are the owner of the file. It is usually much easier and faster (not only to write the code, but also to execute the checks) to try to open the file and handle the error.
How about using a child process?
var cp = require('child_process');
cp.exec('ls -l', function(e, stdout, stderr) {
  if (!e) {
    console.log(stdout);
    console.log(stderr);
    // process the resulting string and check for permission
  }
});
I'm not sure, though, whether the process and the child process share the same permissions.
