I have 'n' jobs that I want to start simultaneously. Is this feasible in Jenkins? I tried the Job DSL plugin and the Workflow (Pipeline) plugin, and I have used the 'parallel' method. I have my list of job names in an array/list and want to run them in parallel. Please help.
Currently I'm iterating over the job names in the array, so the jobs start one by one; instead I want them to start in parallel. How can this be achieved?
This will fulfill your requirement:

GParsPool.withPool(numberOfThreads) {
    sampleList.eachParallel {
        callYourMethod(it)
    }
}
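For context, a minimal self-contained sketch of how this could look when fanning out over a list of job names (the job list and the trigger inside the loop are hypothetical; GParsPool comes from the GPars library bundled with standard Groovy distributions):

import groovyx.gpars.GParsPool

def jobNames = ['job-a', 'job-b', 'job-c']

// Run the closure for each element on its own pool thread.
GParsPool.withPool(jobNames.size()) {
    jobNames.eachParallel { name ->
        // hypothetical trigger, e.g. a call to the Jenkins REST API or CLI
        println "triggering ${name}"
    }
}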
I've done this before. The solution I found that worked for me was to use upstream/downstream jobs, visualized using the Build Pipeline Plugin.
Essentially, create one blank 'bootstrap' job that you use to kick off all the jobs that you require to run in parallel, with each of the parallel jobs having the bootstrap job as an upstream trigger.
The option that we have been using to trigger multiple jobs in parallel is the Parameterized Trigger Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin). We then have a master job that just needs a comma-separated list of all the jobs to call.
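If you are on Pipeline rather than freestyle jobs, a rough equivalent of that master job (a sketch, not the plugin itself; the job names are placeholders) is to fire each job with the built-in build step and wait: false, so they are all queued immediately:

def jobs = 'jobA,jobB,jobC'.split(',')   // hypothetical comma-separated list
for (j in jobs) {
    build job: j.trim(), wait: false     // returns as soon as the job is queued
}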
In my opinion, the best way (it will also grow your personal skills ;-) ) is to use Jenkins Pipeline!
You can find a simple code example just below.
// While you can't use Groovy's .collect or similar methods currently, you can
// still transform a list into a set of actual build steps to be executed in
// parallel.

// Our initial list of strings we want to echo in parallel
def stringsToEcho = ["a", "b", "c", "d"]

// The map we'll store the parallel steps in before executing them.
def stepsForParallel = [:]

// The standard 'for (String s: stringsToEcho)' syntax also doesn't work, so we
// need to use old school 'for (int i = 0...)' style for loops.
for (int i = 0; i < stringsToEcho.size(); i++) {
    // Get the actual string here.
    def s = stringsToEcho.get(i)

    // Transform that into a step and add the step to the map as the value, with
    // a name for the parallel step as the key. Here, we'll just use something
    // like "echoing (string)"
    def stepName = "echoing ${s}"
    stepsForParallel[stepName] = transformIntoStep(s)
}

// Actually run the steps in parallel - parallel takes a map as an argument,
// hence the above.
parallel stepsForParallel

// Take the string and echo it.
def transformIntoStep(inputString) {
    // We need to wrap what we return in a Groovy closure, or else it's invoked
    // when this method is called, not when we pass it to parallel.
    // To do this, you need to wrap the code below in { }, and either return
    // that explicitly, or use { -> } syntax.
    return {
        node {
            echo inputString
        }
    }
}
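One optional addition worth knowing about: the scripted parallel step also accepts a special failFast entry in the map, which aborts the remaining branches as soon as one of them fails:

// Abort the other branches as soon as one fails.
stepsForParallel.failFast = true
parallel stepsForParallel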
I currently use JUnit 5, WireMock and REST-assured for my integration tests. Karate looks very promising, yet I am struggling a bit with the setup of data-driven tests, as I need to prepare nested data structures which, in the current setup, look like the following:
abstract class StationRequests(val stations: Collection<String>) : ArgumentsProvider {
    override fun provideArguments(context: ExtensionContext): java.util.stream.Stream<out Arguments> {
        val now = LocalDateTime.now()
        val samples = mutableListOf<Arguments>()
        stations.forEach { station ->
            Subscription.values().forEach { subscription ->
                listOf(
                    *Device.values(),
                    null
                ).forEach { device ->
                    Stream.Protocol.values().forEach { protocol ->
                        listOf(
                            null,
                            now.minusMinutes(5),
                            now.minusHours(2),
                            now.minusDays(1)
                        ).forEach { startTime ->
                            samples.add(
                                Arguments.of(
                                    subscription, device, station, protocol, startTime
                                )
                            )
                        }
                    }
                }
            }
        }
        return java.util.stream.Stream.of(*samples.toTypedArray())
    }
}
Is there any preferred way to set up such nested data structures with Karate? I initially thought about defining 5 different arrays with sample values for subscription, device, station, protocol and startTime, and combining and merging them into a single array to be used in the Examples: section.
I have not succeeded so far, though, and I am wondering whether there is a better way to prepare such nested data-driven tests.
I don't recommend nesting unless absolutely necessary. You may be able to "flatten" your permutations into a single table, something like this: https://github.com/intuit/karate/issues/661#issue-402624580
That said, look out for the alternate option to Examples: which just might work for your case: https://github.com/intuit/karate#data-driven-features
EDIT: In version 1.3.0, a new @setup life cycle was introduced that changes the example below a bit.
Here's a simple example:
Feature:

Scenario:
* def data = [{ rows: [{a: 1},{a: 2}] }, { rows: [{a: 3},{a: 4}] }]
* call read('called.feature@one') data

and this is called.feature:

@ignore
Feature:

@one
Scenario:
* print 'one:', __loop
* call read('called.feature@two') rows

@two
Scenario:
* print 'two:', __loop
* print 'value of a:', a
This is how it looks in the new HTML report (which is in 0.9.6.RC2 and may need more fine-tuning), and it shows off how Karate can support "nesting" even in the report, which Cucumber cannot do. Maybe you can provide feedback and let us know if it is ready for release :)
As stated in the title, I'm attempting to loop over an ArrayList of strings in a Jenkins Groovy Pipeline script (using scripted Pipeline syntax). Let me lay out the entire "problem" for you.
I start with a string of filesystem locations separated by spaces, like "/var/x /var/y /var/z". I loop over this string, adding each character to a temp string; when I reach a space, I add that temp string to the array and start over. Here's some code showing how I do this:
def full_string = "/var/x /var/y /var/z"
def temp = ""
def arr = [] as ArrayList
full_string.each {
    if ( "$it" == " " ) {
        arr.add("$temp")    // have also tried: arr << "$temp"
        temp = ""
    } else {
        temp = "$temp" + "$it"
    }
}
// if statement to catch last element
The problem with this is that if I later loop over the array, it loops over every individual character instead of the whole /var/x string like I want it to.
I'm new to Groovy so I've been learning as I build this pipeline. Using Jenkins version 2.190.1 if that helps at all. I've looked around on SO and Groovy docs, as well as the pipeline syntax docs on Jenkins. Can't seem to find what I've been looking for. I'm sure that my solution is not the most elegant or efficient, but I will settle for understanding how it works first before trying to squeeze the most performance out of it.
I found this question, but it was similarly unhelpful: Dynamically adding elements to ArrayList in Groovy.
Edit: I'm trying to translate old company C-shell build scripts into Jenkins Pipelines. My initial string is an environment variable available on all our nodes that I also need to have available inside the Pipeline.
TL;DR - I need to create an array from space-separated values in a string, and then loop over said array with each "element" being a complete string instead of a single char, so that I can run pipeline steps properly.
Try running this in your Jenkins script console (your.jenkins.url.yourcompany.com/script):
def full_string = "/var/x /var/y /var/z"
def arr = full_string.split(" ")
for (i in arr) {
    println "now got ${i}"
}
Result:
now got /var/x
now got /var/y
now got /var/z
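As a side note, Groovy's String.tokenize does the same split in one call, returns a List<String> directly, and drops empty tokens, so it may be the simplest fit here (a small sketch of the same loop):

def full_string = "/var/x /var/y /var/z"
def arr = full_string.tokenize(' ')   // List<String>: [/var/x, /var/y, /var/z]
for (path in arr) {
    println "now got ${path}"   // each element is the whole path, not a char
}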
I could not figure out how to iterate over a collection and execute statements one by one with Tiberius.
My current code looks like this (simplified):
use futures::Future;
use futures_state_stream::StateStream;
use tokio::executor::current_thread;
use tiberius::SqlConnection;

fn find_files(files: &mut Vec<String>) {
    files.push(String::from("file1.txt"));
    files.push(String::from("file2.txt"));
    files.push(String::from("file3.txt"));
}

fn main() {
    let mut files: Vec<String> = Vec::new();
    find_files(&mut files);

    let future = SqlConnection::connect(CONN_STR)
        .and_then(|conn| {
            conn.simple_exec("CREATE TABLE db.dbo.[Filenames] ( [Spalte 0] varchar(80) );")
        })
        .and_then(|(_, conn)| {
            for k in files.iter() {
                let sql = format!("INSERT INTO db.dbo.Filenames ([Spalte 0]) VALUES ('{}')", k);
                &conn.simple_exec(sql);
            }
            Ok(())
        });

    current_thread::block_on_all(future).unwrap();
}
I get the following error message:
error[E0382]: use of moved value: `conn`
--> src/main.rs:23:18
|
20 | .and_then(|(_, conn)| {
| ---- move occurs because `conn` has type `tiberius::SqlConnection<std::boxed::Box<dyn tiberius::BoxableIo>>`, which does not implement the `Copy` trait
...
23 | &conn.simple_exec(sql);
| ^^^^ value moved here, in previous iteration of loop
I'm new to Rust, but I know there is something wrong with my use of the conn variable; nothing I have tried works.
There are actually two questions here:
The header question: how to perform multiple sequential statements using tiberius?
The specific question concerning why an error message comes from a specific bit of code.
I will answer them separately.
Multiple statements
There are many ways to skin a cat. In TDS (the underlying protocol Tiberius implements) it is possible to execute several statements in a single command; they just need to be delimited by semicolons. The response from such an execution is represented in Tiberius as a stream of futures, one for each statement.
So if your chain of statements is not too big to fit into one command, just build one string and send it over:
fn main() {
    let mut files: Vec<String> = Vec::new();
    find_files(&mut files);

    let stmts = vec![String::from(
            "CREATE TABLE db.dbo.[Filenames] ( [Spalte 0] varchar(80) )")]
        .into_iter()
        .chain(files.iter().map(|k| {
            format!("INSERT INTO db.dbo.Filenames ([Spalte 0]) VALUES ('{}')", k)
        }))
        .collect::<Vec<_>>()
        .join(";");

    let future = SqlConnection::connect(std::env::var("CONN_STR").unwrap().as_str())
        .and_then(|conn| {
            conn.simple_exec(stmts)
                .into_stream()
                .and_then(|future| future)
                .for_each(|_| Ok(()))
        });

    current_thread::block_on_all(future).unwrap();
}
There is some simple boilerplate in that example:
simple_exec returns an ExecResult, a wrapper around the individual statements' future results. Calling into_stream() on it provides a stream of those futures.
That stream of results needs to be forced to be carried out; one way of doing that is to call and_then, which awaits each future and does something with it.
We don't actually care about the results here, so we just do a no-op for_each.
But say that there are a lot of statements, more than can fit in a single TDS command; then there is a need to issue them separately (another case is when the statements themselves depend on earlier ones). A version of that problem is solved in How do I iterate over a Vec of functions returning Futures in Rust?
Then finally, what is your specific error? Well, conn is consumed by simple_exec, so it cannot be used afterwards; that is what the error tells you. If you want to use the connection after that execution is done, you have to use the Future it returns, which wraps the mutated connection. I defer to the link above for one way to do that.
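For completeness, here is a rough, untested sketch of that fold-based chaining applied to this case. It assumes, as in the question's code, that each simple_exec future resolves to a (result, connection) tuple, and it reuses the question's find_files and CONN_STR:

use futures::{future, Future};
use tiberius::{BoxableIo, SqlConnection};
use tokio::executor::current_thread;

type Conn = SqlConnection<Box<dyn BoxableIo>>;

fn main() {
    let mut files: Vec<String> = Vec::new();
    find_files(&mut files); // as in the question

    let future = SqlConnection::connect(CONN_STR).and_then(move |conn| {
        // Thread the connection through a fold so each INSERT only starts
        // once the previous statement has finished.
        files.into_iter().fold(
            Box::new(future::ok(conn))
                as Box<dyn Future<Item = Conn, Error = tiberius::Error>>,
            |prev, k| {
                Box::new(prev.and_then(move |conn| {
                    conn.simple_exec(format!(
                        "INSERT INTO db.dbo.Filenames ([Spalte 0]) VALUES ('{}')",
                        k
                    ))
                    .map(|(_, conn)| conn) // hand the connection to the next step
                }))
            },
        )
    });

    current_thread::block_on_all(future).unwrap();
}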
I want to run my simulation against two sets of data. One set gives an empty-feeder issue and the other doesn't. I want to write generic Gatling code which can handle both data sets. I would like to avoid simple if-else conditions on the variable I am setting in the feeder. Also, changing the data is not an option available to me.
In short, I want the execution to be skipped if my feeder is empty. Is that possible the Gatling way?
exec(
    randomSwitch(
        33.0 -> feed(data1.random).exec(step1),
        33.0 -> feed(data2.random).exec(step2),
        34.0 -> feed(data3.random).exec(step3)
    )
)
You can try something like this:

scenario("Requests").feed(orderRefs).group("Groups") {
    asLongAs(session => jobsQue.length > 0) {
        exec { session =>
            // your code here; an exec function must return the session
            session
        }
    }
}
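If the goal is specifically to skip a step when its feeder has no data, another option (a sketch, untested; the file name and step1 are placeholders) is to read the records eagerly with readRecords and only attach the feed when there is data:

val records = csv("data1.csv").readRecords // Seq[Map[String, Any]]

val scn = scenario("Requests").exec(
    if (records.nonEmpty)
        feed(records.iterator).exec(step1) // normal path
    else
        exec { session => session }        // no-op: skip when empty
)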
I'm writing a Hadoop application that calculates map data at a certain resolution. My input files are tiles of a map, named according to the QuadTile principle. I need to subsample those and stitch them together until I have a certain higher-level tile which covers a larger area at a lower resolution, like zooming out in Google Maps.
Currently my mapper subsamples tiles and my reducer combines tiles at a certain level and forms tiles of one level up. So far so good. But depending on which tile I need, I have to repeat those map and reduce steps a number of times, which I have not been able to do so far.
What would be the best way to do so? Is it possible without explicitly saving the tiles in some temp directory and starting a new MapReduce job on those temp dirs until I get what I want? What I think would be the perfect solution is something roughly like 'while(context.hasMoreThanOneKey()){iterate mapreduce}'.
Following an answer, I have now written a class TileJob which extends Job. However, the MapReduce steps are still not chained. Could you tell me what I'm doing wrong?
public boolean waitForCompletion(boolean verbose) throws IOException, InterruptedException, ClassNotFoundException {
    if (desiredkeylength != currentinputkeylength - 1) {
        System.out.println("In loop, setting input at " + tempout);
        String tempin = tempout;
        FileInputFormat.setInputPaths(this, tempin);
        tempout = (output + currentinputkeylength + "/");
        FileOutputFormat.setOutputPath(this, new Path(tempout));
        System.out.println("Setting output at " + tempout);
        currentinputkeylength--;
        Configuration conf = new Configuration();
        TileJob job = new TileJob(conf);
        job.setJobName(getJobName());
        job.setUpJob(tempin, tempout, tiletogenerate, currentinputkeylength);
        return job.waitForCompletion(verbose);
    } else {
        // desiredkeylength == currentkeylength - 1
        System.out.println("In else, setting input at " + tempout);
        String tempin = tempout;
        FileInputFormat.setInputPaths(this, tempin);
        tempout = output;
        FileOutputFormat.setOutputPath(this, new Path(tempout));
        System.out.println("Setting output at " + tempout);
        currentinputkeylength--;
        Configuration conf = new Configuration();
        TileJob job = new TileJob(conf);
        job.setJobName(getJobName());
        job.setUpJob(tempin, tempout, tiletogenerate, currentinputkeylength);
        currentinputkeylength--;
        return super.waitForCompletion(verbose);
    }
}
Usually you kick a MapReduce step off by having a driver class main method that configures the Job, the Configuration and the formats (input and output). Once everything is ready to go, that main method calls Job::waitForCompletion(), which submits the job and waits for it to complete before continuing.
You can wrap some of that logic in a loop that repeatedly calls Job::waitForCompletion() until your criterion is met. You can implement the criterion using counters: put logic into your reduce() method to set or increment a counter with the number of keys (see the reducer sketch below). Your loop in the driver class can get the value of that (distributed) counter from the Job instance, and you code your while expression using that value.
Which file locations you use is up to you. Inside this driver loop you can change the file locations for the inputs and outputs, or keep them the same.
I should probably add that you ought to go ahead and create a new Job and Configuration instance inside the loop. I don't know whether those objects are reusable in this situation.
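For the counter part, a minimal sketch of what the reducer side could look like (the counter group/name mirror the driver snippet below; the reduce body itself is hypothetical):

import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class TileReducer extends Reducer<Text, BytesWritable, Text, BytesWritable> {
    @Override
    protected void reduce(Text key, Iterable<BytesWritable> values, Context context)
            throws IOException, InterruptedException {
        // ... combine the subsampled tiles for this key and write the result ...
        // Count one per output key so the driver can decide whether to loop again.
        context.getCounter("Total", "Keys").increment(1);
    }
}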
public static void main(String[] args) throws Exception {
    long keys = 2;
    boolean completed = true;
    while (completed && (keys > 1)) {
        Job job = Job.getInstance(new Configuration());
        // Do all your job configuration here
        completed = job.waitForCompletion(true);
        if (completed) {
            keys = job.getCounters().findCounter("Total", "Keys").getValue();
        }
    }
}