I've been using the Build Pipeline plugin with Jenkins (v1.534) for a long time now. Recently I tried to create a pipeline that contains the same job twice (with different parameters), and it doesn't seem to be possible. It looks like this:
Job A (param env=dev) -> Job B -> Job A (param env=qa)
Is this possible using the Build Pipeline plugin (v1.4)?
You can try the Jenkins FLOW plugin... https://wiki.jenkins-ci.org/display/JENKINS/FLOW+Plugin
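The FLOW plugin defines the pipeline in a small Groovy DSL, so revisiting a job with different parameters is straightforward. A minimal sketch, assuming your jobs are literally named jobA and jobB and jobA takes an env parameter:

    // Build Flow DSL: run jobA twice with different parameter values.
    build("jobA", env: "dev")
    build("jobB")
    build("jobA", env: "qa")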
I think this is only possible if you have Job B set up to trigger Job A again automatically, not as a manual build step.
Job B can automatically trigger downstream jobs via the Parameterized Trigger plugin. This works fine if you use automatic downstream builds, but the manual post-build trigger step is not smart enough yet to continue the pipeline.
Jenkins is also bad at pulling upstream variables into downstream jobs, for example when jobs in a pipeline are run out of order.
At my work, I duplicate jobs and chain them: Build -> Deploy to Dev -> Deploy to QA -> ... and so forth.
Can anybody explain why the "Configuration" section of a running job in the Apache Flink Dashboard is empty?
How do I use this job configuration in my flow? It seems like this is not described in the documentation.
The configuration tab of a running job shows the values of the ExecutionConfig. Depending on the version of Flink, you may experience different behaviour.
Flink <= 1.0
The ExecutionConfig is only accessible for finished jobs. For running jobs, it is not possible to access it. Once the job has finished or has been stopped/cancelled, you should be able to see the ExecutionConfig.
Flink > 1.0
The ExecutionConfig can also be accessed for running jobs.
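If the tab is empty even on a version where it should be populated, it is usually because nothing was registered on the ExecutionConfig. A minimal sketch of exposing your own values there via global job parameters (class and job names here are only illustrative):

    import org.apache.flink.api.java.utils.ParameterTool;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ConfigVisibilityJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Whatever is registered as global job parameters on the
            // ExecutionConfig is what the dashboard's Configuration tab shows.
            ParameterTool params = ParameterTool.fromArgs(args);
            env.getConfig().setGlobalJobParameters(params);

            env.fromElements(1, 2, 3).print();
            env.execute("config-visibility-job");
        }
    }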
I have set up a CI pipeline in Jenkins for my Angular application. I have two simple jobs at the moment, and both jobs share a common workspace. The first job does the build (npm install, bower install, etc.) and is working fine. The second job runs the unit tests (gulp test); I am using Karma with PhantomJS. Both jobs are running fine at the moment.
Eventually I will add more jobs for my integration tests, code analysis, etc. Basically, each successful job will trigger the next one in the pipeline to run.
I was wondering whether it is possible to mark a job as unstable if something fails. For example, in my second job, where I run my unit tests, if even one test case fails I would like to mark the build unstable so that it doesn't trigger any further jobs. Is this possible, and if so, what is the most intelligent and efficient way to do it? And since I have never had a test case fail, I wouldn't know this: does Jenkins mark a build unstable automatically if one test case fails?
Does Jenkins mark [a build as] unstable automatically if one test case fails?
Yes. It will depend on the exact testing plugin you're using for Jenkins, but I imagine all of them should implement this correctly.
I would like to mark [a build] unstable so that it doesn't trigger any further [downstream] jobs
It sounds like you're already using the "Build other projects" post-build action. There you just need to make sure that the default "Trigger only if build is stable" option is selected.
Then, if any tests fail, the build will be unstable, and a downstream build will not be triggered.
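For reference, that post-build action lives in the upstream job's config.xml roughly as below; the downstream job name is just a placeholder, and the SUCCESS threshold corresponds to "Trigger only if build is stable":

    <publishers>
      <hudson.tasks.BuildTrigger>
        <!-- hypothetical downstream job -->
        <childProjects>integration-tests</childProjects>
        <!-- SUCCESS == trigger only if the build is stable -->
        <threshold>
          <name>SUCCESS</name>
          <ordinal>0</ordinal>
          <color>BLUE</color>
        </threshold>
      </hudson.tasks.BuildTrigger>
    </publishers>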
I have created Selenium WebDriver test cases and am running them with Maven.
Can we schedule Selenium test cases to run at a user-given date/time?
I googled and found a few options, like (1) creating a batch file and adding it to the Windows Task Scheduler, or (2) using Jenkins.
Quartz Scheduler was also mentioned somewhere.
Is there any better method, or which of these options is the best?
Thanks!
We use Jenkins since it offers a variety of options to connect to different source control systems and lets you build your project in a variety of ways: scripts, Maven commands, Ant, etc. You can create dependent jobs, use a variety of plugins that add functionality to Jenkins, share reports, send emails, and schedule jobs on a variety of triggers as well: based on time, on a commit to your repo, or on your build being deployed. I haven't used Quartz Scheduler, so no opinion on that.
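For the time-based case specifically, a job's "Build periodically" trigger takes a cron-style expression. The schedule below is only an example:

    # minute hour day-of-month month day-of-week
    # "H" spreads the exact minute across jobs to avoid load spikes;
    # this runs the job every weekday during the 2 AM hour.
    H 2 * * 1-5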
What I want is to run my WPF automation tests (integration tests) in the continuous integration process when possible. That means every time something is pushed to source control, I want to trigger an event that starts the WPF automation tests. However, integration tests are slower than unit tests, which is why I would like to execute them in parallel, on several virtual machines. Are there any frameworks or tools that allow me to run my WPF automation tests in parallel?
We use Jenkins. Our system tests are built on top of a proprietary framework written in C#.
Jenkins allows jobs to be triggered by SCM changes (SVN, Git, and Mercurial are all supported via plugins). It also allows jobs to be run on remote slaves (in parallel, if needed). You do need to configure your jobs and slaves by hand. Configuring jobs can be done with build parameters: say you have only one job that accepts test IDs as parameters but can run on several slaves; you can then configure one trigger job that starts several test jobs on different slaves, passing test IDs to them as parameters (see the sketch after the next paragraph).
Configuring slaves is made much easier when your slaves are virtual machines. You configure one VM and then copy it (make sure that node-specific information, such as the node name, is not hard-coded and is easily configurable).
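As a sketch of the trigger-job idea: a parameterized job can also be started remotely through Jenkins' buildWithParameters endpoint. The job and parameter names below are hypothetical, your instance may additionally require authentication, and pinning a run to a particular slave typically needs something like the NodeLabel Parameter plugin:

    # Start the (hypothetical) parameterized job "wpf-tests" once per test ID;
    # each queued build can then land on a different slave.
    curl -X POST "http://jenkins.example.com/job/wpf-tests/buildWithParameters?TEST_ID=42"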
The main advantages of Jenkins:
It's free
It has an extensible architecture that allows people to extend it via plugins. As a matter of fact, at this stage (unlike, say, a year and a half ago) everything I need to do can be done either via plugins or via the Jenkins HTTP API.
It can be deployed rapidly. Jenkins runs in its own servlet container; you can deploy it and start playing with it in less than an hour.
Active community. Check, for example, [jenkins], [hudson], and [jenkins-plugins] tags on SO.
I propose that you give it a try: play with it for a couple of days, and chances are you'll like it.
Update: here's an old answer I liked that recommends Jenkins.
Recently I've started using limited staging on my Google App Engine project. The data is still shared between all versions, but behaviour (especially user facing behaviour) is different.
Naturally when I implement something incredibly new it only runs on the latest version of my code and I don't feel like it should be backported to the older versions.
Some of this new functionality requires cron jobs to be run periodically, but I'm hitting a problem. I have to run a cron job to call the latest code, but this is what Google's documentation has to say about the issue:
Cron requests are always sent to the default version of the application.
The default version is the oldest, because the first versions of the client code that went out to users weren't future-proof and don't know how to select which API version to call.
So my question is, how can I get around this limitation and make a cron job that will call the latest rather than the default version of the application?
You can now specify a version using the target tag:
<target>version-2</target>
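For context, a full cron.xml entry (Java runtime) would look something like this; the URL and schedule are only illustrative:

    <cronentries>
      <cron>
        <url>/cron_job_endpoint</url>
        <schedule>every 24 hours</schedule>
        <!-- route this cron request to a specific version -->
        <target>version-2</target>
      </cron>
    </cronentries>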
You can't change cron jobs to run on a different version than the default.
Depending on how much time your cron job takes to run, you could change your cron job handler to do a URLFetch to "http://latest.appname.appspot.com/cron_job_endpoint".
If your cron job takes longer than 10 minutes to run, then I would design it in a way that lets you chain the different tasks using task queues.
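A minimal sketch of that forwarding handler, assuming the Java runtime and the hypothetical endpoint from above (URL Fetch is the standard App Engine API for outbound requests):

    import java.io.IOException;
    import java.net.URL;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import com.google.appengine.api.urlfetch.HTTPMethod;
    import com.google.appengine.api.urlfetch.HTTPRequest;
    import com.google.appengine.api.urlfetch.HTTPResponse;
    import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

    // Deployed on the default version: cron hits this servlet, which
    // forwards the request to the same endpoint on the desired version.
    public class CronForwardServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            URL target = new URL("http://latest.appname.appspot.com/cron_job_endpoint");
            HTTPRequest forward = new HTTPRequest(target, HTTPMethod.GET);
            HTTPResponse result =
                    URLFetchServiceFactory.getURLFetchService().fetch(forward);
            // Propagate the status so failures show up in the cron log.
            resp.setStatus(result.getResponseCode());
        }
    }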