Mark Job as Unstable - angularjs

I have set up a CI pipeline in Jenkins for my Angular application. I have two simple jobs at the moment, and both share a common workspace. The first job does the build (npm install, bower install, etc.) and is working fine. The second job runs the unit tests (gulp test); I am using Karma with PhantomJS for testing. Both jobs are currently running fine.
Eventually I will add more jobs for integration tests, code analysis, etc., so each successful job will trigger the next one in the pipeline.
I was wondering whether it is possible to mark a job as unstable if something fails. For example, in my second job, where I run the unit tests, if even one test case fails I would like to mark the build unstable so that it doesn't trigger any further jobs. Is this possible, and if so, what is the most intelligent and efficient way to do it? Also, since I have never had a test case fail, I don't know: does Jenkins mark the job unstable automatically if one test case fails?

Does Jenkins mark [a build as] unstable automatically if one test case fails?
Yes. It will depend on the exact testing plugin you're using for Jenkins, but I imagine all of them should implement this correctly.
I would like to mark [a build] unstable so that it doesn't trigger any further [downstream] jobs
It sounds like you're already using the "Build other projects" post-build action. There you just need to make sure that the default "Trigger only if build is stable" option is selected.
Then, if any tests fail, the build will be unstable, and a downstream build will not be triggered.
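If you ever want to check that status yourself, outside the post-build action, here is a minimal Python sketch against the Jenkins JSON API. The server URL and job name are placeholders, a build still in progress reports no result yet, and a secured Jenkins would also need an API token, which is omitted here.
import json
import urllib.request

# Placeholders: point these at your own Jenkins server and upstream job.
JENKINS = "http://jenkins.example.com:8080"
UPSTREAM = "angular-unit-tests"

with urllib.request.urlopen(f"{JENKINS}/job/{UPSTREAM}/lastBuild/api/json") as resp:
    result = json.load(resp)["result"]  # "SUCCESS", "UNSTABLE", "FAILURE", or None while building

if result == "SUCCESS":
    print("stable build - safe to trigger the downstream job")
else:
    print(f"build is {result} - downstream job will not be triggered")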

Related

Launch remote process for automated test using Kiwi tcms

Is it possible to have a Kiwi TCMS test case launch an executable on a remote server in order to execute the test case, and if so, how could that be done?
Short answer - NO!
Long answer:
What you are looking for is some kind of test runner or CI system that will connect to a remote computer (or use an API directly) and launch automated test cases based on some parameters.
This brings up so many questions that I can't list all of them here, but some of the most important ones are:
where and how authentication credentials are stored
how progress and results are monitored and reported
when tests are scheduled, and what kinds of triggers are supported/desired
Kiwi TCMS takes a different approach when dealing with automated tests. You can schedule your tests in any way you like and then report the execution results back to Kiwi TCMS.
We are working on plugins for popular test runners, like JUnit, Python Nose, etc., that will automatically discover the names and results of your automated test cases and report them back to Kiwi TCMS.
If you do need a specific plugin/framework please open a request on GitHub and our team will take it into consideration.
Edit: upvote, comment & follow this feature request at https://github.com/kiwitcms/Kiwi/issues/914
I needed to do something similar: run automation on remote systems and report results back to Kiwi. I put together several components to get the entire system working. Here's what worked for me:
Jenkins to initiate test runs and manage remote machines
A Python script to create test runs against a test plan and write out a custom test run manifest, which is...
Passed to the automation system (.NET/C#) via more scripts to make sure the remote machine is configured correctly
Automation output is directly consumed by Jenkins to report test results for the build/job as well as consumed by another Python script that pushes results back to Kiwi
The automation system knows how to interpret the test run manifest and map test cases to test methods implementing the test. It's important to include the Kiwi caserunid throughout the pipeline so the result is associated with the correct Kiwi entry.
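For reference, here is a rough sketch of what the "push results back to Kiwi" step can look like, assuming the tcms-api Python package (which reads credentials from ~/.tcms.conf). The RPC method and field names below are assumptions and vary between Kiwi TCMS versions (newer releases expose TestExecution rather than TestCaseRun), so check the RPC documentation for your installation.
from tcms_api import TCMS  # assumed: the official Kiwi TCMS Python bindings

# Credentials and server URL are read from ~/.tcms.conf by the library.
rpc = TCMS().exec

case_run_id = 1234     # the "caserunid" carried through the manifest/pipeline
passed_status_id = 4   # status ids differ per installation; look yours up first

# Assumed method name; newer Kiwi TCMS versions use TestExecution.update instead.
rpc.TestCaseRun.update(case_run_id, {"status": passed_status_id})
print(f"reported result for case run {case_run_id}")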
Please open feature requests on GitHub if you'd like to get test runner adapters (essentially plugins) for Kiwi TCMS. GitHub is the only place where we can track who needs what and prioritize!

What's the easiest way to setup Protractor e2e tests to run nightly?

Like many AngularJS developers, I have a test suite of Protractor e2e tests. The test suite takes about half an hour to an hour to run. I would like to be able to run the tests nightly using some kind of cloud-based setup, if possible. I'm having trouble figuring out how to host and run the Protractor tests.
Is there a common cloud setup, or some easy setup, for running Protractor e2e tests either on check-in or as a nightly build?
The easiest way (I don't say the best way) that I'm currently using is to set up a Task Scheduler job that runs on a remote machine. The job is triggered at 2 AM and runs a Windows batch file with a few commands: the first pulls the latest version from git, the second changes to the directory where my automated tests live, and the last one runs the tests (more precisely, the smoke suite):
git pull origin develop
cd C:\arkcase-test\protractor
protractor protractor.chrome.conf.js --suite=smoke
A Jenkins job works fine for this too; you can have it send an email saying whether the nightly run succeeded or failed.
You can even attach your HTML report to the email.
However, the slave PC has to be online and connected, running the Jenkins agent.

Parametrized job using build pipeline plugin on Jenkins

I've been using the Build Pipeline plugin with Jenkins (v1.534) for a long time now. Recently I tried to create a pipeline that uses the same job (with different parameters) twice, and it doesn't seem to be possible. It looks like this:
Job A (param env=dev) -> Job B -> Job A (param env=qa)
Is this possible using the Build Pipeline plugin (v1.4)?
You can try the Jenkins FLOW plugin... https://wiki.jenkins-ci.org/display/JENKINS/FLOW+Plugin
I think this is only possible if you have Job B set up to trigger Job A again automatically, not as a manual build step.
Job B will trigger the downstream job automatically via the Parameterized Trigger plugin. This works fine if you use automatic downstream builds, but the manual post-build trigger (the hold feature) is not yet smart enough to continue on.
Jenkins also has major shortcomings when pulling upstream variables into downstream jobs, for example when jobs are run out of order in a pipeline.
At my work, I duplicate jobs and chain them: Build -> Deploy to Dev -> Deploy to QA -> Deploy to XXX, and so forth.
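If the Build Pipeline plugin's view is the only blocker, one workaround is to drive the chain from a small script instead. This is just a sketch, assuming the python-jenkins package; the server URL, credentials, and job names are placeholders.
import jenkins  # assumes the python-jenkins package (pip install python-jenkins)

# Placeholder server and credentials; use an API token on a real installation.
server = jenkins.Jenkins("http://jenkins.example.com:8080",
                         username="ci-user", password="api-token")

# Same job, different parameters. A real script would poll get_build_info()
# and wait for each build to finish before triggering the next one.
server.build_job("Job-A", parameters={"env": "dev"})
server.build_job("Job-B")
server.build_job("Job-A", parameters={"env": "qa"})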

Which tool or framework can be used to run WPF automation tests in Parallel?

What I want is to run my WPF automation tests (integration tests) as part of the continuous integration process whenever possible. That means every time something is pushed to source control, I want to trigger an event that starts the WPF automation tests. However, integration tests are slower than unit tests, which is why I would like to execute them in parallel, on several virtual machines. Is there any framework or tool that allows me to run my WPF automation tests in parallel?
We use Jenkins. Our system tests are built on top of a proprietary framework written in C#.
Jenkins allows jobs to be triggered by SCM changes (SVN, Git, and Mercurial are all supported via plugins). It also allows jobs to be run on remote slaves (in parallel, if needed). You do need to configure your jobs and slaves by hand. Configuring jobs can be done with build parameters: say you have only one job that accepts test IDs as parameters but can run on several slaves; you can then configure one trigger job that starts several test jobs on different slaves, passing the test IDs to them as parameters.
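As a rough illustration of that trigger-job idea, here is a short Python sketch that fans test IDs out to a single parameterized test job over the Jenkins HTTP API. The URL, job name, parameter name, and test IDs are all placeholders, and a secured Jenkins will additionally require authentication and a CSRF crumb.
import urllib.parse
import urllib.request

# Placeholders: one parameterized test job that Jenkins can schedule on any slave.
JENKINS = "http://jenkins.example.com:8080"
TEST_JOB = "wpf-system-tests"
TEST_IDS = ["smoke", "checkout", "reporting"]

for test_id in TEST_IDS:
    query = urllib.parse.urlencode({"TEST_ID": test_id})
    url = f"{JENKINS}/job/{TEST_JOB}/buildWithParameters?{query}"
    req = urllib.request.Request(url, data=b"", method="POST")
    with urllib.request.urlopen(req) as resp:
        print(f"queued {TEST_JOB} with TEST_ID={test_id}: HTTP {resp.status}")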
Configuring slaves is made much easier when your slaves are virtual machines. You configure one VM and then copy it (make sure that node-specific information, such as Node Name is not hard-coded and is easily configurable).
The main advantages of Jenkins:
It's free
It has an extendable architecture allowing people to extend it via plugins. As a matter of fact, at this stage (unlike, say, a year and a half ago) everything I need to do can be done either via plugins or Jenkins HTTP API.
It can be deployed rapidly. Jenkins runs in its own servlet container; you can deploy it and start playing with it in less than an hour.
Active community. Check, for example, [jenkins], [hudson], and [jenkins-plugins] tags on SO.
I propose that you give it a try: play with it for a couple of days, and chances are you'll like it.
Update: here's an old answer I've liked that recommends Jenkins.

SQL Server Database Management with Continuous Integration

Let's say we have a continuous integration server. When I check in, the post-hook pulls the latest code, runs the tests, packages everything. What is the best way to also automate the database changes?
Ideally, I'd build an installer that could either build a database from scratch or update an existing one using some automated syncing method.
I recently came across an article that might be of use.
The author explains some continuous integration best practices, including testing, processing, and automation.
Here are some of the key takeaways:
In many shops, code is unit tested at the point of commit. For databases, it is preferable to run all unit tests at once and in sequence against a QA database (rather than development) as part of the Test step.
The test step is a critical part of any CI/CD process. Test scripts, including the unit tests themselves, should also be versioned in source control, extracted at the Build step, and executed.
Pulling data from production is appealing as a quick expedient, but is never a good idea
The best approach is using a tool or script to quickly, repeatedly and reliably create synthetic test data for your transactional tables
Running unit tests to produce manual summary results for human consumption defeats the purpose of automation. We need machine-readable results that allow an automated process to abort, branch, and/or continue.
Running a CI process that requires 100% of all tests to pass is akin to not having CI at all, if the workflow pipeline is set up atomically to stop on failure (which it should be). To thread the needle, tests should have built-in thresholds that raise an error based either on the percentage of tests failing or, in some cases, on whether certain high-priority tests fail (see the sketch after this list).
All processes should ultimately produce a Boolean pass/fail result, but some non-automated processes can easily find their way into your CI workflow pipeline (e.g., unit testing). Software should be plug-and-play in any workflow pipeline, taking known inputs and producing expected outputs, like pass/fail.
The CI/CD process should be aborted on failure and a notification email sent immediately, rather than continuing to cycle the pipeline.
The CI process should not cycle again until any errors in the last build are fixed. On failure, the entire team should get the failure notification, including as many details as possible about what failed.
If a pipeline takes one hour from start to finish, including all the testing, then the build interval should be set to no less than one hour, and all new commits should be queued and applied to the next build.
No plain text passwords should exist in automation scripts
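To make the threshold idea from the list above concrete, here is a minimal Python sketch that turns a JUnit-style XML report into a machine-readable pass/fail decision. The threshold value, report file name, and the "high-priority" naming convention are assumptions for illustration only.
import sys
import xml.etree.ElementTree as ET

FAILURE_THRESHOLD = 0.05      # abort if more than 5% of tests fail (assumed value)
HIGH_PRIORITY_PREFIX = "P1_"  # hypothetical naming convention for critical tests

root = ET.parse("test-results.xml").getroot()

total = failed = critical_failed = 0
for case in root.iter("testcase"):
    total += 1
    if case.find("failure") is not None or case.find("error") is not None:
        failed += 1
        if case.get("name", "").startswith(HIGH_PRIORITY_PREFIX):
            critical_failed += 1

failure_rate = failed / total if total else 0.0
print(f"{failed}/{total} tests failed ({failure_rate:.1%})")

# Machine-readable outcome: a non-zero exit code aborts the pipeline step.
if critical_failed or failure_rate > FAILURE_THRESHOLD:
    sys.exit(1)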
If you have the opportunity to define and control the whole database management and db creation process, have a serious look at DB Ghost - it's more than just a tool - it's a process.
If you like it and can implement it, you'll get great returns on it - but it's a bit of an "all-or-nothing" kind of approach. Recommended.
I would caution against using a db backup as a development artifact; most CI best practices suggest that you manage the schema, procedures, triggers, and views as first-class development artifacts. The side effect is that you can take this one step further and use them to build a new database whenever you want; ideally you also have some data that can be pushed into the database.
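As a sketch of what "build a new database from those artifacts whenever you want" can look like, here is a minimal Python example that applies versioned SQL scripts in a fixed order using pyodbc. The folder layout, connection string, and GO-splitting rule are assumptions, not a prescription.
import pathlib
import pyodbc  # assumed driver; any DB-API module works the same way

# Assumed layout: one folder per object type, applied in dependency order.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
            "DATABASE=ci_build;Trusted_Connection=yes")
SCRIPT_DIRS = ["schema", "procedures", "views", "reference-data"]

def batches(sql_text):
    # Split a SQL Server script on GO separators, which are not valid T-SQL.
    batch = []
    for line in sql_text.splitlines():
        if line.strip().upper() == "GO":
            yield "\n".join(batch)
            batch = []
        else:
            batch.append(line)
    if batch:
        yield "\n".join(batch)

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    cursor = conn.cursor()
    for directory in SCRIPT_DIRS:
        for script in sorted(pathlib.Path(directory).glob("*.sql")):
            for batch in batches(script.read_text()):
                if batch.strip():
                    cursor.execute(batch)
            print(f"applied {script}")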
Here is a CliffsNotes version to get your feet wet, but there is a lot out there in this space:
http://www.infoq.com/news/2008/02/versioning_databases_series
I like some of the ideas that Scott Ambler has here as well; the site is good, and the book is surprisingly deep for such a difficult set of problems.
http://www.agiledata.org/
http://www.amazon.com/exec/obidos/ASIN/0321293533/ambysoftinc
Red Gate is quite a robust solution, and it works out of the box.
But the best thing is that you can integrate it with your continuous integration process. I use it with MSBuild and Hudson.
Here's a quick explanation of how it works:
http://blog.vincentbrouillet.com/post/2011/02/10/Database-schema-synchronisation-with-RedGate
If you need to know more about this, feel free to ask.
The Red Gate approach using SQL Source Control and the SQL Compare Pro command line is detailed with code samples here:
http://downloads.red-gate.com/HelpPDF/ContinuousIntegrationForDatabasesUsingRedGateSQLTools.pdf
Troy Hunt wrote an article on Simple Talk entitled "Continuous Integration for SQL Server Databases":
http://www.simple-talk.com/content/article.aspx?article=1247
Have you looked at FluentMigrator? The default download includes NAnt scripts that would be easy to add to a CI setup. Free, open source, and easy to use. Works for a wide variety of databases.
The latest version (5.0) of DB Ghost doesn't suffer from the "non ASCII character" problem (it just means that the file is UTF8 encoded) and it should be able to do exactly what you need.
Also, the tools can be used standalone to perform the various functions (scripting, building, comparing, upgrading, and packaging) if you want; it's just that using them all together provides a full end-to-end process, making the overall value greater than the sum of its parts.
In essence, to make changes to the schema you update individual object creation scripts and per-table insert scripts (for reference data) that are held under source control just like you were developing a “day one” greenfield database. The DB Ghost tools are used to enable the whole thing by building these scripts into a brand new database (using continuous integration if required) and then comparing and upgrading a target database, which can be a copy of the production database. This process produces a delta script which can be used on the real production database during go-live.
You can even produce a Visual Studio database project and add it into any solutions you currently have.
Malc
I know this post is old, but we have a new solution that takes the following approach:
Developers script individual SQL changes and commit them to source control.
Our program (OneScript) pulls the change script files from source control, filters and sorts them, and generates a single release script file.
That release script file is then applied to a database to do a release.
Our home page explains this process in more detail and links to an example that performs these steps automatically from a Subversion hook, so soon after a commit the developer receives an email saying whether the release succeeded or had errors. The PowerScript code is included.
Disclaimer: I work at the company that makes OneScript.
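For readers who just want the flavor of the approach without the tool, here is a rough sketch of the "filter, sort, and concatenate change scripts into one release script" step. It is not OneScript itself; the directory layout and naming convention are assumptions.
import pathlib

# Assumed layout: change scripts with a numeric prefix that keeps commit order,
# e.g. db/changes/0001_create_orders.sql, db/changes/0002_add_index.sql
CHANGE_DIR = pathlib.Path("db/changes")
RELEASE_FILE = pathlib.Path("release.sql")

scripts = sorted(CHANGE_DIR.glob("*.sql"))  # sorted by prefix = commit order
with RELEASE_FILE.open("w") as release:
    for script in scripts:
        release.write(f"-- source: {script.name}\n")
        release.write(script.read_text().rstrip() + "\nGO\n\n")

print(f"wrote {RELEASE_FILE} from {len(scripts)} change scripts")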
