Do organisations have a pipeline per environment or should one CI/CD pipeline deliver to dev, qa and prod?
I'm trying to understand whether it's beneficial to have a continuous deployment pipeline with Jenkins that delivers to dev, tests the build there, and, if the tests pass, deploys it to production.

IMHO one pipeline is enough.
You can minimize environment issues by running the same tests in all three environments (dev, qa and prod).
In Jenkins this can be implemented as three different jobs, one per environment, but always deploy and test the builds in order: dev tested -> qa tested -> prod.
Ideally, if the release build in prod is 1.0.0, then qa should have the next build up, i.e. 1.0.1, and dev might have 1.0.2 or higher.
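For illustration, a minimal sketch of the dev job's shell step, assuming hypothetical deploy/test scripts and a remote-trigger token configured on the downstream job:

    #!/bin/bash
    # Shell step of a hypothetical "deploy-dev" Jenkins job.
    set -e

    ./deploy.sh dev "$BUILD_NUMBER"   # deploy this build to the dev environment
    ./run-tests.sh dev                # same test suite reused later in qa and prod

    # Reached only if the tests passed: hand off to the qa job via Jenkins'
    # remote build trigger ("Trigger builds remotely" enabled on deploy-qa).
    curl -X POST "$JENKINS_URL/job/deploy-qa/build?token=$TRIGGER_TOKEN"

The qa and prod jobs would look the same, each deploying to its own environment and triggering the next one in the chain.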

Valid question. I've seen and worked in environments with both. Neither is superior; it just depends on the need and the SLAs.
If you need separation, for instance you want the devs to have full control over the dev environments but not UAT or prod, then multiple pipelines become easier than figuring out who has access to push what past which stage.
If you've got a small team, everyone knows everyone, and you don't need to restrict anything, then you can start with one pipeline and restrict it later as the team grows.

Related

ReactJS typical workflow - from development to deployment

My use case is composed of several ReactJS projects, where we collaborate using Git.
We are building a Git-based workflow, and this is our current thinking:
Each programmer works locally, fetching the next branch
They build their own branches, but in the end everything is merged back to next
When all the pending tasks are done, we create a branch test from next
Once test is fine, it is branched to beta
When stable, it is branched to stable
This is the development phase; a sketch of the branch flow follows below.
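A rough sketch of that flow in git commands (the feature branch name is hypothetical):

    # Developers branch from next and merge back to next
    git fetch origin
    git checkout -b my-feature origin/next   # hypothetical feature branch
    # ...commit work...
    git checkout next
    git merge my-feature

    # Promotion: next -> test -> beta -> stable
    git checkout -b test next     # when all pending tasks are done
    git checkout -b beta test     # once test is fine
    git checkout -b stable beta   # when stable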
For deployment, our first thought is to build the bundle on test, beta and stable and copy it to the respective servers for running, since we keep built bundles on a normal filesystem (this is how we do it today: several bundles for several versions, not in Git).
Our production environment has dozens of servers at different customers, and every time we need to update, we copy the respective bundle from the correct directory to the server and install it (all bundles are built with an installation tool).
So, I have two doubts here:
a) Is the development workflow a good practice? Any suggestions?
b) How do we make the deployment workflow smoother? Should we keep the bundles together with the code in Git? Should we use something different?
Ideally we would need the servers to auto-update on our command. What would be the correct way to accomplish that?

Octopus Deploy Prevent a Package from Deploying to another environment

I am working with the Octopus deployment tool. We have a situation where we should not promote the binaries from DEV to QA, because some features are still in development. We have another branch, MAIN, from which all the features will be released; from there we will generate builds and deploy to QA and PROD.
To keep the build environment intact, builds from the development branch must be deployed only to DEV and never promoted.
I thought of creating a separate project specifically for the DEV environment.
Before proceeding with this approach, I wanted to know if there is any better solution.
Raaj
You could create a separate lifecycle that has only the DEV environment in it, to prevent releases from being promoted. Octopus has a feature called channels which lets you create releases that can only be deployed to the environments defined within that channel's lifecycle.
https://octopus.com/docs/deployment-process/channels
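For example, assuming a lifecycle named "Dev only" containing just the DEV environment and a channel bound to it (the project, channel and server values below are placeholders), a release created through that channel cannot leave DEV:

    # Octopus CLI: create a release on the channel whose lifecycle
    # contains only the DEV environment, and deploy it there.
    octo create-release \
      --project "MyApp" \
      --channel "Dev only" \
      --deployTo "DEV" \
      --server "https://octopus.example.com" \
      --apiKey "API-XXXXXXXXXXXXXXXX"

Promotion to QA or PROD is then blocked, because the channel's lifecycle ends at DEV.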

Deploying Create-React-App applications into different environments

I've got an app that I'm working on that consists of a Frontend - written using CRA - and a Backend - written in Java. We would like to support deploying this in multiple environments throughout our process, namely:
Local development machine - This is easy. Just run "npm start" and use the "development" environment
Local End-to-end tests - these build an infrastructure in Docker consisting of the Frontend, Backend and Database, and runs all of the tests against that
Development CI system
QA CI System
Pre-production System
Production System
The problem that I've got is: across this myriad of different systems, what is the best way for the Frontend to know where the Backend is? I'm also hoping that the Frontend can be deployed onto a CDN, so if it can be static files with minimal complexity, that would be preferred.
I've got the first two of these working so far, by using .env files with a hard-coded backend hostname to call.
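For example, with CRA's standard env file convention (the hostname here is just the local example value used later in this question):

    # .env.development - picked up automatically by `npm start`
    REACT_APP_BACKEND=http://localhost:8080/backend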
Once I get to the third and beyond, which is my next job, this falls down, because each case has a different hostname to call, but the same npm run build output does the work.
I've seen suggestions of:
Hard-code every option and determine it based on the current browser location in a big switch statement. That just scares me - not only do I have to maintain my deployment properties inside my source code, but the Production source will know the internal hostnames, which is arguably a security risk.
Run different builds with provided environment variables - e.g. REACT_APP_BACKEND=http://localhost:8080/backend npm run build. This will work, but then the code going into QA, PreProd and Prod are actually different and could arguably invalidate testing.
Adding another Javascript file that gets deployed separately onto the server and is loaded from index.html (a sketch follows after this list). That again means that the different test environments are technically running different code.
Use DNS Tricks so that the hostnames are the same everywhere. That's just a mess.
Serve the files from an active web server, and have it dynamically write the URL out somehow. That would work, but we're hoping that the Production one can just be a simple CDN that just serves up static files.
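To make the third suggestion concrete, a sketch with assumed file names: the build output stays identical everywhere, and a deploy step writes a tiny config file that index.html loads before the app bundle.

    #!/bin/bash
    # Hypothetical deploy step. Assumes index.html contains
    #   <script src="/config.js"></script>
    # ahead of the application bundle.
    BACKEND_URL="$1"   # e.g. https://qa-backend.example.com

    npm run build      # identical static output for every environment
    echo "window.BACKEND_URL = '${BACKEND_URL}';" > build/config.js

    # Upload build/ to the CDN or web server as usual; only the generated
    # config.js differs between environments.

The app then reads window.BACKEND_URL at runtime instead of a compile-time constant.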
So - what is the best way to manage this?
Cheers

How can docker help software automation testers?

How can Docker help automation testers?
I know it provides Linux containers, which are similar to virtual machines, but how can I use those containers in automated software testing?
Short answer
You can use Docker to easily create an isolated, reproducible and portable environment for testing. Every dependency goes into an image, and whenever you need an environment to test your application you just run those images.
Long answer
Applications have a lot of dependencies
A typical application has a lot of dependencies on other systems. You might have a database, an LDAP, a Memcache or many other things your system depends on. The application itself needs a certain runtime (Java, Python, Ruby) in a specific version (Java 7 or Java 8). You might also need a server (Tomcat, Jetty, NGINX) with settings for your application. You might need a special folder structure for your application, and so on.
Setting up a test environment becomes complicated
All these things make up the environment your application needs. You need this environment to run your application in production, to develop it and to test it (manually or automated). This environment can become quite complicated, and maintaining it will cost you a lot of time and trouble.
Dependencies become images
This is where Docker comes into play: Docker lets you put your database (with your application's initial data already set up) into a Docker image. The same goes for your LDAP, your Memcache and every other application you depend on. Docker even lets you package your own application into an image which provides the correct runtime, server, folder structure and configuration.
Images make your environment easily reproducible
Those images are self-contained, isolated and portable. This means you can pull them onto any machine and run them as they are. Instead of installing and configuring a database, LDAP and Memcache, you just pull the images and run them. This makes it super easy to spin up a new, fresh environment in seconds whenever you need one.
Testing becomes easier
And that's the basis for your tests, because you need a clean, fresh and reproducible environment to run tests against. "Reproducible" and "fresh" are especially important. If you run automated tests (locally on the developer machine or on your build server) you must use the same environment every time, otherwise your tests are not reliable. "Fresh" matters because it means you can simply stop all containers when your tests are finished, and every data mess your tests created is gone; when you run your tests again, you spin up a new environment that is clean and in its initial state.
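As a sketch (image names and the test entry point are placeholders), one fresh test run could look like this:

    #!/bin/bash
    set -e

    # Spin up a fresh, isolated environment from pre-built images.
    docker run -d --name test-db    postgres:9.6    # DB image with seed data baked in
    docker run -d --name test-cache memcached:1.5
    docker run -d --name test-app --link test-db --link test-cache myorg/myapp:latest

    ./run-integration-tests.sh   # hypothetical test entry point

    # Tear everything down: any data the tests created is gone, so the
    # next run starts from the same clean initial state.
    docker rm -f test-db test-cache test-app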

Boilerplate for a professional PHP team development environment

I would like to get a general consensus on a minimal/boilerplate professional PHP team development environment. I cannot find this information anywhere on the web. In the open-source world there are so many choices and so many ways to do things, but I've yet to find any common best practice for the infrastructure/plumbing side of things.
Consider a small shop with a team of 5-10 developers/designers doing LAMP CRUD apps.
They need to manage development, staging and production builds. They want quality software, and they can't be stepping on each other's toes trying to get things done. Deployment needs to be easy and fast. Sometimes there will be hotfixes. Rolling the production server back to a previous version needs to be just as fast.
Things to consider are:
Source code management (SVN, git, Hg)
Database schema/data continuous integration, tied to source-code revision. This is one I'm particularly interested in.
Individual development environments (e.g. each developer has a VMware instance of the development environment to tinker with (DB server, web server, code, data, etc.))
Managing central development, staging and production builds
Production deployment (e.g. tarballs, .rpm/.deb)
Automated testing (e.g. SVN commit hooks, nightly cron tests for slower tests)
Team communication (bug tracking, internal documentation, irc/im, etc.)
I've left this open to edit by the community so feel free to edit/add. Ideally someone can visit this page and a few hours later have the foundations in place for their team to start developing.
I'll start. Feel free to edit and improve this.
This is for a fictitious product called dundermifflin.com:
1. Set up a development virtual machine running the same software you plan on using in production: e.g. Ubuntu with PostgreSQL, Apache and PHP5.
2. Each developer runs their own copy of this VM with the hostname set to their username (e.g. phpguy.dundermifflin.com).
3. Set up a central staging server (same as the development VM). This is staging.dundermifflin.com.
4. Set up a central Subversion server with a new repository for dundermifflin.com. This is devel.dundermifflin.com.
4a. Add a post-commit hook to run tests for "trunk" commits.
4b. Add a post-commit hook to package/deploy to the staging server for commits tagged "staging".
4c. Add a post-commit hook to package/deploy to the production server for commits tagged "release".
This method does not address database continuous integration, which means rolling back SVN to a previous revision will break the build unless your database is very static. Suggestions?
5. Use Bugzilla on the central Subversion server (devel.dundermifflin.com) for bug tracking.
6. Write a shell script to run PHPUnit/SimpleTest tests, to be called by item 4a; a sketch follows below.
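A sketch of the hook from item 4a calling the test script from item 6 (paths, the test directory and the notification address are assumptions):

    #!/bin/bash
    # hooks/post-commit on devel.dundermifflin.com
    # Subversion passes the repository path and the new revision number.
    REPOS="$1"
    REV="$2"

    # Only run the suite for commits that touched trunk.
    if svnlook changed -r "$REV" "$REPOS" | grep -q "trunk/"; then
        WORKDIR=$(mktemp -d)
        svn export -q "file://$REPOS/trunk" "$WORKDIR/trunk"
        if ! (cd "$WORKDIR/trunk" && phpunit tests/); then
            echo "Revision $REV broke the build" \
                | mail -s "Build failed at r$REV" dev@dundermifflin.com
        fi
        rm -rf "$WORKDIR"
    fi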
For continuous integration linked to your version control system, and for automated unit testing, I find this article very interesting:
Continuous builds with CruiseControl, Ant and PHPUnit
