How to set up dynamic SVN branches - a deviation from normal strategies - branching-and-merging

I have a not-so-normal requirement.
I have an SVN project with one trunk, one dev branch, and many feature branches. Every team member pulls a feature branch from dev (not in any particular sequence) and starts coding; we all work in parallel.
Suppose I have 10 requirements, so 10 feature branches (1, 2, ..., 10), all worked on in parallel. After coding, we merge all 10 branches into the dev branch and deploy to UAT, where the dev branch is up to date with all the code. But when it comes to production, there are cases where only some features, say 4 of them (1, 5, 9, 10), need to be promoted. We then have to retrofit code back from trunk: get the latest code from trunk, cherry-pick the selected changes from the dev branch, merge them into trunk, and push for another round of UAT. This process is very resource-intensive and highly error-prone.
Is there a way I can pick any random feature branch, merge it into dev and finally into trunk in one go, without any retrofitting, and save multiple rounds of UAT?
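For reference, the cherry-picking step itself is `svn merge -c REV` into a trunk working copy. A minimal sketch using a throwaway local repository (file:// URL), so the commands can run anywhere `svn` is installed; file names and revision numbers are illustrative:

```shell
set -e
cd "$(mktemp -d)"
svnadmin create repo
URL="file://$PWD/repo"
svn mkdir -q -m "layout" "$URL/trunk" "$URL/branches"          # r1
svn copy -q -m "create dev" "$URL/trunk" "$URL/branches/dev"   # r2
svn checkout -q "$URL/branches/dev" dev && cd dev
echo "feature 1" > f1.txt && svn add -q f1.txt && svn commit -q -m "feature 1"  # r3
echo "feature 2" > f2.txt && svn add -q f2.txt && svn commit -q -m "feature 2"  # r4
cd .. && svn checkout -q "$URL/trunk" trunk && cd trunk
# Cherry-pick only r3 (feature 1) from dev into trunk; r4 stays behind
svn merge -c 3 "$URL/branches/dev" .
svn commit -q -m "promote feature 1 only"
ls   # trunk now has f1.txt but not f2.txt
```

Subversion records mergeinfo for the cherry-picked revision, so a later full merge of dev into trunk will skip r3 automatically.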

So we had to devise a strategy after some deliberations:
Enable a new prod-equivalent environment (preprod)
The new preprod will always be equivalent to prod in terms of code/config
Let users sign off everything on UAT only
Before any prod promotion, users have to confirm the features to be promoted to prod well in advance
Once confirmed, we cherry-pick/merge these changes into release or trunk and push to preprod
Ask users to confirm on preprod
Finally, push trunk/release to prod

Related

Feature branch to become develop branch (like a master branch)

At the moment, our React project contains code logic (React) + env (nginx). This means that when it is built and deployed, it will always be consistent.
Now DevOps have put a constraint that we have to CD from develop (a bit like a master branch); a feature branch is not accepted. This forces us to CI and CD from the develop branch. As you recall, our project couples logic + env.
now the branching looks like this:
develop
feature/release-jun
feature/release-july
feature/release-x
feature/release-y
...
So to CI/CD feature code each time, I need to PR (feature/release-jun) to the develop branch, then PR (feature/release-x) to develop, which has lots of conflicts.
What is the easiest way to make feature/release-jun act as the develop branch during the CI/CD process? After the CI/CD process is done, I need to switch back to the real develop branch.
And after 2 weeks, another feature branch will become the develop branch to CI/CD.
I have not done any rebase, so I am a newbie. By the way, DevOps has not given us permission to overwrite the develop branch directly, so to change it, it needs a PR.
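One common way to tame the conflicts (a hedged sketch, not a way around the PR requirement): merge develop into your feature branch first and resolve conflicts there, so the PR back into develop merges cleanly. Demonstrated with a throwaway local repo; branch and file names are illustrative:

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email you@example.com && git config user.name you
echo "base" > app.js && git add . && git commit -qm "base"
git branch -M develop
git checkout -qb feature/release-jun
echo "june feature" >> app.js && git commit -qam "june work"
git checkout -q develop
echo "other merged work" > other.js && git add . && git commit -qm "other work"
# Step 1: bring develop into the feature branch; conflicts get fixed here
git checkout -q feature/release-jun
git merge -q -m "sync with develop" develop
# Step 2: the PR (simulated here as a local merge) is now conflict-free
git checkout -q develop
git merge -q --no-ff -m "PR: feature/release-jun" feature/release-jun
git log --oneline -1
```

Doing step 1 regularly (not just before the PR) keeps each conflict small instead of letting them pile up for weeks.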

Do organisations have a pipeline per environment or should one CI/CD pipeline deliver to dev, qa and prod?

I'm trying to understand if it's beneficial to have a continuous deployment pipeline with Jenkins that delivers to dev, tests it, and if it passes, deploys to production.
IMHO one pipeline is enough.
You can minimize environment issues by running the same tests in the 3 different environments (dev, qa and prod).
The Jenkins implementation can be 3 different jobs, one per environment, but always deploy the builds and run the tests in order: dev tested -> qa tested -> prod.
Ideally, if the release build in prod is 1.0.0, then qa should have one build ahead, i.e. 1.0.1, and dev might have 1.0.2 or higher.
Valid question. I've seen and worked in environments with both. Neither is superior; it just depends on the need and SLAs.
If you need separation, for instance, you want the devs to have full control over the dev environments but not uat or prod, then multiple pipelines become easier than figuring out who has access to push what past what stage.
If you've got a small team and everyone knows everyone and you don't need to restrict anything then you can do one pipeline and then restrict it later as it grows.
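The "one pipeline, ordered stages" idea from the first answer can be sketched in a few lines of shell: the same artifact is promoted dev -> qa -> prod, and a failure at any stage stops the promotion. `deploy` and `run_tests` are stand-in stubs, not real Jenkins steps:

```shell
set -e
ARTIFACT="app-1.0.2.tar.gz"   # hypothetical build artifact

deploy()    { echo "deploying $ARTIFACT to $1"; }
run_tests() { echo "tests passed in $1"; }   # stub: real test suite goes here

# Promote the same artifact through the environments, in order
for ENV in dev qa prod; do
  deploy "$ENV"
  run_tests "$ENV" || { echo "stopping promotion: $ENV failed" >&2; exit 1; }
done
```

In Jenkins this maps naturally to either three chained jobs (each triggering the next on success) or one pipeline with three stages; the ordering guarantee is the part that matters.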

How am I supposed to manage db revisions alongside codebase revisions?

We have a Rails app with a PostgreSQL database. We use git for version control.
We're only two developers on the project, so we both have to do a little of everything, and when emergencies arise we often have to drop everything to address them.
We have a main branch (called staging just to be difficult 🌚) which we only use directly for quick fixes, minor copy changes, etc. For bigger features, we work on independent feature branches.
When I work on a feature that requires changes to the database, I naturally have to create migrations that alter the schema. Let's say I'm working on feature-emoji, and I create a migration 20150706101741_add_emoji_to_users.rb. I run rake db:migrate to get on with my work.
Later, I'm informed of some bug I need to address. I switch to staging to start work on it; however, now my app will misbehave because the db schema does not match what the app expects. So before doing git checkout staging, I have to remember to do rake db:rollback. And then later when I switch back to feature-emoji, I have to run rake db:migrate again.
This whole flow is sort of okay-ish when dealing with just two branches, but when the git rebases and git merges happen, it gets complicated.
Is there no better way to handle versioning of code and db in parallel? Or am I doomed to have to run annoying rake tasks every time I want to change branches?
There is no easy answer to this. You could perhaps set up something like a git hook to check for changes to schema.rb, and fail the checkout if any are present; but there are lots of edge cases to check for in such a setup.
Ultimately, the responsibility lies with the human developer to restore untracked parts of their environment — e.g. the database — to a clean state before switching branches.
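The git-hook idea from the answer can at least be sketched. One caveat: git has no pre-checkout hook, so a `post-checkout` hook can only warn after the switch has happened, not fail it. A hedged demo in a throwaway repo, with the `db/schema.rb` path from the question:

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email you@example.com && git config user.name you
mkdir -p db && echo "schema version 1" > db/schema.rb
git add . && git commit -qm "initial schema"
git branch -M staging
git checkout -qb feature-emoji
echo "schema version 2" > db/schema.rb
git commit -qam "add emoji migration"
# Install the hook; git passes: $1 = previous HEAD, $2 = new HEAD,
# $3 = 1 for a branch checkout
cat > .git/hooks/post-checkout <<'EOF'
#!/bin/sh
if [ "$3" = "1" ] && ! git diff --quiet "$1" "$2" -- db/schema.rb; then
  echo "WARNING: db/schema.rb differs - remember rake db:migrate / db:rollback" >&2
fi
EOF
chmod +x .git/hooks/post-checkout
git checkout staging 2> msgs.txt
cat msgs.txt
```

It cannot run the rake tasks safely on its own (the rollback must happen on the *old* branch, before the switch), but the warning removes the "I forgot" failure mode.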

version control/maintaining development local copies and working live copies and databases

This is a subject of common discussion, but through all my research I have not actually found a sound answer to this.
I develop my websites offline, and then launch them live through my hosting account.
I utilize CodeIgniter, and on that basis there are some fundamental differences between my offline and online copies, namely base URLs and database configurations. As such, I cannot simply develop and test my websites offline and then upload them, as that requires small configuration changes which are easy to overlook and could lead to a non-working live website.
The other factor is that when I am developing offline, I might add a database table or a column whilst creating some functionality. When I upload my local developments to my host, they often do not work because I have forgotten to upload the new database structure. Obviously this cannot happen - there cannot be any opportunity for a damaged or broken live website.
Further to this, I'd like to have logs of my development - version control of sorts - such that if I develop a feature and then something else stops working, I can easily look backwards to at least see the code changes which could have caused the breakage.
My fourth requirement is as follows: if I go away on holiday for a week without my development laptop and then get a bug report, I have no way of fixing it. If I fix it on the live copy, not only is it dangerous, but I'll inevitably forget to update my local copy - so when I update the live copy next time, that change will be lost. Is there a way that, from any computer, I can access my development setup, edit and test, launch to the live site, and also commit it so that my laptop's local copy stays up to date?
So yes, in general I'm looking for a solution to make my development processes more efficient/suitable. Any ideas?
Thanks
Don't deploy by simply copying. Deploy using a script (I use Apache Ant) that automates the copying of specific files for each environment, the replacement of some values, etc.
This just needs rigor. Make a todo list while developing, and check that every modification on the server is done. You might also test the deploy procedure on a pre-production server which has a similar configuration to the production server, make sure everything is OK, and then apply the same, tested procedure to the production server.
Just use a version control system. SVN and Git are two free candidates.
Make your version control server available from anywhere. If it's an open-source project, free hosting solutions exist. Of course, if you don't have a development computer available, you'll have to check out the whole project and probably install some tools to be able to develop, test and deploy. Just try to make it as easy as possible, or always have your laptop available. If you plan to work, have your toolbox with you. If you don't plan to work, then don't work. When you have finished some development, commit to the server. When you go back to your laptop, update your working copy from the server.
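The copy-and-replace step from the first point (the answer uses Apache Ant; plain shell shown here as a minimal sketch) boils down to copying the site into a build directory and substituting environment-specific values. File names and placeholders are hypothetical:

```shell
set -e
cd "$(mktemp -d)"
mkdir -p src build
# Source config keeps placeholders instead of real values
cat > src/config.php <<'EOF'
$config['base_url'] = '@BASE_URL@';
$config['db_name']  = '@DB_NAME@';
EOF

deploy() {  # usage: deploy <base_url> <db_name>
  cp -r src/. build/
  sed -i.bak "s|@BASE_URL@|$1|; s|@DB_NAME@|$2|" build/config.php
  rm -f build/config.php.bak
}

deploy "https://example.com" "live_db"
cat build/config.php
```

Because the real values only ever enter through the script, the "forgot to change the base URL" class of mistake disappears; the same `deploy` call with local values produces the offline copy.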
Small additions and clarifications to JB's answer:
Use any VCS which handles branches well - your local and prod systems are good candidates for separate branches, where you share common code but have branch-specific config. It'll require some changes in your everyday workflow (code in "test", merge finished work into "prod", deploy /by tools, not by hand/ only after the merge...), but it's a fair price.
A change of workflow, again. As JB noted - don't deploy by hand, don't deploy the wrong branch, don't deploy "prod" before the merge is finished. Modern build tools are rather smart; you can check such preconditions inside the builder.
Just use a VCS; maybe a DVCS will be somewhat better. I say a strong "no-no" to Git as a first VCS, but you have a wide choice even without it - SVN (poor branching/merging compared to a DVCS), Bazaar (not the tool of my dreams, but who knows), Mercurial, Fossil SCM, Monotone.
Don't work on live; never do anything outside your SCM. One source of changes is the rule of a happy developer. Either don't work at all in your free time, or have your codebase always reachable (free code hosting /Google Code, SourceForge, BitBucket, GitHub, Assembla, Launchpad/ or your own server): get it as needed, change, save, deploy.

Delivering hot fixes using Flyway

Let's consider the What is the best strategy for dealing with hot fixes? question from the Flyway FAQ section. In this question:
Application version 7 (and DB version 7) is deployed in production
Work starts on app version 8
DB version 8 is developed and deployed in the acceptance test environment
Bug is identified in production
DB version 7.1 is developed and must be acceptance-tested
When flyway:migrate is invoked against the acceptance test environment, it will notice that v8 has already been executed and conclude that there is no need to execute v7.1.
On one side this makes sense, since v7.1 might not be compatible with v8, and it is not up to Flyway to analyze this. Fail-fast is entirely understandable.
On the other side, the only way to deploy v7.1 to the acceptance test environment is to clean the database and run flyway:migrate with target = v7.1, thereby discarding data that might still have been useful.
Is there a feature I'm not aware of that handles this case, or is clean + migrate with target=v7.1 the only option?
More than a different feature, it's about a different process.
If you do wish to keep the data in your acceptance environment, I would recommend shipping v8 of the database together with the hotfix; the actual fix then becomes v8.1. The features of the v8 schema might remain unused until the corresponding code gets deployed, but in most cases this causes no harm.
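In terms of migration files, the recommended process amounts to numbering the hotfix *after* the in-flight release rather than between releases. A sketch of the layout (file names are hypothetical):

```
sql/
  V7__release_7.sql    -- already applied in production
  V8__release_8.sql    -- already applied in acceptance test
  V8_1__hotfix.sql     -- the production fix, versioned after v8
```

Running migrate against acceptance test then applies only V8_1, keeping the data; running it against production applies V8 and V8_1 together, which is exactly "shipping v8 of the database with the hotfix".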
