Feature branch to become develop branch (like a master branch) - reactjs

At the moment, our React project couples code logic (React) with environment (nginx). This means that when it is built and deployed, the two are always consistent.
Now DevOps has put a constraint in place: we have to CD from develop (a bit like a master branch); CD from a feature branch is not accepted. This forces us to both CI and CD from the develop branch. As you recall, our project couples logic + env.
Now the branching looks like this:
develop
feature/release-jun
feature/release-july
feature/release-x
feature/release-y
...
So to CI/CD feature code, each time I need to PR a feature branch (e.g. feature/release-jun) into the develop branch. PRing feature/release-x into develop produces lots of conflicts.
What is the easiest way to make feature/release-jun act as the develop branch during the CI/CD process? After the CI/CD process is done, I need to switch back to the real develop branch.
And after two weeks, another feature branch will become the develop branch for CI/CD.
I have never done a rebase, so I am a newbie. By the way, DevOps has not given us permission to overwrite the develop branch directly, so any change to it has to go through a PR.
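One possible approach, sketched under the constraints above (develop only changeable via PR, no force-push): make the PR merge cleanly by first merging develop into the feature branch with git's "ours" strategy, which keeps the feature branch's content wholesale. Branch names are taken from the question; treat this as a sketch, not a definitive recommendation.
git checkout feature/release-jun
git merge -s ours develop -m "Take feature/release-jun content over develop"  # keeps our tree entirely
git push origin feature/release-jun
# A PR from feature/release-jun into develop now merges without conflicts,
# and develop's tree becomes identical to feature/release-jun.
After the release, the next feature branch can repeat the same steps when its turn comes.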

Related

ReactJS typical workflow - from development to deployment

My use case is composed of several ReactJS projects on which we collaborate using Git.
We are building a workflow around Git, and this is our current thinking:
Each programmer works locally, fetching the next branch
They build their own branches, but at the end it's all merged into next
When all the pending tasks are done, we create a test branch from next
Once test is fine, it is branched to beta
When stable, it is branched to stable
This is the development phase.
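In git terms, the promotion flow above might look like this (a sketch using the branch names from the list):
git checkout next
git checkout -b test         # cut test from next when the pending tasks are done
# ...verify on test...
git checkout -b beta test    # promote to beta once test is fine
git checkout -b stable beta  # promote to stable when beta is stable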
For deployment, our first thought is to "build" the bundle on test, beta, and stable and copy it to the respective servers for running, as we keep built bundles on a normal filesystem (this is how we do it today: we keep several bundles for several versions, not using Git).
Our production environment has dozens of servers at different customers, and every time we need to update, we need to copy the respective bundle from the correct directory to the server and install it (all bundles are built with an installation tool).
So, I have 2 doubts here:
a) Is the development workflow a good practice? Any suggestions?
b) How do we make the deployment workflow smoother? Should we keep the bundles together with the code in Git? Should we use something different?
Ideally we would need the servers to auto-update on our command. What would be the correct way to accomplish that?

Continuous Delivery for a multi-component project

In our project we have multiple components developed by separate teams having separate git repos.
All components have a commit job and a packaging job, and publish their artifacts to Artifactory.
The problem comes when we want to deploy all the components as a system.
Since all these components deploy to separate servers and then interact with each other, inconsistencies often arise when a newer version of a component is deployed to one of the servers.
For example, I have components A, B, and C and want to move versions A1, B1, and C1 through the deployment and testing pipeline. How can I ensure that no newer version of a component is deployed to the QA environment (servers)? I am using Jenkins as my CI/CD tool. It seems I need some integration or lightweight configuration-management tool to manage the versioning of my system as a whole, comprising all components, which I can promote through the deployment pipeline.
I hope I have described my question clearly. Suggestions for tackling this situation would be really helpful.
Thanks.
We use this pattern:
For every customer that uses our products there is one "project": it contains nearly no code, just configuration. We use this naming scheme: coreapp_customerslug.
The project depends on N applications, and it pins the exact versions of all its dependencies.
During CI we do this:
Install project P and all the pinned dependencies.
Then update all dependencies to their latest versions.
Run all tests.
If all tests succeed, update the pinned versions of the dependencies and increment the version of the project.
Now the project has a new and stable release.
Deploy the new release (at the moment we don't do this automatically, but we will in the near future).
With this pattern (the "project" is a container for the apps) you can handle the versioning problem. If you have several servers, the update process should be fast, to avoid running different versions at the same time.
Update
The CI maintains the pinned versions. We use Python and pip, and the file requirements.txt gets updated by a script. We use the version schema YYYY.N; N gets incremented if all tests are OK.
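A minimal sketch of what that CI step could look like (assuming pip and a requirements.txt as described; the unpinned list requirements.in and the pytest command are hypothetical stand-ins):
pip install -r requirements.txt           # install the project with pinned versions
pip install --upgrade -r requirements.in  # update all dependencies to their latest versions
python -m pytest                          # run all tests
if ($LASTEXITCODE -eq 0) {
    pip freeze > requirements.txt         # re-pin the now-current versions
    # ...and increment the project version (YYYY.N schema) in its own metadata
}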
Attention: if app1 has latest version N, this does not mean that it works in all projects. If you have two projects, P1 and P2, this can happen: app1 with latest version N works well in project P1 but fails in P2. This means you can't create a new stable version of project P2. Sometimes this is annoying, but it keeps a constant flow of updates alive. We always use the latest versions of our apps in our projects.

When using Continuous or Automated Deployment, how do you deploy databases?

I'm looking at implementing TeamCity and Octopus Deploy for CI and deployment on demand. However, database deployment is going to be tricky, as many of the applications are old .NET applications with messy databases.
Redgate seems to have a nice plug-in for TeamCity, but the price will probably be a stumbling block.
What do you use? I'm happy to execute scripts, but it's the comparison aspect (i.e. what has changed) that I'm struggling with.
We utilize a free tool called RoundhousE for handling database changes with our project, and it was rather easy to use it with Octopus Deploy.
We created a new project in our solution called DatabaseMigration, included the RoundhousE exe and a folder where we keep the db change scripts for RoundhousE, and then took advantage of how Octopus can call PowerShell scripts before, during, and after deployment (PreDeploy.ps1, Deploy.ps1, and PostDeploy.ps1 respectively). We added a Deploy.ps1 to the project with the following in it:
$roundhouse_exe_path = ".\rh.exe"
$scripts_dir = ".\Databases\DatabaseName"
$roundhouse_output_dir = ".\output"

if ($OctopusParameters) {
    # Variables passed in by Octopus when it runs the deploy script
    $env = $OctopusParameters["RoundhousE.ENV"]
    $db_server = $OctopusParameters["SqlServerInstance"]
    $db_name = $OctopusParameters["DatabaseName"]
} else {
    # Defaults for running the script locally
    $env = "LOCAL"
    $db_server = ".\SqlExpress"
    $db_name = "DatabaseName"
}

# Run RoundhousE against the target database with the change scripts
&$roundhouse_exe_path -s $db_server -d $db_name -f $scripts_dir --env $env --silent -o > $roundhouse_output_dir
In there you can see where we check for any Octopus variables (parameters) that are passed in when Octopus runs the deploy script; otherwise we fall back to some default values, and then we simply call the RoundhousE executable.
Then you just need to have that project be part of what gets packaged for Octopus, and add a step in Octopus to deploy that package; it will be executed as part of each deployment.
We've looked at the RedGate solution and pretty much reached the same conclusion you have; unfortunately it's the cost that is putting us off that route.
The only thing I can think of is to generate version-controlled DB migration scripts based upon your existing database, and then execute these as part of your build process. If you're looking at .NET projects in the future (that don't use a CMS), you could potentially consider using Entity Framework Code First migrations.
I remember looking into this a while back, and for me it seems that there's a whole lot of trust you'd have to put into this sort of process. Auto-deploying to a development or testing server isn't so bad, as the data is probably replaceable... but the idea of auto-updating a UAT or production server might send the willies up the backs of an operations team, who might be responsible for the database, or at least for restoring it if it wasn't quite right.
Having said that, I do think it's the way to go, as it's far too easy to be scared of database deployment scripts, and that's when things get forgotten or missed.
I seem to remember looking at using Red Gate's SQL Compare and SQL Data Compare tools, as (I think) there was a command-line way into them, which would work well with scripted deployment processes like TeamCity, CruiseControl.NET, etc.
The risk and complexity come in more when using relational databases. In a NoSQL database where everything is a "document", I guess continuous deployment is not such a concern. Some objects will have the "old" data structure until they are updated by the newly released code. In this situation your code would potentially need to support different data structures. Missing properties, or properties with a different type, should probably be covered by a well-written, defensively coded application anyway.
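A tiny illustration of that defensive style (the document shape, file name, and property name are hypothetical): tolerate documents that predate the new release.
# Old documents may lack the 'loyaltyPoints' property introduced by the new code,
# so fall back to a default instead of failing.
$doc = Get-Content ".\customer.json" | ConvertFrom-Json
$points = if ($doc.PSObject.Properties["loyaltyPoints"]) { $doc.loyaltyPoints } else { 0 }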
I can see the risk in running scripts against the production database; however, the point of CI and Continuous Delivery is that these scripts will be run and tested in other environments first to iron out any "gotchas" :-)
This doesn't reduce the amount of finger crossing and wincing when you actually push the button to deploy, though!
Database deployment automation is a real challenge, especially when trying to apply the build-once-deploy-many approach used for native application code.
In build-once-deploy-many, you compile the code, create binaries, and then copy them across environments. From the database point of view, the equivalent is to generate the scripts once and execute them in all environments. This approach doesn't handle merges from different branches, out-of-process changes (a critical fix in production), etc.
What I know works for database deployment automation (disclaimer - I'm working at DBmaestro), as I hear from my customers, is the build-and-deploy-on-demand approach. With this method you build the database delta script as part of the deploy (execute) process. Using baseline-aware analysis, the solution knows whether to generate the deploy script for the change, protect the target and not revert it, or pause and allow you to merge changes and resolve the conflict.
Consider a simple solution we have tried successfully in this thread - How to continuously delivery SQL-based app?
Disclaimer - I work at CloudMunch
We use Octopus Deploy and database projects in our Visual Studio solution.
The build agent creates a NuGet package using OctoPack, with a dacpac file and publish profiles inside, and pushes it to the NuGet server.
The release process then uses the SqlPackage.exe utility to generate the update script for the release environment and adds it as an artifact to the release.
The previously created script is executed in the next step with the SQLCMD.exe utility.
This separation of the create and execute steps gives us the possibility of a manual step in between, so that someone can verify the script before it is executed on the live environment; not to mention that the script, saved as an artifact in the release, can always be referred to at any later point.
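A rough sketch of those two steps in PowerShell (server, database, and file names are hypothetical; adjust to your release environment):
# Step 1: generate the update script from the dacpac and keep it as a release artifact.
& SqlPackage.exe /Action:Script `
    "/SourceFile:.\MyDatabase.dacpac" `
    "/TargetServerName:SQLSERVER01" `
    "/TargetDatabaseName:MyDatabase" `
    "/OutputPath:.\MyDatabase.update.sql"
# (manual verification happens between the two steps)
# Step 2: execute the previously generated script.
& sqlcmd.exe -S SQLSERVER01 -d MyDatabase -i ".\MyDatabase.update.sql"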
If there is demand, I can provide more details and step scripts.

Manage cross-platform projects on Github

I'm looking for a tidy way to manage my cross-platform HTML+JS projects on GitHub.
Here's my typical working process:
I finish developing my app for iOS
I start working on the Android platform version
I start working on the XXXXXXX platform
...
From step 2 onwards I end up with:
commits that can be merged into the Head repository
commits that cannot be merged, so I have at least two versions of some of the files that compose the project
My problem is that forking/branching for each platform forces me to duplicate changes to the shared part of the project too. Maybe there's something I'm missing about both branching and forking.
Which method do you use to organize your code on GitHub so as to preserve both the differences and the unity of the project?
It sounds like branches might be the way to go: create an android branch, etc., and if you need to branch those further then create android/branch1, android/branch2 and so forth.
When you need to merge files between branches you might want to use the git cherry-pick command to select the commits to merge. I would also probably do this on a temporary local branch before pushing, to make it easy to recover from screw-ups!
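A small sketch of that cherry-pick flow (branch names from the answer; the commit hash abc1234 is a made-up stand-in for a shared-code commit on another platform branch):
git checkout android
git checkout -b tmp-merge     # temporary local branch, cheap to throw away
git cherry-pick abc1234       # pull over just the commit you want
# ...build and test...
git checkout android
git merge tmp-merge           # fold the result in once it looks good
git branch -d tmp-merge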

version control/maintaining development local copies and working live copies and databases

This is a subject of common discussion, but through all my research I have not actually found a sound answer to this.
I develop my websites offline, and then launch them live through my hosting account.
I use CodeIgniter, and on that basis there are some fundamental differences between my offline and online copies, namely base URLs and database configurations. As such I cannot simply develop and test my websites offline and then upload them, as that requires small configuration changes which are easy to overlook and could lead to a non-working live website.
The other factor is that when I am developing offline, I might add a database table or a column whilst creating some functionality. When I upload my local developments to my host, they often do not work, as I have forgotten to upload the new database structure. Obviously this cannot happen - there cannot be any opportunity for a damaged or broken live website.
Further to this, I'd like to have logs of my development - version control of sorts - such that if I develop a feature and then something else stops working, I can easily look backwards to at least see the code changes which could have caused the breakage.
My fourth requirement is as follows: if I go away on holiday for a week without my development laptop and then get a bug report, I have no way of fixing it. If I fix it on the live copy, not only is that dangerous, but I'll inevitably not update it on my local copy - as such, when I update my live copy next time, that change will be lost. Is there a way that, on any computer, I can access my development setup, edit and test, launch to the live site, whilst also committing it such that my laptop's local copy stays up to date?
So yes, in general I'm looking for a solution to make my development processes more efficient and suitable. Any ideas?
Thanks
Don't deploy by simply copying. Deploy by using a script (I use Apache Ant) that will automate copying the right files for each environment, replacing some values, etc.
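The same idea sketched in PowerShell rather than Ant (paths, the placeholder token, and URLs are all hypothetical):
param([string]$Environment = "production")
$src = ".\src"
$out = ".\build\$Environment"
# Copy the site files into an environment-specific build directory
Copy-Item $src $out -Recurse -Force
# Replace the environment-specific value in the config template
$baseUrl = @{ local = "http://localhost/mysite"; production = "https://www.example.com" }[$Environment]
(Get-Content "$out\config.php") -replace "@BASE_URL@", $baseUrl | Set-Content "$out\config.php"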
This just needs rigor. Make a todo list while developing, and check that every modification on the server is done. You might also test the deploy procedure on a pre-production server which has a similar configuration to the production server, make sure everything is OK, and then apply the same, tested procedure to the production server.
Just use a version control system. SVN or Git are two free candidates.
Make your version control server available from anywhere. If it's an open-source project, free hosting solutions exist. Of course, if you don't have a development computer available, you'll have to check out the whole project, and probably install some tools to be able to develop, test, and deploy. Just try to make it as easy as possible, or always have your laptop available. If you plan to work, have your toolbox with you. If you don't plan to work, then don't work. When you have finished some development, commit to the server. When you go back to your laptop, update your working copy from the server.
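For the "bug report while on holiday" case, the flow could look like this (a sketch assuming the project is hosted on a reachable Git server; the URL is made up):
git clone https://example.com/mysite.git   # get the project onto the borrowed machine
# ...fix the bug, test locally...
git commit -am "Fix bug reported while away"
git push                                   # the server now holds the fix
# ...deploy using the usual scripted procedure...
# Back home, bring the laptop's working copy up to date:
git pull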
Some small additions and clarifications to JB's answer:
Use any VCS that can work well with branches: your local and prod systems are good candidates for separate branches, where you share common code but have branch-specific config. It'll require some changes to your everyday workflow (code in "test", merge finished work into "prod", deploy - by tools, not by hand - only after the merge...), but it's a fair price.
Change your workflow, again. As JB noted: don't deploy by hand, don't deploy the wrong branch, don't deploy "prod" before the merge is finished. Build tools are rather smart nowadays; you can check such preconditions inside the builder.
Just use a VCS; maybe a DVCS would be somewhat better. I say a strong "no-no" to Git as a first VCS, but you have a wide choice even without it: SVN (poor branching and merging compared to a DVCS), Bazaar (not the tool of my dreams, but who knows), Mercurial, Fossil SCM, Monotone.
Don't work on live, and never do anything outside your SCM. One source of changes is the rule of a happy developer. Either don't work at all in your free time, or have your codebase always reachable (free code hosting - Google Code, SourceForge, BitBucket, GitHub, Assembla, Launchpad - or your own server): get it as needed, change, save, deploy.
