Choosing the best branching model for developing multiple applications on a common framework - branching-and-merging

I have read many articles about version control systems like SVN and Git and about various branching models (feature-based, release-based, and others), but none of them seemed to fit our project's requirements.
We (the team) are going to develop a framework that will be used as the core for several different applications. So there will be one framework and more than one application built on it. Each application will have the usual project cycle: builds, releases, and so on. The framework itself won't be released, but it may have different tagged versions. During the development of an application, we want to commit some common features back to the framework (if we see that a feature is great and future applications should have it).
So each application is like a separate branch of the framework, but it will never be fully merged back (because it's a separate application), and there is a need to make some commits to the framework (trunk). Some online articles give such commits (without merging the whole branch back to trunk) as negative examples, so we are confused.
What version control system and branching model do you recommend for such a development cycle?

So each application is like a separate branch of the framework, but it will never be fully merged back (because it's a separate application), and there is a need to make some commits to the framework (trunk). Some online articles give such commits (without merging the whole branch to trunk) as negative examples, so we are confused.
This part scares me a bit. If you are going to have a framework, then you need to take care of it like any other lump of code, and you don't want multiple versions running around for any reason except maintenance of existing releases or work on future releases. Each of your "application" projects can have a branch where they modify the framework as required for the application, but I recommend updating the framework trunk often so that it evolves in a way that best serves the needs of all of your applications. In general, when branching code going forward, you want to sync up with the master and put code back into the master as quickly as possible, both to avoid lots of work handling merges and to give others the benefit of your work.
You should put your framework in a separate area (or repository if you are using a DVCS like git or hg) so that it's distinct and may have its own release cycle if necessary.
DVCSs are all the rage these days, git and hg being the most popular, so you should look into them. They have different ways of handling branching, and their power lies in the fact that there is no single central repository, which makes them more flexible and reliable for larger teams.

Related

LaunchDarkly: multi-platform feature flagging and branching questions

Looking at LaunchDarkly for feature flagging across our enterprise apps.
Two questions:
1) I'm concerned about being able to effectively flag features across our Java back end and React front ends (2 of them). What are some strategies that people use to define features appropriately so that they are easy to manage across multiple applications/platforms?
2) Have you replaced most/all of your git / Bitbucket / ?? branching workflow with feature flags and purely trunk-based development? If not, have you made significant changes to your existing git / Bitbucket branching strategy?
Disclaimer: I work at DevCycle
I'm a few years late, but I really wanted to make sure anyone finding their way to this question has a little more information.
1) While levlaz provided an answer explaining that you should put your flag management as far up the stack as possible, I don't necessarily agree that this is the best approach to consider first.
Consider this: A simple setup of a single feature across multiple platforms
Within DevCycle (and others), when you create a Feature, it is available across all platforms and the API.
You simply request features for a user from any platform, and if the user qualifies for a feature, you'll receive it. There is no extra setup necessary to enable it on various platforms.
This means that if a feature is meant to be accessed on either your Java backend or React frontend, you can be guaranteed that the feature will be available at the correct times for the correct user/service regardless of where you call it from.
In short: one single Feature is managed across all platforms in one spot, with one toggle (if desired).
Another approach: a single feature across multiple platforms with different toggles or use cases.
You could easily create multiple flags, one for each platform a feature is meant to be available on, and manage each individually. However, this isn't entirely necessary!
Within a feature's setup, you can simply have two separate rules defining different variations to be delivered to each platform. For example, you can set up a simple rule which ensures that Java will receive the feature but React will not.
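The per-platform targeting idea above can be sketched in a few lines. This is an illustrative model only; the rule shape, the "platform" attribute, and the function names are invented for this example and are not DevCycle's actual SDK API:

```python
# Minimal sketch of per-platform targeting: each rule matches on a user
# attribute ("platform") and selects which variation that platform receives.
# Illustrative only -- not a real feature-flag SDK.

def evaluate_feature(rules, user, default=False):
    """Return the variation from the first rule matching the user's platform."""
    for rule in rules:
        if user.get("platform") == rule["platform"]:
            return rule["variation"]
    return default

# One feature, two rules: Java backends receive it, React frontends do not.
rules = [
    {"platform": "java", "variation": True},
    {"platform": "react", "variation": False},
]

print(evaluate_feature(rules, {"platform": "java"}))   # True
print(evaluate_feature(rules, {"platform": "react"}))  # False
```

The point is that both rules live inside one feature, so there is still a single place to manage it.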
A unique DevCycle approach: Managing multiple platforms independently.
Here is something DevCycle offers that may handle your use case in a unique way:
Imagine every single time you create a feature, both a Java and React version of that feature are created.
These platforms would be managed separately within each feature, meaning there is no potential for feature data accidentally bleeding between platforms in the event that a feature exists on one platform but not on another.
You can set up each platform as an entirely separate entity, meaning they would use different SDK keys, and all targeting will always be separate.
In the example above, the feature would be entirely disabled and unavailable in any Java SDKs calling out to DevCycle, but it would be available in React.
tl;dr
It's up to you how you want to manage things across platforms. DevCycle makes it easy to do this however you'd like: have all features across all platforms, splitting up your platforms, or just choosing to target differently depending on the feature.
2) Like levlaz said, that is the ideal, but you'll likely never want to achieve fully trunk-based nirvana, as there are a lot of use cases for having various environments and paths for your team to take in various scenarios.
That said, we've seen a lot of folks successfully get REALLY close by using Feature Flags.
I wouldn't suggest removing your build pipelines and CI/CD in favor of feature flags; instead, feature flags enhance them.
For example, with feature flags, you can remove the concept of feature branches and large feature pull requests. Instead, ensure that everything that ever gets put into production is always behind a feature flag. To ensure this happens, you can use workflow tools like GitHub Actions that do these safety checks for you. With these guards in place, you should always be able to simply push through to prod without any concerns and run your deploy scripts on each merge. Then you can just target your internal / QA users for testing and not worry about things hitting prod users!
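A minimal sketch of that "everything behind a flag" pattern, using a hypothetical flag client (the class, flag key, and group names are invented for illustration, not any real SDK's API):

```python
# Sketch: gate unfinished work behind a flag so trunk is always deployable.
# `FlagClient` is a hypothetical stand-in for a real feature-flag SDK.

class FlagClient:
    def __init__(self, flags):
        # flags: flag key -> set of user groups the flag is enabled for
        self._flags = flags

    def is_enabled(self, key, user):
        return user.get("group") in self._flags.get(key, set())

client = FlagClient({"new-checkout-flow": {"internal", "qa"}})

def render_checkout(user):
    # New code ships to prod dark; only internal/QA users exercise it.
    if client.is_enabled("new-checkout-flow", user):
        return "new checkout"
    return "old checkout"

print(render_checkout({"group": "qa"}))        # new checkout
print(render_checkout({"group": "customer"}))  # old checkout
```

The merge can go to prod at any time because the new path is dark for everyone outside the targeted groups.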
You may still want some sort of disaster recovery environment and local environments, so you'll never truly hit pure trunk, but you can get close!
[Disclaimer: I work at LaunchDarkly]
For your first question, my general recommendation is to put flags as "high up on the stack" as possible. At the end of the day, you are making a decision somewhere. Where you put that decision point is entirely up to you. Within LaunchDarkly the flags are agnostic to the implementation so a single flag can live on the server, mobile, and client-side without any issues. Keep things simple.
For your second question: in practice, it is very rare to see teams fully make the switch to trunk-based development. This is the goal of 99% of the teams that I work with, but depending on whether you have a greenfield or a brownfield project, the complexity of making the switch may not be worth the effort.
Lastly, our CTO wrote a book this year called "Effective Feature Management"[1]. If you have not heard of it, I would recommend you take a look. I think you'll find some great insights there.
[1] https://launchdarkly.com/effective-feature-management-ebook/

How to implement continuous delivery on a platform consisting of multiple applications which all depend on one database and on each other?

We are working on an old project that consists of multiple applications which all use the same database and strongly depend on each other. Because of the size of the project, we can't refactor the code so that they all use the API as the single gateway to the database. The platform contains the following applications:
Website
Admin / CMS
API
Cronjobs
Right now we want to start implementing a CI/CD pipeline using GitLab. We are currently experiencing problems because we can't update the database for the deployment of one application without breaking all the other applications (unless we deploy all applications at once).
I was thinking about a solution where one pipeline triggers all the other pipelines. Every pipeline would execute all newly added database migrations and test whether its application still works as it should. If all pipelines succeed, the deployment of all applications would be started.
I doubt this is a good solution, because this change would only increase the already high coupling between our applications. Does anybody know a better way to implement CI/CD for our platform?
You have to stop thinking about these as separate applications. You have a monolith with multiple modules; until they can be decoupled, they are all one application and will have to be deployed as such.
Fighting this by pretending they aren't is likely a waste of time; your efforts would be better spent actually decoupling these systems.
There are likely a lot of solutions, but one that I've used in the past is to create a separate repository for the CI/CD of the entire system.
Each individual repo builds that component, and then you can create tags as they are released or ready for CI at a system level.
The separate CI/CD repo pulls in the appropriate tag for each item and runs CI/CD against all of them as one unit. This lets you pin which tag you want for each repo, which should prevent the pipeline from failing when changes are made to the individual components.
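Concretely, the system-level repo can hold a small manifest pinning each component to a tag, plus a script that turns those pins into checkout commands. A sketch under stated assumptions: the repo names, tag values, base URL, and manifest layout below are all invented for illustration:

```python
# Sketch: a system-level CI repo pins each component repo to a released tag,
# then derives the clone commands the pipeline would run. All names here are
# hypothetical examples, not real repositories.

PINS = {
    "website": "v2.3.1",
    "admin-cms": "v1.8.0",
    "api": "v3.0.2",
    "cronjobs": "v1.1.4",
}

def clone_commands(pins, base_url="git@example.com:acme"):
    """Turn the pin manifest into `git clone --branch <tag>` commands."""
    return [
        f"git clone --depth 1 --branch {tag} {base_url}/{repo}.git"
        for repo, tag in sorted(pins.items())
    ]

for cmd in clone_commands(PINS):
    print(cmd)
```

Bumping a pin is then an ordinary reviewed commit in the CI/CD repo, so the system-level pipeline only ever sees component versions that were deliberately promoted.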
Ask yourself why these "distinct applications" are using "one and the same database". Is it because every single one of those "distinct applications" deals with one and the same business semantics? If so, as Rob already stated, then you simply have one single application (and on top of that, there will be no decoupling, precisely because your business semantics are singular/atomic/...).
Or are there discernible portions in the db structure such that a highly accurate mapping could be identified saying "this component uses that portion", etc.? In that case, what is it that causes you to say things like "can't update the database for the deployment of ..."? (By the way, "update the database" is not the same thing as "restructure the database". Please, please, please be precise.) The answer to that will identify what you've got to tackle.

Migrating a branching strategy from ClearCase to TFS 2010

I am in an "internal" IT shop, and we currently use ClearCase for version management. Our branching strategy is common for this, with the main branch being reserved for live code and branches off main for project and hotfix type activities. Each project (and they often overlap) has a branch off main; we don't have multi-tiered branching.
We get into situations where we have to merge between integration branches so that (for example) the release 4 branch picks up all of the release 3 changes before release 3 goes live and is thus baselined. And hotfixes frequently happen while a project is in flight and have to be supported.
However, this isn't really going to be possible in the TFS world, as we don't want to have to drop to the command line to do baseless merging. Yet we need a highly flexible branching capability - something we have gotten really used to with ClearCase.
So ideally we want TFS branches to allow us to have a production baseline, to be able to branch off to do short-term hotfixes, and to be able to branch off to do projects - without actually knowing which of the branches will go live (and thus be baselined) first. Having worked through all of the MS documents, they all appear to be focused on product-type environments - but we are mostly a support and enhancement shop.
I'm looking for recommendations/pointers - I've been a ClearCase admin and can quite happily juggle with branching mentally - but everything I come up with just doesn't look like it will fit with TFS - but this is most probably because my mental process is ClearCase-like and isn't in tune with TFS (yet!)
I don't have much experience with TFS2010, but considering that branches are now first-class citizens in TFS2010, one practical solution would be to consider your enhancement as a "product" and create a patch branch accordingly.
I suppose you have read the TFS2010 Branching Guide.
It does include a branching scenario for addressing hot-fixes issues.
(from the "TFS Branching Guide - Scenarios 2010_20100330.pdf" document)

1 cakePHP core, multiple applications on different servers

I wondered if this would be possible: I'd like to centralize my CakePHP core files in one location and have my several applications use that same core. One reason is that when updating, I'd only need to update one core. Nowadays I always upload the whole CakePHP package with each application.
But my applications are not all on the same server.
Unfortunately, I'm not sure web servers can access files across physical servers; and even if they could via network shares, it would be an incredible performance hit.
Rather, try to automate the core deploy using SVN or rsync tools.
So, while it may technically be possible, I wouldn't advise it.
If your apps are on different servers than your Cake core, you'll need at least all servers to be on the same network so you can mount one server's disk from the others. Otherwise, you'll need to upload the core with each app.
Assuming you can mount the disks, you can use the same Cake core by just replacing the paths in app/webroot/index.php.
I wouldn't suggest doing either, but I would also add that updating the core is good, but not always. Especially if your applications are on live servers (users are working on them).
I have had bad experiences in the past doing such upgrades: there were several places in the application where parts of the code were deprecated and the application stopped working (I am speaking of 1.2 to 1.3 migrations). So my philosophy is: if you start with one version of the framework, keep it the same, unless there is something critical that a newer version will improve or fix.
I am not saying it's bad to upgrade, but be careful.
I'd always advise keeping the core up to date if possible, but each update needs to be well tested before being deployed. Even 1.3.x point updates can cause things to break in edge cases. As such, I'll keep an app updated if it's still under active development, but not necessarily if it's frozen in production.
Tying several different apps to the same core means you need to test all apps together when upgrading just one core. Work quickly multiplies this way. This is especially annoying if you depend on a bug fix in a newer core release, but this new release introduces some obscure problem in another app.
In the end each app is specifically written for a specific version of the Cake core. In theory the API should not change between point releases and things should just keep humming along when upgrading the core, but in practice that's not always how it works. As such, keep each app bundled with a core that it's tested and proven to work with. The extra hard disk space this requires is really no problem these days.

In a distributed architecture, why is it difficult to manage versions?

I see this time and time again. The UAT test manager wants the new build to be ready to test by Friday. One of the first questions asked in the pre-testing meeting is, "What version will I be testing against?" (which is a fair question to ask). The room goes silent, then someone will come back with, "All the assemblies have their own version; just right-click and look at the properties...".
From the testing manager's point of view, this is no use. They want a version/label/tag across everything that tells them what they are working on, and they want this information easily available.
I have seen solutions where the versions of the different areas of a system are stored in a datastore and then shown in the main application's About box. The problem is that this needs to be maintained.
What solutions have you seen that get around this ongoing problem?
EDIT: The distributed system covers VB6, Classic ASP, VB.Net, C#, Web Services (across departments, so which version are we using?), and SQL Server 2005.
I think the problem is that you and your testing manager are speaking of two different things. Assembly versions are great for assemblies, but your test manager is speaking of a higher-level version, a "system version", if you will. At least that's my read of your post.
What you have to do in such situations is map all of your different component assemblies into a system version. You say something along the lines of "Version 1.5 of the system is composed of Foo.Bar.dll v1.4.6 and Baz.Qux.dll v2.6.7 and (etc.)". In a distributed system, you may even want different versions for each of your services, which may, in and of themselves, be composed of different versions of .dlls. You might say, for example: "Version 1.5 of the system is composed of the Foo service v1.3, which is composed of Foo.dll v1.9.3 and Bar.dll v1.6.9, and the Bar service v1.9, which is composed of Baz.dll v1.8.2 and Qux.dll v1.5.2 and (etc.)".
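Such a mapping is easy to make machine-readable, so the answer to "what version am I testing?" comes from one authoritative place instead of people's memories. A sketch using the (invented) component names from the paragraph above:

```python
# Sketch: map a "system version" to the component and assembly versions it is
# composed of. All names and version numbers are illustrative examples.

SYSTEM_VERSIONS = {
    "1.5": {
        "Foo service": {"version": "1.3",
                        "assemblies": {"Foo.dll": "1.9.3", "Bar.dll": "1.6.9"}},
        "Bar service": {"version": "1.9",
                        "assemblies": {"Baz.dll": "1.8.2", "Qux.dll": "1.5.2"}},
    },
}

def describe(system_version):
    """Render the composition of a system version, one line per component."""
    components = SYSTEM_VERSIONS[system_version]
    lines = [f"System v{system_version}:"]
    for name, info in components.items():
        asm = ", ".join(f"{a} v{v}" for a, v in info["assemblies"].items())
        lines.append(f"  {name} v{info['version']} ({asm})")
    return "\n".join(lines)

print(describe("1.5"))
```

In practice this manifest would be generated by the build rather than maintained by hand, which addresses the "needs to be maintained" objection in the question.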
Doing stuff like this is typically the job of the software architect and/or build manager in your organization.
There are a number of tools that you can use to handle this issue that have nothing to do with your language of choice. My personal favorite is currently Jira, which, in addition to bug tracking, has great product versioning and roadmapping support.
You might want to have a look at this page, which explains some ways to integrate consistent versioning into your build process.
There are a number of different things that contribute to the problem. Off the top of my head, here's one:
One of the benefits of a distributed architecture is that we gain huge potential for re-use by creating services and publishing their interfaces in some form or another. What that then means is that releases of a client application are not necessarily closely synchronized with releases of the underlying services. So, a new version of a business application may be released that uses the same old reliable service it's been using for a year. How shall we then apply a single release tag in this case?
Nevertheless, it's a fair question, but one that requires a non-trivial answer to be meaningful.
Don't use build-based version numbering for anything but internal references. When the UAT manager asks the question, you say "Friday's*".
The only trick then is to make sure labelling happens reliably in your source control.
* insert appropriate datestamp/label here
We use .NET and Subversion. All of our application assemblies share a version number, which is derived from manually updated major and minor numbers plus the Subversion revision number (<major>.<minor>.<revision>). We have a prebuild task that updates this version number in a shared AssemblyVersionInfo.vb file. Then when testers ask for the version number, we can give them either the full three-part number or just the Subversion revision. The libraries we consume either aren't changing or their changes are not relevant to the tester.
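The prebuild step described above amounts to composing `<major>.<minor>.<revision>` and writing it into one shared file. A sketch in Python (the answer's real task would be an MSBuild/VB prebuild step; the major/minor values and the exact file contents here are assumptions, only the file name and version scheme come from the answer):

```python
# Sketch of the prebuild versioning step: combine manually maintained
# major/minor numbers with the Subversion revision and render the shared
# AssemblyVersionInfo.vb contents that every assembly compiles in.

MAJOR, MINOR = 2, 4  # updated by hand per release (example values)

def build_version(svn_revision):
    """Compose the <major>.<minor>.<revision> version string."""
    return f"{MAJOR}.{MINOR}.{svn_revision}"

def assembly_version_info(svn_revision):
    """Render the shared VB version file's contents."""
    v = build_version(svn_revision)
    return (
        "' Auto-generated by the prebuild task - do not edit.\n"
        f'<Assembly: System.Reflection.AssemblyVersion("{v}")>\n'
    )

print(build_version(5123))  # e.g. 2.4.5123
print(assembly_version_info(5123))
```

Because every assembly references the same generated file, the whole build answers the tester's question with a single number.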
