LaunchDarkly 1 Project 1 Environment Usage

Has anybody using LaunchDarkly used the following setup to represent the different App1 environments?
ProjectName: App1-Dev, EnvironmentName: app1envdev
ProjectName: App1-QA, EnvironmentName: app1envqa
ProjectName: App1-Prod, EnvironmentName: app1envprod
I understand that you can have multiple environments (dev, qa, prod) within a project, but creating a feature flag within a project creates the flag across all of its environments. The same is true when archiving a flag, which has unintended consequences in the prod environment when the current App1 release in prod still requires the archived flag. I'm aware that you can first turn off the flag in each environment (dev, qa, prod) that App1 is released to, but that would require another App1 release iteration before the flag can finally be archived.
I'm looking into automating the creation of the project and the creation/archiving of flags as part of the CI/CD pipeline. With the 1 Project 1 Environment setup, creating or archiving a flag only affects the environment that App1 is being released to.
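For context, here is a minimal sketch of what that pipeline step could look like against the LaunchDarkly REST API (v2). The project and flag keys below are hypothetical, and the exact endpoints and archive semantics should be confirmed against the current API documentation:

```typescript
// Sketch of a CI/CD step against the LaunchDarkly REST API (v2).
// Assumes Node 18+ (global fetch) and an API access token in LD_API_TOKEN.
// Keys, names, and the archive patch below should be verified against the docs.
const BASE = "https://app.launchdarkly.com/api/v2";
const headers = {
  Authorization: process.env.LD_API_TOKEN ?? "",
  "Content-Type": "application/json",
};

// Create a per-environment project, e.g. "App1-Dev".
async function createProject(name: string, key: string): Promise<void> {
  const res = await fetch(`${BASE}/projects`, {
    method: "POST",
    headers,
    body: JSON.stringify({ name, key }),
  });
  if (!res.ok) throw new Error(`createProject failed: ${res.status}`);
}

// Create a flag inside that project; it only exists in that project's environments.
async function createFlag(projectKey: string, flagKey: string): Promise<void> {
  const res = await fetch(`${BASE}/flags/${projectKey}`, {
    method: "POST",
    headers,
    body: JSON.stringify({ name: flagKey, key: flagKey }),
  });
  if (!res.ok) throw new Error(`createFlag failed: ${res.status}`);
}

// Archive a flag by patching its "archived" field (JSON Patch).
async function archiveFlag(projectKey: string, flagKey: string): Promise<void> {
  const res = await fetch(`${BASE}/flags/${projectKey}/${flagKey}`, {
    method: "PATCH",
    headers,
    body: JSON.stringify([{ op: "replace", path: "/archived", value: true }]),
  });
  if (!res.ok) throw new Error(`archiveFlag failed: ${res.status}`);
}

// Example pipeline usage for the dev-only project (hypothetical keys):
// await createProject("App1-Dev", "app1-dev");
// await createFlag("app1-dev", "new-checkout-flow");
// await archiveFlag("app1-dev", "new-checkout-flow");
```

With one project per environment, the archive call above can only ever touch the environment that project represents.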
Do you have another approach to achieve this goal? Thank you.
Cheers,
Dennis

Related

In the twelve factor app method, what does it mean to combine the build with the deploy's config?

I'm taking inspiration from the twelve-factor app approach to organize the deployment process of small applications. I'm stuck on the release stage, where the guidelines sound contradictory.
The twelve-factor app says that the config should not be stored in files but in the environment, for example in environment variables. (I imagine that files sitting somewhere on the host can also serve as "config stored in the environment", such as an ssh private key in .ssh/private_key that gives access to some protected resource over ssh.)
I thus imagine setting up my various hosts manually, exporting those environment variables by hand (or in .bashrc or similar so I don't have to do it again every time they reboot). I usually have only 2 hosts: my laptop for development and a server for showing my work to others. If I had more hosts, I could think of a way to automate this, but that is out of the scope of my question.
The twelve-factor guidelines then define the release stage as producing a release that contains both the build and the config. This could simply mean sending your build (for example Docker images of your app) to the target host. Since the built app and the target host's configuration are in the same place (on the same host), they are de facto combined.
However, I don't have any way to uniquely identify a release or to roll back. To do that, I would have to store the config with the build somewhere so that I can get back to them if I need to. That's where I'm stuck: I can't figure out how this is approached in practice.
What sounds contradictory is that config should be read from the environment, yet rolling back to a previous release implies restoring a previous config.
Perhaps the following workflow would be an answer, though it may be convoluted:
send the build to the host,
read the host config (environment variables, etc.) and copy them to make a snapshot of this host's config at that moment,
store both the build and the config copy in a uniquely identified place
Such that when you want to run a particular release on a given host, you:
apply that release config to the host environment
run the build which will read the config from the environment
The step of snapshotting the environment's config only to apply it again seems somewhat convoluted, and I'd like to know whether there is a more sensible way to think about the release stage.
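For what it's worth, the snapshot idea can be expressed with very little code. Here is a rough sketch (Node/TypeScript, with a hypothetical allowlist of variable names and output path) that records the relevant environment variables next to a build identifier, so each release is uniquely identified and its config can be re-applied on rollback:

```typescript
// Sketch: snapshot selected env vars alongside a build id to form a "release".
// CONFIG_KEYS and the releases/ directory are hypothetical; adapt to your app.
import { mkdirSync, writeFileSync } from "node:fs";

const CONFIG_KEYS = ["DATABASE_URL", "API_BASE_URL", "SSH_KEY_PATH"];

function snapshotRelease(buildId: string): void {
  const config: Record<string, string> = {};
  for (const key of CONFIG_KEYS) {
    const value = process.env[key];
    if (value !== undefined) config[key] = value;
  }
  const release = {
    id: `${buildId}-${Date.now()}`, // unique release identifier
    buildId,                        // points at the immutable build artifact
    config,                         // config captured at release time
  };
  mkdirSync("releases", { recursive: true });
  writeFileSync(`releases/${release.id}.json`, JSON.stringify(release, null, 2));
}

// To roll back: read an earlier releases/<id>.json, export its config entries
// into the environment, and start the build artifact it references.
snapshotRelease(process.argv[2] ?? "unknown-build");
```

This keeps the twelve-factor rule intact (the app still reads its config from the environment at run time); the snapshot only exists so a release can be identified and restored later.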

Runtime Environment Variables in React for UAT and Live for a single build

Sorry if this has been asked before, but there doesn't seem to be a good solution anywhere.
I have a build pipeline that packages up my react app into a single artifact.
The release pipeline pushes that artifact to different Azure storage accounts for each environment (Dev, UAT, Live).
Surely there is a way to use DevOps variables to configure variables in my package per environment.
Other solutions:
One build per environment - I don't want to do this because I would need to create a branch for each environment, a pipeline for each, the env configs for each, and a release pipeline for each. This means a change to 1 environment takes 3 times as long. Also, the time to build these environments trebles.
Using a JSON file and swapping it out on deployment - this didn't work because webpack imported the JSON file into the build, so although I transformed the config.json files at deployment, it was too late. This seems similar to using env.development and env.live and would mean 3 builds.
Pull environment out of the request URL and call an endpoint - seems like my only option but definitely has flaws.
This isn't an issue in .NET (or Java I believe, .NET is my background) and was solved years ago with web.config and appSettings.
Please let me know if you have solved this, and how.
Thanks for your help
We could use the Replace Tokens task to apply DevOps variables to the package per environment:
The variable format in the .json file is #{TestVar}#.
Then define each key's value in the pipeline Variables, scoped to the stages:
Hope this helps.
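For this to work with a single build, the tokenised file needs to stay outside the webpack bundle (for example in public/) and be read at runtime rather than imported. A rough sketch with hypothetical keys, assuming the Replace Tokens task rewrites public/config.json during each stage's deployment:

```typescript
// public/config.json ships with the build and is tokenised per stage, e.g.:
//   { "apiUrl": "#{ApiUrl}#", "environment": "#{EnvironmentName}#" }
// Because it lives in public/, webpack does not bake it into the bundle,
// so one artifact can serve Dev, UAT, and Live.

export interface RuntimeConfig {
  apiUrl: string;
  environment: string;
}

let cached: RuntimeConfig | undefined;

// Fetch the per-environment config at runtime and cache it.
export async function loadConfig(): Promise<RuntimeConfig> {
  if (!cached) {
    const res = await fetch("/config.json");
    if (!res.ok) throw new Error(`Failed to load config.json: ${res.status}`);
    cached = (await res.json()) as RuntimeConfig;
  }
  return cached;
}

// Usage: call await loadConfig() before rendering, then read apiUrl etc.
```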

ReactJS typical workflow - from development to deployment

My use case is composed of several ReactJS projects on which we collaborate using Git.
We are building a workflow around Git, and this is our current thinking:
Each programmer works locally, fetching the next branch
They build their own branches, but in the end it is all merged back to next
When all the pending tasks are done, we create a branch test from next
Once test is fine, it is branched to beta
When stable, it is branched to stable
This is the development phase.
For deployment, our first thought is to "build" the bundle on test, beta and stable and copy it to the respective servers for running, since we keep built bundles on a normal filesystem (this is how we do it today, keeping several bundles for several versions, not using Git).
Our production environment has dozens of servers at different customers, and every time we need to update, we have to copy the respective bundle from the correct directory to the server and install it (all bundles are built with an installation tool).
So, I have 2 questions here:
a) Is the development workflow a good practice? Any suggestions?
b) How do we make the deployment workflow smoother? Should we keep the bundles together with the code in Git? Should we use something different?
Ideally we would need the servers to auto-update on our command. What would be the correct way to accomplish that?

Octopus Deploy Prevent a Package from Deploying to another environment

I am working with the Octopus deployment tool. We have a situation where we should not promote the binaries from DEV to QA, because some features are still in development. We have another branch, MAIN, from which all the features will be released. From there we will generate builds and deploy to QA and PROD.
In order to keep the build environment intact, we need to build and deploy only to DEV, and those builds should not be promoted.
I thought of creating a separate project specifically for the DEV environment.
Before proceeding with this approach, I wanted to know whether there is a better solution.
Raaj
You could create a separate lifecycle that contains only the DEV environment to prevent releases from being promoted. Octopus has a feature called channels, which lets you create releases that can only be deployed to the environments defined within that unique lifecycle.
https://octopus.com/docs/deployment-process/channels

Do organisations have a pipeline per environment or should one CI/CD pipeline deliver to dev, qa and prod?

I'm trying to understand whether it's beneficial to have a continuous deployment pipeline with Jenkins that deploys to dev, tests it, and, if the tests pass, deploys to production.
IMHO one pipeline is enough.
You can minimize environment issues by running the same tests in the 3 different environments (dev, qa and prod).
A Jenkins implementation could have 3 different jobs, one per environment, but always deploy and test the builds in order: dev tested -> qa tested -> prod.
Ideally, if the release build in prod is 1.0.0, then qa should have the next build up, i.e. 1.0.1, and dev might have 1.0.2 or higher.
Valid question. I've seen and worked in environments with both. Neither is superior; it just depends on the needs and SLAs.
If you need separation, for instance if you want the devs to have full control over the dev environments but not UAT or prod, then multiple pipelines become easier than figuring out who has access to push what past which stage.
If you've got a small team where everyone knows everyone and you don't need to restrict anything, then you can use one pipeline and restrict it later as the team grows.
