In the twelve-factor app method, what does it mean to combine the build with the deploy's config?

I'm finding inspiration in the twelve-factor app approach to organize the deployment process of small applications. I'm stuck on the release stage, where the guidelines sound contradictory.
The twelve-factor app says that the config should not be stored in files but in the environment, e.g. in environment variables. (I imagine that files sitting somewhere on the host can also serve as "config stored in the environment", such as an SSH private key in .ssh/private_key that gives access to some protected resource over SSH.)
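For example, the app might read its config from the environment at startup, along these lines (a minimal sketch; the variable names are illustrative, not prescribed by the guide):

```typescript
// Read twelve-factor style config from environment variables at startup.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const config = {
  // Hypothetical variable names, for illustration only.
  databaseUrl: requireEnv('DATABASE_URL'),
  sshKeyPath: process.env.SSH_KEY_PATH ?? `${process.env.HOME}/.ssh/private_key`,
};
```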
I thus imagine just setting up my various hosts manually, setting those environment variables by hand (or in .bashrc or similar so I don't have to do it again every time they reboot). I usually have only two hosts: my laptop for development and a server for showing my work to others. If I had more hosts, I could think of a way to automate this, but that is out of scope for my question.
Then the twelve-factor guidelines define the release stage as producing a release that contains both the build and the config. This could simply mean sending your build (for example, Docker images of your app) to the target host. Since the built app and the target host's configuration are then in the same place (on the same host), they are de facto combined.
However, I then have no way to uniquely identify a release, nor any possibility of rolling back. To do that, I would have to store the config with the build somewhere so that I can get back to them if I need to. That's where I'm stuck: I can't figure out how one approaches this in practice.
What sounds contradictory is the requirement that config be read from the environment versus the possibility of rolling back to a previous release, which implies a previous config.
Perhaps the following workflow would be an answer, though it may be convoluted:
send the build to the host,
read the host config (environment variables, etc.) and copy it to make a snapshot of that host's config at that moment,
store both the build and the config copy in a uniquely identified place,
so that when you want to run a particular release on a given host, you:
apply that release config to the host environment
run the build which will read the config from the environment
The step of making a snapshot of the environment's config just to apply it again seems somewhat convoluted, and I'd like to know if there is a more sensible way to think about the release stage.
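Concretely, the snapshot-and-store step might look something like this (a hypothetical sketch; the release layout, file names, and variable whitelist are my own assumptions, not anything the twelve-factor guide prescribes):

```typescript
// snapshot-release.ts: capture the host's current config next to a build
// reference under a unique release ID, so the pair can be re-applied later.
import { mkdirSync, writeFileSync } from 'fs';

// Snapshot only the app's own variables, not the whole shell environment.
const CONFIG_VARS = ['DATABASE_URL', 'SSH_KEY_PATH']; // illustrative whitelist

const releaseId = `release-${Date.now()}`; // any unique, ordered ID works
const config: Record<string, string> = {};
for (const name of CONFIG_VARS) {
  const value = process.env[name];
  if (value !== undefined) config[name] = value;
}

mkdirSync(`releases/${releaseId}`, { recursive: true });
writeFileSync(
  `releases/${releaseId}/release.json`,
  // The build reference (e.g. a Docker image tag) is a placeholder here.
  JSON.stringify({ build: 'myapp:some-image-tag', config }, null, 2)
);
```

Rolling back would then mean reading release.json for an earlier ID, exporting its config entries back into the environment, and starting the build it references.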

Related

Replacing the compiled main.js file from one Angular build artifact inside another build's artifacts

We have multiple environments, and we have environment files containing backend configurations that we use during builds.
Example: ng build --c UAT
But I have an issue here: we have now decided to build only once and deploy the same artifact to multiple environments.
I know this is quite achievable using an Angular service and the APP_INITIALIZER token, but for some reason we can't use this.
So I decided to modify the compiled JS file (main.js) after the build with the respective environment's configuration values. But this is becoming difficult because of the increasing number of environment variables and their patterns.
So I thought of following the process below; please suggest whether it is usable or not:
1. I'll build the UAT webpack (dist/artifact) using ng build --c UAT.
2. I'll do the same for all the other environments, so I have three dist folders (webpacks) in total.
3. I'll deploy the UAT artifact to all environments, but before deploying it to Preprod I'll replace the main.js file with the Preprod artifact's main.js file, since only main.js contains the environment configurations, and keep all the other JS files the same.
4. I'll repeat the same for the prod deployment.
Please advise on this approach.
You made a good choice in deciding against environment-specific builds, as they'll always come back to haunt you. But with your approach you have only shifted the problem, since you still need to tweak the build artifact. If your configuration is highly dynamic, I would suggest reconsidering your decision not to use a service to load the data dynamically at runtime, or at least stating the constraints that rule that approach out for you.
Assuming you still want to rely on static file content, the article How to use environment variables to configure your Angular application without a rebuild could be of interest to you. It suggests loading the data from an embedded env.js script and exposing it from there as an Angular service. While one could argue that this also only shifts the problem further, it at least allows your build artifact to remain untouched. For example, if you run your app from, say, an nginx Docker container, you could replace values in env.js dynamically prior to the webserver start. I'm using a simplified version of this approach, and while it still feels a bit hacky, it does work! Good luck!
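For reference, a minimal sketch of that idea (the env.js shape and the window.__env name are assumptions of mine, not necessarily the article's exact code):

```typescript
// env.service.ts: expose values provided by an env.js script that is loaded
// in index.html *before* the Angular bundles. env.js would contain something
// like: window.__env = { apiUrl: 'https://uat.example.com' };
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class EnvService {
  // Values placed on window by env.js; empty object if the script is absent.
  private readonly env: Record<string, string> = (window as any).__env ?? {};

  get apiUrl(): string {
    return this.env['apiUrl'] ?? 'http://localhost:8080'; // hypothetical default
  }
}
```

Since env.js is a plain static file outside the compiled bundles, a container entrypoint can rewrite its values (e.g. with sed against environment variables) before nginx starts, and the build artifact itself stays byte-identical across environments.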

Deploying Create-React-App applications into different environments

I've got an app that I'm working on that consists of a Frontend - written using CRA - and a Backend - written in Java. We would like to support deploying this in multiple environments throughout our process, namely:
Local development machine - This is easy. Just run "npm start" and use the "development" environment
Local End-to-end tests - these build an infrastructure in Docker consisting of the Frontend, Backend and Database, and runs all of the tests against that
Development CI system
QA CI System
Pre-production System
Production System
The problem that I've got is: across this myriad of different systems, what is the best way for the Frontend to know where the Backend is? I'm also hoping that the Frontend can be deployed onto a CDN, so if it can be static files with minimal complexity, that would be preferred.
I've got the first two of these working so far, through the use of .env files and a hard-coded hostname to call.
Once I get to the third and beyond, which is my next job, this falls down, because each case has a different hostname to call, but the same output of npm run build doing the work.
I've seen suggestions of:
Hard-code every option and determine it based on the current browser location in a big switch statement. That just scares me - not only do I have to maintain my deployment properties inside my source code, but the Production source will know the internal hostnames, which is arguably a security risk.
Run different builds with provided environment variables - e.g. REACT_APP_BACKEND=http://localhost:8080/backend npm run build. This will work, but then the code going into QA, PreProd and Prod are actually different and could arguably invalidate testing.
Adding another JavaScript file that gets deployed separately onto the server and is loaded from index.html (sketched after this list). That again means that the different test environments are technically running different code.
Use DNS Tricks so that the hostnames are the same everywhere. That's just a mess.
Serve the files from an active web server, and have it dynamically write the URL out somehow. That would work, but we're hoping that the Production one can just be a simple CDN that just serves up static files.
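For illustration, the separate-file option might look like the following (a hypothetical sketch; config.js and window.APP_CONFIG are names I've made up for the example):

```typescript
// public/config.js: a static file served next to the build output and loaded
// via <script src="/config.js"></script> in index.html. Each environment hosts
// its own copy, while the CRA bundle stays identical everywhere:
//   window.APP_CONFIG = { backendUrl: 'https://qa.example.com/backend' };

// src/config.ts: typed access to that runtime config from the React code.
export interface AppConfig {
  backendUrl: string;
}

declare global {
  interface Window {
    APP_CONFIG?: AppConfig;
  }
}

// Fall back to a local default so `npm start` works without a config.js.
export const appConfig: AppConfig = window.APP_CONFIG ?? {
  backendUrl: 'http://localhost:8080/backend', // hypothetical dev default
};
```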
So - what is the best way to manage this?
Cheers

Converting App Engine frontend versions to modules

I've slightly "abused" the front-end "version" concept in App Engine (java), to implement modules before they were introduced. I have a configuration consisting of: module1-dot-myapp.appspot.com, module2-dot-myapp.appspot.com, module3-dot-myapp.appspot.com, etc., based on the version concept (more commonly used with numbers: 1-dot-myapp, etc.).
Specifically, the code in all versions is identical, but each is practically used for different purposes. This separation allows different clients to use different api versions, separate deployment schedule, staging versions, logs separation, etc.
My question is: under these conditions, what is the best way to convert my application to "real" modules, such that "module1" is an actual module (still mapped to the same URL: module1-dot-myapp.appspot.com)?
Note: my answer comes from a somewhat similar exercise, but in the Python GAE runtime; there is likely additional Java-specific stuff to look at as well.
The first things to look at (possible show-stoppers) are the app-level configs. Those will need to be merged in from your different old app versions (if they exist) and will be shared by all your modules (or directed to the default module only), so they might not work as before; it's best to revisit the latest documentation on these configs:
dispatch file
queue
cron
DB indexes
Note: in multi-module Python apps these configs might not be updated automatically at app upload; each of them may need to be uploaded explicitly, using the respective app configuration utility options.
The separate deployment schedule is almost free (each module can be deployed independently). But there may be some impact due to the app-level configs (multiple CLI invocations instead of a single one, for example)
The logs separation comes for free.
The staging story might need to be revisited, depending on what exactly you mean by that.
Other than that, you'd bring the different old versions of your app into separate module sub-directories in your new app. Check whether your version control system can make this easier. The old app config file(s) would need to be "translated" into the respective module's config file(s), and some of the info would go into the config file in the new app's top directory.
The module URL routing should allow transparent URL mapping, but note that the URLs will actually be <module>-dot-<appname>.appspot.com, and the only way to get exactly the same URLs would be to delete all older app versions before deploying the new one (due to conflicting URLs: <module>-dot-<appname> vs <appversion>-dot-<appname>; I'm not sure whether you'd get the old or the new code serving, or whether it's even possible to deploy the new code without error). You could use a new appname at first, just to get all your ducks lined up before the switchover (possibly a new staging story you might consider going forward).
You might find helpful complementing URL routing with a dispatch file if you didn't have one before.
Finally, if you have identical files shared across modules, you may consider keeping a single per-app copy of each file, symlinked into the respective modules, if that's easier or makes sense from your source code management perspective.

Twist to the standard “SQL database change workflow best practices”

Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source-controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install --patch) from a Release folder that the build packages up. Patches have metadata that helps determine when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in the 3-year-old question:
SQL Server database change workflow best practices
Our Problem/Twist
I think this works well for tables, views, functions and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations.
New client. A BA performs a system study and fit analysis. Out of this comes a configuration Word document of what application configurations need to be set up. Note that some of these may also come in phases over time. We need to get these new configurations into the system for the developer and client UAT.
Developer works on feature request or bug fix. A new configuration change comes out of that change. The configuration needs to make it into the system for testing and promotion to UAT and up.
QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
Build goes to UAT. The client performs acceptance testing but finds they really want to change another unassociated configuration and have it promoted with the changes. In other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
As the client operates in PRD they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure they get into the package. This approach really only took care of item 1 above, and we fell apart on the other items. The nice thing is that, for the items that made it into patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but this requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but it had problems: not all configurations made it into the XML document (this just needs to be fixed by a developer); some of the configurations didn't have a UI in the application, so for those you still had to go to the database manually; comparing the XML documents with a text diff was difficult at times (mostly due to sorting, but I'm sure there are other issues); the XML was not very human-readable; and finally, the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option (b), but over time, for a new client, we just started tracking configs manually and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
So we have been looking at solutions. Eventually it would be great to get in as much automation as possible. I'm looking at going with the scripting approach and just focusing on process and documentation, and at using Redgate data compare in addition to what we had been doing with comparisons of config.xml. With Redgate we have to create views, though, and there is no way to create update scripts from that approach except by updating the scripts manually. It does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe --patch --config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we have to have a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront document. The goal is to try to enhance clarity and reduce missed configurations during promotion. Unfortunately it doesn't improve speed of delivery.
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager, a deployment tool that deploys applications, databases, and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and set these per environment/server.
Substituting the web.config file per environment.
Custom PowerShell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server.
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
This allows you to source control the application configurations and deploy them to different environments.
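To illustrate the variable-substitution idea generically (a toy sketch, not Deployment Manager's actual mechanism; the #{NAME} token syntax is an assumption):

```typescript
// Replace #{NAME} tokens in a config template with per-environment values,
// the same idea a deployment tool applies to web.config before copying it out.
function substitute(template: string, values: Record<string, string>): string {
  return template.replace(/#\{(\w+)\}/g, (_match, name: string) => {
    const value = values[name];
    if (value === undefined) {
      throw new Error(`No value supplied for variable: ${name}`);
    }
    return value;
  });
}

// Usage: one template, different value sets per environment.
const template = '<add key="BackendUrl" value="#{BACKEND_URL}" />';
console.log(substitute(template, { BACKEND_URL: 'https://uat.example.com' }));
```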

Does it require direct XML modification to prepare an SSIS package for different environments?

I am maintaining an SSIS package I didn't build. It creates an output file (.txt) and then emails it to a group. However, the package is currently fully configured for PROD. Some of the components I'll need to modify are: connection managers, pickup and drop-off locations for the text file, mail servers, etc.
Am I going to have to modify the XML directly to get this deployed to other environments?
Please note that I don't have access to do the deployments on these other environments - I simply have to hand it off to another team.
I'd almost post this as a comment, but if you're unfamiliar with SSIS, it's worth noting that there are about 3 ways the package can be linked to the config files.
You can certainly modify the config files. However, I'd regard setting up the config files for the environments as something the ops people should take ownership of. If you need to set up connections for your development environment, you have a slightly more complex problem.
The package may get the location from an environment variable, in which case you can just set up the config file for your development environment and configure the environment variable to point to it. You need to ensure that BIDS is running with the environment variable set, though.
If the configuration is supplied via a switch to DTExec, you might be better off just setting the connections directly in the package. In this case the package won't use the config file unless you specify a path with DTExec /Config.
If the path is hard coded into the package (i.e. a specified path rather than an environment variable) then you can adjust that path. However, the ops people will have to edit this as a part of the deployment process. Alternatively you could write a little .Net app that did the update and use that as a part of the deployment. The downside of this is it introduces scope for human error in deploying the packages.
If you need to maintain your config files manually I'd suggest you frig the indentation so it's a bit more readable. By default SSIS puts no whitespace in the files, but it doesn't mind if you do.
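As a generic illustration of the "little app that does the update" idea (a sketch in TypeScript rather than .NET; the file name and the text-level substitution are simplified assumptions, not the exact .dtsx/.dtsConfig schema):

```typescript
// update-config.ts: rewrite an environment-specific value inside an SSIS
// config file as a deployment step, so nobody edits the XML by hand.
import { readFileSync, writeFileSync } from 'fs';

const path = 'Package.dtsConfig';                  // hypothetical config file
const newServer = process.argv[2] ?? 'UAT-SQL01';  // hypothetical server name

// Naive text-level substitution; a real tool should parse the XML properly.
const xml = readFileSync(path, 'utf8');
const patched = xml.replace(/Data Source=[^;"]+/g, `Data Source=${newServer}`);
writeFileSync(path, patched);
console.log(`Updated Data Source to ${newServer} in ${path}`);
```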
