Azure Static Web Apps and behavior of dotenv to get environment variables - reactjs

Hi, I created a basic project using Azure Static Web Apps and added two env files to it: ".env" and ".env.production".
I observed that all new Azure Static Web Apps start in the "production" environment by default.
So let's say there's a key "REACT_APP_API_URL" that has different values in the two env files. I was able to switch between the values by setting the appropriate environment.
After that, I wanted to test whether environment variables can override these .env files, so in the pipeline I added an environment variable to modify the key:
trigger:
- dev

pool:
  vmImage: ubuntu-latest

steps:
- task: AzureStaticWebApp@0
  inputs:
    app_location: ""
    output_location: "build"
  env:
    azure_static_web_apps_api_token: $(deployment_token)
    REACT_APP_API_URL: "Some value diff than .env and .env.prod"
It does override the .env files. Can someone kindly explain how it is able to do that?
I have checked the dotenv package docs and I still don't understand which values take priority when both the .env files and environment variables are present.
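My current guess at the precedence, written as a sketch (I'm assuming create-react-app's react-scripts here, which layers dotenv-style files, so please correct me if this is wrong):

# A variable that is already set in the environment wins outright,
# because dotenv never overwrites an existing process.env value:
REACT_APP_API_URL="value-from-pipeline" npm run build   # build sees the pipeline value
# Otherwise, for a production build, .env.production is read before .env,
# and whichever file defines the key first provides the value:
npm run build                                           # build sees the .env.production value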

Related

How can I pass a variable from a GitHub Actions workflow to a GAE app.yaml file?

I have a Django project that I want to put into maintenance mode before I update (migrate) the database.
So, my GitHub workflow deploys my project with a variable MAINTENANCE_MODE set to true. As I understand it, this new deploy will restart any running instances, ensuring all instances only show my 'Site down for maintenance' 503.html page and won't interact with the database.
I then launch a Django VM in GitHub Actions, run my migrations, and run collectstatic.
I set MAINTENANCE_MODE to false and deploy a second time. This re-enables the production server with new code that now accesses the migrated database.
My question is: I am trying to use a single app.yaml file for both deploys. How can I pass the MAINTENANCE_MODE variable from the GitHub Actions workflow to the app.yaml file?
I know you can import secrets like so:
runtime: python38
instance_class: F2
env_variables:
  DB_URL: ${{ secrets.DB_URL }}
But I don't know how to modify a secret in the workflow. Perhaps it's not a secret but some other type of variable one can set in the workflow and access in the app.yaml?
So it appears that Google App Engine's yaml files do not support dynamic environment variable substitution. Static substitution (like using GitHub's secrets) works, because GitHub compiles the file with the secret values before the workflow runs, but there's no clear way to modify a file with a variable that is going to change during a workflow.
A method that does work, however, is to generate a new GAE yaml file during the workflow. Here's what I came up with in the end...
- name: Put in Maintenance mode
  run: |
    MAINTENANCE_MODE=1 envsubst < app_eng_staging.yml.template > app.yaml
    cat app.yaml
    gcloud app deploy --project staging-project --quiet
- name: Collectstatic and migrate
  env:
    RUNNING_ENVIRONMENT: 'Staging_Server'
    DJANGO_DEBUG: 'False'
  run: |
    pipenv run python manage.py collectstatic --noinput
    pipenv run python manage.py migrate
- name: Turn off Maintenance and Deploy
  run: |
    MAINTENANCE_MODE=0 envsubst < app_eng_staging.yml.template > app.yaml
    gcloud app deploy --project staging-project --quiet
The trick is to use the Linux envsubst command. We start with an app_eng_staging.yml.template file, which looks like this:
runtime: python38
instance_class: F2
env_variables:
  RUNNING_ENVIRONMENT: 'Staging_Server'
  StagingServerDB: ${{ secrets.STAGINGSERVER_DB }}
  MAINTENANCE_MODE: ${MAINTENANCE_MODE}
  FRONTEND_URL: ${{ secrets.STAGING_FRONTEND_URL }}
envsubst then populates ${MAINTENANCE_MODE} with the value 1 and the result is saved to a new file app.yaml.
After we finish with our database migration, we can use envsubst to create a new app.yaml with MAINTENANCE_MODE set to zero (off), and re-deploy.
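One caveat, which is an assumption about your template rather than something from the setup above: plain envsubst replaces every $VARIABLE reference it finds, so if the template ever contains other literal dollar signs you can pass a shell-format string to restrict substitution to the one variable:

# Only substitute ${MAINTENANCE_MODE}; any other $... text in the template is left untouched
MAINTENANCE_MODE=1 envsubst '${MAINTENANCE_MODE}' < app_eng_staging.yml.template > app.yaml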
Neat huh?
There is a feature in GitHub Actions for this called "GAE environment variable compiler".
Please read here

How to pass environment variables to the app.yaml using cloud build

The final step of my CI/CD is the deployment using gcloud app deploy, but I can't commit the app.yaml with my environment variables, so how can I deploy using Cloud Build and pass the env variables to the app.yaml?
Here is my cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1800s"
One easy option is to have your environment variables listed in a file (or even the app.yaml file itself) in Cloud Storage. You can then use the cloud-builders/gsutil image to retrieve this file in a build step like this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/env_vars.txt', 'env_vars.txt']
This will copy the file to the /workspace directory. The next build step can then populate the app.yaml file with the environment variables (or even just copy the retrieved app.yaml file to the correct path). The next and final step would be the one you mentioned to deploy the app.
Note that, when executed in the Cloud Build environment, commands are executed with credentials of the builder service account for the project. You'll need to grant access to the file on Cloud Storage to that service account.
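Putting those steps together, a rough sketch of the full cloudbuild.yaml could look like this (the bucket and file names are placeholders, not something from your project):

steps:
# 1. Copy the pre-built app.yaml (or an env-vars file) from Cloud Storage into /workspace
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/app.yaml', 'app.yaml']
# 2. Deploy using the retrieved configuration
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
timeout: '1800s'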

does appengine cloudbuild.yaml require a custom runtime?

The build errors out with the output below (using a Rails app):
ERROR: (gcloud.app.deploy) There is a cloudbuild.yaml in the current directory, and the runtime field in /workspace/app.yaml is currently set to [runtime: ruby]. To use your cloudbuild.yaml to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [ruby] runtime, please remove the cloudbuild.yaml from this directory.
One way to deal with this is to change the name of the cloudbuild.yaml file to, say, cloud_build.yaml (you can also just move the file), then go to your triggers in Cloud Build and change the configuration from "Autodetected" to pointing at your Cloud Build configuration file manually.
See this GitHub issue for some more information.
Cloudbuild.yaml should work with App Engine Flexible without the need to use a custom runtime. As detailed in the error message, you cannot have the app.yaml and the cloudbuild.yaml in the same directory if you are deploying to a non-custom runtime. To remedy the situation, follow these steps:
Move the app.yaml and other ruby files into a subdirectory (use your original app.yaml, no need to use custom runtime)
Under your cloudbuild.yaml steps, modify the argument for app deploy by adding a third one specifying the app.yaml path.
Below is an example:
==================FROM:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
timeout: '1600s'
===================TO:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '[SUBDIRECTORY/app.yaml]']
timeout: '1600s'
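For illustration, assuming the subdirectory is simply named app/ (cloudbuild.yaml stays at the repository root while app.yaml and the Rails code move under app/), the deploy step would become:

steps:
# assumed repository layout:
#   cloudbuild.yaml    (repo root)
#   app/app.yaml       (moved together with the Rails code)
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', 'app/app.yaml']
timeout: '1600s'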

How to use CHE_EXTRA_VOLUME_MOUNT?

Use Case
The code that I wish to edit in Che is downloaded from a private SVN repository and uses a private Nexus repository for Maven dependencies. Because of this I need to use my custom settings.xml from "C:\Users\.m2".
It would also be good to use the local Maven repository, so I did not take the approach of creating a custom Dockerfile that adds settings.xml.
Setup
I created a user environment variable "CHE_EXTRA_VOLUME_MOUNT" with the value "~/.m2:/home/user/.m2".
I can see the env variable from "Docker Quickstart Terminal".
Environment
OS: Windows 7
Docker version: 1.12.6, build 78d1802
Docker image: eclipse/che-server:5.0.0
Problem
Can't see the mount path "/home/user/.m2" in any workspace.
Can someone please help me with this use case?
I see a couple of issues. First, in the che.env file, you should be modifying CHE_WORKSPACE_VOLUME. CHE_EXTRA_VOLUME_MOUNT is an older name that applied to the 4.x releases.
Second, the mount path you are using: the value you provided is likely not going to work well on Windows 7. This is because you are using Boot2Docker on that system, and VirtualBox limits the files that can be mounted to those that live under %userprofile%.
So:
1. First make sure that c:\Users\.m2 is inside %userprofile%, and then:
2. Use the absolute path to your .m2 folder in the mount in the che.env:
CHE_WORKSPACE_VOLUME=/C/Users/<user_name>/.m2:/home/user/.m2
This funky path naming for volume mounts is a limitation in how the Docker client understands volume mounts when you use it from the Windows batch shell.
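For example, the same convention applies when mounting directly with the Docker client from that shell (alpine here is just a throwaway image used to demonstrate the path syntax):

# list the mounted .m2 folder from inside a container to verify the mount works
docker run --rm -v /C/Users/<user_name>/.m2:/home/user/.m2 alpine ls /home/user/.m2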
A matching answer is posted on Che's support site - https://github.com/eclipse/che/issues/3888
Looks like it is a bug in eclipse che. You can create an issue at https://github.com/eclipse/che/issues

Time out error when trying to create Google managed vm

I'm trying to create a managed VM for my Node 4 application using a Google custom runtime.
I created the following Dockerfile:
FROM node:4.2.1
ENV PORT 8080
ADD package.json package.json
RUN npm install
ADD . .
CMD [ "npm", "start" ]
Along with this app.yaml:
# [START runtime]
runtime: custom
vm: true
api_version: 1
# [END runtime]
health_check:
  enable_health_check: false
skip_files:
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?.*/node_modules/.*$
- ^(.*/)?.*\.log$
I deploy the app using gcloud preview command:
gcloud preview app deploy app.yaml --promote
It seems like the Docker image is being built correctly, but at the end of the process I get this message:
Copying files to Google Cloud Storage...
Synchronizing files to [gs://staging.my-project-id.appspot.com/].
Updating module [default]...\Deleted [https://www.googleapis.com/compute/v1/projects/my-project-id/zones/us-central1-f/instances/gae-builder-vm-20151030t142257].
Updating module [default]...failed.
ERROR: (gcloud.preview.app.deploy) Error Response: [4] Timed out creating VMs.
I have my deployment working now. I have had to troubleshoot the same problem before, for another project, but I didn't have the code on hand, so I had to work through the problems again.
The deployment ran smoothly up until the last steps, where updating the module would time out. This made me think it was something to do with the application starting up on the VM and not responding appropriately, so the final hook would time out.
You'll find a lot of information here - https://cloud.google.com/appengine/docs/managed-vms/config . I checked the following things:
logging - ensure that you are writing to the correct log file. See https://cloud.google.com/appengine/docs/managed-vms/custom-runtimes#logging
ensure you have a .dockerignore file and are skipping files in app.yaml so you are not asking the process to copy across unneeded node_modules or log files
turn off health checking if you are not using it, or ensure you have the correct express.js routes configured for it
check that your environment variables are set and match what GAE can use. This was my final step - GAE will let you bind to a VM port on 8080. I had to pass through a NODE_ENV flag in my app.yaml which told the app to use 8080 and not 3000.
Lift the resources of the GAE instance in app.yaml. I specified two logical CPUs and made the RAM 2 GB (see the sketch after this list).
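For reference, a sketch of what those last two additions look like in app.yaml; the values here are illustrative rather than my exact config:

runtime: custom
vm: true
api_version: 1

env_variables:
  NODE_ENV: production   # example flag telling the app to listen on 8080 instead of 3000

resources:
  cpu: 2
  memory_gb: 2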
Good luck.
