app.yaml env_variables accessible as envs during cloud build steps - google-app-engine

As described in the title, some frameworks like Next.js or Nuxt.js need env vars that are defined in app.yaml (normally only accessible at runtime) to also be available during the build step, mainly npm run build.
The workaround I'm using at the moment is to define the same env vars in two different places: app.yaml and the Cloud Build trigger env vars. It is not ideal at all.

Agreed, maintaining the same value in two places is the best way to lose consistency and create bugs. So the best solution is to define these variables only once.
Put them where you want:
In a file (.env file for example)
In your trigger configuration
In the app.yaml file
But only once! Then create a step in your Cloud Build job that parses the configuration/variable file, extracts the values and puts them in the correct location before continuing the build process.
grep, cut and sed work well for this. I can provide example files if you need a code sample for this substitution.
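For example, here is a minimal sketch of such a step, assuming the values live in a .env file of KEY=VALUE lines and that app.yaml uses ##KEY## placeholders (both the file layout and the placeholder style are assumptions, not a fixed convention):
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      # Read each KEY=VALUE line of the .env file and replace the
      # matching ##KEY## placeholder in app.yaml before deploying.
      while IFS='=' read -r key value; do
        sed -i "s|##$${key}##|$${value}|g" app.yaml
      done < .env
      cat app.yaml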
Edit 1
According to your comment, there are a few things to know. Cloud Build is great!! But Cloud Build is also annoying...
Env var management is a perfect example. The short answer is: it's not possible to reuse an env var defined in step N in step N+x.
To work around this, you have to do ugly things, like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      MY_ENV_VAR="cool"
      echo $${MY_ENV_VAR} > env-var-save.file
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      MY_ENV_VAR=$$(cat env-var-save.file)
      echo $${MY_ENV_VAR}
Because only the /workspace directory is shared between steps, you have to save the env vars to a file in one step and read them back in the later step where you need them.
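If you have several variables to carry across steps, a variant of the same trick (my sketch, with hypothetical variable names) is to write them into a sourceable file under /workspace and source it later:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      # Save several variables in shell syntax so they can be sourced later.
      echo 'API_URL="https://example.com"' >> /workspace/build-env.sh
      echo 'FEATURE_FLAG="on"' >> /workspace/build-env.sh
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      # Re-load the variables saved by the previous step.
      source /workspace/build-env.sh
      echo $${API_URL} $${FEATURE_FLAG}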
About the app.yaml file, you can do something like this.
app.yaml file content example:
runtime: go114
service: go-serverless-oracle
env_variables:
  ORACLE_IP: "##ORACLE_IP##"
Example of a Cloud Build step that gets the value from a substitution variable in Cloud Build:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      sed -i -e 's/##ORACLE_IP##/$_ORACLE_IP/g' app.yaml
      cat app.yaml

substitutions:
  _ORACLE_IP: 127.0.0.1
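The _ORACLE_IP value can then come from the trigger's substitution settings, or be passed on the command line when you start the build manually; for instance (the cloudbuild.yaml file name is an assumption):
gcloud builds submit --config=cloudbuild.yaml --substitutions=_ORACLE_IP=10.0.0.5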

Related

How/where does Google App Engine cache YAML (changing `script`)?

Originally I had this in my app.yaml:
runtime: python39
handlers:
- url: .*
  script: main.app
I changed the script filename to gae.py and so updated app.yaml to this:
runtime: python39
handlers:
- url: .*
  script: gae.app
The new version no longer starts up:
ModuleNotFoundError: No module named 'main'
I have tried changing the url but it's still trying to use a now-non-existent main.py to load the WSGI application. When I view the source of the version I see the correct app.yaml and file structure; I don't see main mentioned anywhere.
Any ideas?
The problem is not caching but rather that script is no longer used (and is ignored) in the Python 3 world of Google App Engine standard:
script: Optional. Specifies that requests to the specific handler should target your app. The only accepted value for the script element is auto because all traffic is served using the entrypoint command. In order to use static handlers, at least one of your handlers must contain the line script: auto or define an entrypoint element to deploy successfully.
(https://cloud.google.com/appengine/docs/standard/python3/config/appref#handlers_element)
The fix is to override entrypoint which defaults to:
/bin/sh -c exec gunicorn main:app --workers 2 -c /config/gunicorn.py
(https://cloud.google.com/appengine/docs/standard/python3/runtime#application_startup)
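For example, a minimal app.yaml sketch that overrides the entrypoint, assuming the WSGI object is app inside gae.py as in the question (the worker count is arbitrary):
runtime: python39
# Point gunicorn at gae:app instead of the default main:app.
entrypoint: gunicorn -b :$PORT gae:app --workers 2
handlers:
- url: .*
  script: auto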

How can I pass a variable from github actions workflow to a GAE app.yaml file?

I have a django project I want to put into maintenance mode before I update (migrate) the database.
So, my GitHub workflow:
1. Deploys my project with a variable MAINTENANCE_MODE set to true. As I understand it, this new deploy will restart any running instances, ensuring all instances only show my 'Site down for maintenance' 503.html page and won't interact with the database.
2. Launches a Django VM in GitHub Actions, runs my migrate, runs collectstatic.
3. Sets MAINTENANCE_MODE to false and deploys a second time. This re-enables the production server with new code that now accesses the migrated database.
My question is: I am trying to use a single app.yaml file for both deploys. How can I pass the MAINTENANCE_MODE variable from the GitHub Actions workflow to the app.yaml file?
I know you can import secrets like so:
runtime: python38
instance_class: F2
env_variables:
  DB_URL: ${{ secrets.DB_URL }}
But I don't know how to modify a secret in the workflow. Perhaps it's not a secret but some other type of variable one can set in the workflow and access in the app.yaml?
So it appears that Google App Engine's yaml files do not support dynamic environment variable substitution. Static substitution (like using GitHub's secrets) works, because GitHub fills in the value before the workflow runs, but there's no clear way to modify the file with a variable that is going to change during the workflow.
A method that does work, however, is to generate a new GAE yaml file during the workflow. Here's what I came up with in the end...
- name: Put in Maintenance mode
  run: |
    MAINTENANCE_MODE=1 envsubst < app_eng_staging.yml.template > app.yaml
    cat app.yaml
    gcloud app deploy --project staging-project --quiet
- name: Collectstatic and migrate
  env:
    RUNNING_ENVIRONMENT: 'Staging_Server'
    DJANGO_DEBUG: 'False'
  run: |
    pipenv run python manage.py collectstatic --noinput
    pipenv run python manage.py migrate
- name: Turn off Maintenance and Deploy
  run: |
    MAINTENANCE_MODE=0 envsubst < app_eng_staging.yml.template > app.yaml
    gcloud app deploy --project staging-project --quiet
The trick is to use the Linux envsubst command. We start with an app_eng_staging.yml.template file, which looks like this:
runtime: python38
instance_class: F2
env_variables:
  RUNNING_ENVIRONMENT: 'Staging_Server'
  StagingServerDB: ${{ secrets.STAGINGSERVER_DB }}
  MAINTENANCE_MODE: ${MAINTENANCE_MODE}
  FRONTEND_URL: ${{ secrets.STAGING_FRONTEND_URL }}
envsubst then populates ${MAINTENANCE_MODE} with the value 1 and the result is saved to a new file, app.yaml.
After we finish with our database migration, we use envsubst again to create a new app.yaml with MAINTENANCE_MODE set to zero (off), and re-deploy.
Neat huh?
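One caveat worth noting: by default envsubst replaces every $VARIABLE or ${VARIABLE} reference it finds in the input, and GNU envsubst accepts a SHELL-FORMAT argument to restrict substitution to only the variables you name. A sketch of the safer call, using the same hypothetical file names as above:
# Only substitute ${MAINTENANCE_MODE}; leave any other $ expressions untouched.
MAINTENANCE_MODE=1 envsubst '${MAINTENANCE_MODE}' < app_eng_staging.yml.template > app.yaml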
There is a feature in GitHub Actions for this, called "GAE environment variable compiler".
Please read here

How to allow App Engine to authenticate and download private Go modules

My project uses Go modules hosted in private GitHub repositories.
Those are listed in my go.mod file, among the public ones.
On my local computer, I have no issue authenticating to the private repositories, by using the proper SSH key or API token in the project’s local git configuration file. The project compiles fine here.
Neither the git configuration nor the .netrc file are taken into account during the deployment (gcloud app deploy) and the build phase in the cloud, so my project compilation fails there with an authentication error for the private modules.
What is the best way to fix that? I would like to avoid a workaround which would consist in including the private modules' source code in the deployed files; I would rather find a way to make the remote go or git use credentials I can provide.
You could try to deploy it directly from a build. According to Accessing private GitHub repositories, you can set up git with the key and domain in one of the build steps.
After that you can specify a step to run the gcloud app deploy command, as suggested in the Quickstart for automating App Engine deployments with Cloud Build.
An example of the cloudbuild.yaml necessary to do this would be:
# Decrypt the file containing the key
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - kms
  - decrypt
  - --ciphertext-file=id_rsa.enc
  - --plaintext-file=/root/.ssh/id_rsa
  - --location=global
  - --keyring=my-keyring
  - --key=github-key
  volumes:
  - name: 'ssh'
    path: /root/.ssh

# Set up git with key and domain.
- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    chmod 600 /root/.ssh/id_rsa
    cat <<EOF >/root/.ssh/config
    Hostname github.com
    IdentityFile /root/.ssh/id_rsa
    EOF
    mv known_hosts /root/.ssh/known_hosts
  volumes:
  - name: 'ssh'
    path: /root/.ssh

# Deploy app
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]

timeout: "16000s"
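For Go modules specifically, go fetches dependencies over HTTPS and via the public module proxy by default, so if a step runs go itself (go build, go mod download, etc.) it usually also needs to be pointed at the SSH credentials set up above. A sketch of such a step; the golang image, organisation name and build command are assumptions, not part of the original answer:
- name: 'golang'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # Skip the public module proxy/checksum DB for the private modules.
    export GOPRIVATE="github.com/my-org/*"
    # Rewrite HTTPS module URLs to SSH so the key set up above is used.
    git config --global url."git@github.com:".insteadOf "https://github.com/"
    go build ./...
  volumes:
  - name: 'ssh'
    path: /root/.ssh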

does appengine cloudbuild.yaml require a custom runtime?

Build errors out with below output (Using a Rails app)
ERROR: (gcloud.app.deploy) There is a cloudbuild.yaml in the current directory, and the runtime field in /workspace/app.yaml is currently set to [runtime: ruby]. To use your cloudbuild.yaml to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [ruby] runtime, please remove the cloudbuild.yaml from this directory.
One way to deal with this is to change the name of the cloudbuild.yaml file to, say, cloud_build.yaml (you can also just move the file), then go to your trigger in Cloud Build and change its build configuration from Autodetected to manually pointing at your Cloud Build configuration file.
See this Github issue for some more information
cloudbuild.yaml should work with App Engine Flexible without the need to use a custom runtime. As detailed in the error message, you cannot have the app.yaml and the cloudbuild.yaml in the same directory if you are deploying to a non-custom runtime. To remedy the situation, follow these steps:
Move the app.yaml and other ruby files into a subdirectory (use your original app.yaml, no need to use custom runtime)
Under your cloudbuild.yaml steps, modify the argument for app deploy by adding a third one specifying the app.yaml path.
Below is an example:
==================FROM:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
timeout: '1600s'
===================TO:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '[SUBDIRECTORY/app.yaml]']
timeout: '1600s'

Project Directory Does Not Contain app.yaml

I'm following the instructions here https://console.developers.google.com/start/appengine
I've downloaded and unzipped the project file from that page - it's the python and flask one. When I get to the instruction dev_appserver.py appending-try-python-flask it gives the error.
google.appengine.tools.devappserver2.errors.AppConfigNotFoundError: "." is a directory but does not contain app.yaml or app.yml
It most certainly does contain an app.yaml file. It looks like this.
application: hello-flask-app-engine
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: .*
  script: main.app

libraries:
- name: jinja2
  version: "2.6"
- name: markupsafe
  version: "0.15"
Unlike this post, Uploading a static project to google app engine, mine doesn't have any skip_files lines to delete.
There is a README.md that mostly follows the Google Dev web page, except that instead of downloading the project from that page it instructs to git clone https://github.com/GoogleCloudPlatform/appengine-python-flask-skeleton.git and that doesn't exactly match the zip file I downloaded.
The requirements.txt file says to run 'pip install -r requirements.txt -t lib/', but Windows 7 says pip is not a recognized command.
Is my app.yaml not correct? Why would it say it doesn't exist?
Simply closing the command prompt window and reopening it made it work. I don't know how or why.
You may be mis-typing appengine-try-python-flask (the real name) as appending-try-python-flask (which is what you show in your question).
If that's not the case, can you please show the effects of dir appengine-try-python-flask and dir appending-try-python-flask from the directory (AKA folder) you're now trying to run dev_appserver.py from?
You are probably using PyCharm; you need to set the 'Working directory' field in your run configuration to the project path, for example C:\Users\user\Desktop\projects\water.
It's supposed to work then.
