How to allow App Engine to authenticate and download private Go modules

My project uses Go modules hosted in private GitHub repositories.
Those are listed in my go.mod file, among the public ones.
On my local computer, I have no issue authenticating to the private repositories, by using the proper SSH key or API token in the project’s local git configuration file. The project compiles fine here.
Neither the git configuration nor the .netrc file is taken into account during deployment (gcloud app deploy) and the build phase in the cloud, so compilation fails there with an authentication error for the private modules.
What is the best way to fix that? I would like to avoid the workaround of including the private modules' source code in the deployed files, and would rather find a way to make the remote go or git use credentials I can provide.

You could try to deploy it directly from a build. According to the Accessing private GitHub repositories documentation, you can set up git with a key and domain in one of the build steps.
After that you can add a step that runs the gcloud app deploy command, as suggested in the Quickstart for automating App Engine deployments with Cloud Build.
An example of the cloudbuild.yaml needed to do this would be:
steps:
# Decrypt the file containing the key
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - kms
  - decrypt
  - --ciphertext-file=id_rsa.enc
  - --plaintext-file=/root/.ssh/id_rsa
  - --location=global
  - --keyring=my-keyring
  - --key=github-key
  volumes:
  - name: 'ssh'
    path: /root/.ssh
# Set up git with key and domain.
- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    chmod 600 /root/.ssh/id_rsa
    cat <<EOF >/root/.ssh/config
    Hostname github.com
    IdentityFile /root/.ssh/id_rsa
    EOF
    mv known_hosts /root/.ssh/known_hosts
  volumes:
  - name: 'ssh'
    path: /root/.ssh
# Deploy app
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
timeout: '16000s'

Related

Accessing Cloud SQL from Cloud Build

I want to configure CI/CD from Cloud Source Repositories that builds my CMS (Directus) when I push to the main branch.
At build time, the project needs to access Cloud SQL, but I get a connection error.
I tried this database configuration with gcloud app deploy and it connects to Cloud SQL and runs fine.
cloudbuild.yaml (it crashes at the second step, so I didn't add the other steps, for simplicity):
steps:
- name: node:16
  entrypoint: npm
  args: ['install']
  dir: cms
- name: node:16
  entrypoint: npm
  args: ['run', 'start']
  dir: cms
  env:
  - 'NODE_ENV=PRODUCTION'
  - 'EXTENSIONS_PATH="./extensions"'
  - 'DB_CLIENT=pg'
  - 'DB_HOST=/cloudsql/XXX:europe-west1:XXX'
  - 'DB_PORT="5432"'
  - 'DB_DATABASE="XXXXX"'
  - 'DB_USER="postgres"'
  - 'DB_PASSWORD="XXXXXX"'
  - 'KEY="XXXXXXXX"'
  - 'SECRET="XXXXXXXXXXXX"'
node-pg (the Node Postgres library) appends /.s.PGSQL.5432 automatically, which is why it is not included in DB_HOST.
How can I solve this error? I have read many answers on Stack Overflow but none of them helped. I also found this article but didn't fully understand how to implement it in my case: https://cloud.google.com/sql/docs/postgres/connect-build.
Without your full Cloud Build yaml it's hard to say for sure, but it looks like you aren't following the steps in the documentation correctly.
Roughly what you should be doing is:
Download the cloud_sql_proxy into your container space.
In a follow-up step, start the cloud_sql_proxy and then (in the same step) run your script, connecting to the proxy via either TCP or a Unix socket.
I don't see your yaml describing the proxy at all.
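A rough sketch of those two steps, adapted to the cloudbuild.yaml above (the proxy version, the sleep, and the paths are illustrative assumptions, not from the original post):
steps:
# Download the Cloud SQL Auth proxy into /workspace, which is shared between steps
- name: node:16
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    wget -q https://storage.googleapis.com/cloudsql-proxy/v1.33.2/cloud_sql_proxy.linux.amd64 -O /workspace/cloud_sql_proxy
    chmod +x /workspace/cloud_sql_proxy
# Start the proxy in the background, then run the app in the same step
- name: node:16
  entrypoint: 'bash'
  dir: cms
  args:
  - '-c'
  - |
    mkdir -p /cloudsql
    /workspace/cloud_sql_proxy -dir=/cloudsql -instances=XXX:europe-west1:XXX &
    sleep 3
    npm run start
The proxy creates the Unix socket /cloudsql/XXX:europe-west1:XXX/.s.PGSQL.5432, which is exactly what the DB_HOST value above points at; the env block from the original second step still applies.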

Gcloud cloud build local component failing with error "Error loading config file: unknown field "availableSecrets" in cloudbuild.Build"

Greetings stackoverflow community! First time asker, long time user.
I am testing out my cloudbuild.yaml file locally using the Cloud Build Local component and Secret Manager, and it is failing on "availableSecrets".
Error message: Error loading config file: unknown field "availableSecrets" in cloudbuild.Build
OS Platform: Windows 10/WSL2/Ubuntu 18.04
cloud-build-local: v0.5.2
Docker engine: v20.10.2
Nodejs version: v14.15.3
NPM version: 6.14.9
gcloud version: 326.0.0
Installed components: [BigQuery Command Line Tool, Cloud Datastore Emulator, Cloud SDK Core Libraries, Cloud Storage Command Line Tool, Google Cloud Build Local Builder, gcloud Beta Commands]
Documentation on Cloud Build build file: https://cloud.google.com/cloud-build/docs/build-config
Documentation to configure secrets with cloud build: https://cloud.google.com/cloud-build/docs/securing-builds/use-secrets
Documentation for cloud build local: https://cloud.google.com/cloud-build/docs/build-debug-locally
Steps performed:
Added secrets to Secret Manager
Enabled API between Cloud Build and Secrets Manager
Added cloudbuild service account as member of each secret password.
Added the IAM role Secret Manager Secret Accessor to the cloudbuild user. I don't know where I got this info from; it is residual at this point from other attempts to use Secret Manager with cloudbuild. I am not sure of the difference between applying access here vs. applying it to the Secret Manager secret.
Command: cloud-build-local --config=cloudbuild.staging.yaml --dryrun=false .
cloudbuild.staging.yaml:
steps:
- name: gcr.io/cloud-builders/npm
  entrypoint: 'npm'
  args: ['install']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
  env:
  - 'DAO_FACTORY=datastore'
  - 'POLL_INTERVAL=15'
  - 'PROMPT=staging>'
  - 'ENVIRONMENT=staging'
  - 'NAMESPACE=staging'
  - 'RESET_DATASTORE=false'
  secretEnv: ['ADMIN_USER', 'SUPER_ADMINS', 'BOT_TOKEN']
availableSecrets:
  secretManager:
  - versionName: projects/{project token}/secrets/SYSTEM_USER/versions/1
    env: 'ADMIN_USER'
  - versionName: projects/{project token}/secrets/SUPER_ADMINS/versions/1
    env: 'SUPER_ADMINS'
  - versionName: projects/{project token}/secrets/BOT_TOKEN/versions/2
    env: 'BOT_TOKEN'
Support for Google Secret Manager in the Google Cloud Build descriptor file is apparently very new and does not appear to be supported by the cloud-build-local component at this time; see the comment from Guillaume about the feature being a week old. When the same descriptor is run in Cloud Build itself, it works fine.
I fixed a similar issue by upgrading the gcloud tool.
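For reference, upgrading would be along these lines (cloud-build-local is the component name shown in the installed components list above):
# Update all installed Cloud SDK components, including the local builder
gcloud components update
# Or (re)install just the local builder
gcloud components install cloud-build-local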

app.yaml env_variables accessible as envs during cloud build steps

As described in the title, some frameworks like Next.js or Nuxt.js require env vars that are defined in app.yaml (and only accessible at runtime) to also be accessible during the build step, mainly npm run build.
The workaround I'm using at the moment is to define the same env vars in 2 different places, app.yaml and the Cloud Build trigger env vars. It is not ideal at all.
Agreed, maintaining the same value in 2 places is the best way to lose consistency and create bugs. Thus, the best solution is to define these variables only once.
Put them where you want:
In a file (.env file for example)
In your trigger configuration
In the app.yaml file
But only once! Then create a step in your Cloud Build job that parses the configuration variable file and extracts the values to put them in the correct location before continuing the build process.
grep, cut and sed work well for this kind of substitution; a sketch follows below.
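For instance, a minimal sketch of such a step, assuming a .env file of KEY=value lines and ##KEY## placeholders in app.yaml (both the file name and the placeholder style are illustrative):
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - -c
  - |
    # Replace every ##KEY## placeholder in app.yaml with the value from .env
    while IFS='=' read -r key value; do
      sed -i "s|##$${key}##|$${value}|g" app.yaml
    done < .env
    cat app.yaml
Note the $$ doubling: Cloud Build reserves a single $ for its own substitutions, so bash variables have to be written as $${var} inside cloudbuild.yaml.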
Edit 1
According to your comment, there are a few things to know. Cloud Build is great!! But Cloud Build is also boring...
The env var management is a perfect example. The short answer is: it's not possible to reuse an env var defined in step N in step N+x.
To solve this, you need to do ugly things, like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - -c
  - |
    MY_ENV_VAR="cool"
    echo $${MY_ENV_VAR} > env-var-save.file
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - -c
  - |
    MY_ENV_VAR=$$(cat env-var-save.file)
    echo $${MY_ENV_VAR}
Because only the /workspace directory is shared between steps, you have to save the env vars to a file and read them back in the step where you need them.
About the app.yaml file, you can do something like this.
app.yaml file content example:
runtime: go114
service: go-serverless-oracle
env_variables:
  ORACLE_IP: "##ORACLE_IP##"
Example of a Cloud Build step that gets the value from a substitutions variable in Cloud Build:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - -c
  - |
    sed -i -e 's/##ORACLE_IP##/$_ORACLE_IP/g' app.yaml
    cat app.yaml
substitutions:
  _ORACLE_IP: 127.0.0.1
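To set the real value at build time, override the substitution from the trigger settings or on the command line, for example (the IP here is just an illustrative value):
gcloud builds submit --substitutions=_ORACLE_IP=10.1.2.3 .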

How to have dynamic version name at run time when deploying google app engine in Travis CI?

I am working on automating the build and deployment of my Google App Engine application in Travis. So far it only allows me to have a static, predefined version name during deployment in .travis.yml.
Is there any way to make it dynamically generated at runtime? For example, in my .travis.yml file below, I have deployments for the production and staging versions of the application, labeled production and qa-staging, and I would like to suffix the version names with a timestamp or anything that would be unique for every successful build and deployment.
language: node_js
node_js:
- "10"
before_install:
- openssl aes-256-cbc -K $encrypted_c423808ed406_key -iv $encrypted_c423808ed406_iv
  -in gae-creds.json.enc -out gae-creds.json -d
- chmod +x test.sh
- cat gae-creds.json
install:
- npm install
script:
- "./test.sh"
deploy:
- provider: gae
  skip_cleanup: true
  keyfile: gae-creds.json
  project: traviscicd
  no_promote: true
  version: qa-staging
  on:
    branch: staging
- provider: gae
  skip_cleanup: true
  keyfile: gae-creds.json
  project: traviscicd
  version: production
  on:
    branch: master
Have you tried https://yaml.org/type/timestamp.html ?
I'm not sure if the context is correct, but it seems like a good and elegant option for your yaml file.
Perhaps you can use go generate to generate a version string that can be included? You would need to run go generate as part of the build process for it to work, though.
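Another option, not mentioned in the answers above but a common pattern, is to switch to Travis's script deploy provider and call gcloud directly, so the version name can be computed at runtime. A hedged sketch for the staging deployment, assuming the Cloud SDK has been installed and authenticated with gae-creds.json earlier in the build:
deploy:
- provider: script
  # App Engine version IDs allow lowercase letters, digits and hyphens,
  # so a timestamp suffix keeps every build unique
  script: gcloud app deploy --project traviscicd --no-promote --version "qa-staging-$(date +%Y%m%d%H%M%S)"
  skip_cleanup: true
  on:
    branch: staging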

does appengine cloudbuild.yaml requires a custom runtime?

The build errors out with the output below (using a Rails app):
ERROR: (gcloud.app.deploy) There is a cloudbuild.yaml in the current directory, and the runtime field in /workspace/app.yaml is currently set to [runtime: ruby]. To use your cloudbuild.yaml to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [ruby] runtime, please remove the cloudbuild.yaml from this directory.
One way to deal with this is to rename the cloudbuild.yaml file to, say, cloud_build.yaml (you can also just move the file), then go to your triggers in Cloud Build and change the configuration from Autodetected to manually choosing your Cloud Build configuration file.
See this GitHub issue for some more information.
cloudbuild.yaml should work with the App Engine flexible environment without the need for a custom runtime. As detailed in the error message, you cannot have app.yaml and cloudbuild.yaml in the same directory if you are deploying to a non-custom runtime. To remedy the situation, follow these steps:
Move app.yaml and the other Ruby files into a subdirectory (use your original app.yaml; no need to switch to a custom runtime).
Under your cloudbuild.yaml steps, modify the arguments for app deploy by adding a third one specifying the app.yaml path.
Below is an example.
FROM:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
timeout: '1600s'
TO:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '[SUBDIRECTORY/app.yaml]']
timeout: '1600s'
