Accessing Cloud SQL from Cloud Build - google-app-engine

I want to configure CI/CD from Cloud Source Repositories so that my CMS (Directus) is built when I push to the main repository.
At build time, the project needs to access Cloud SQL, but I get an error:
I tried this database configuration with gcloud app deploy, and there it connects to Cloud SQL and runs fine.
cloudbuild.yaml (it crashes at the second step, so I left out the other steps for simplicity):
steps:
  - name: node:16
    entrypoint: npm
    args: ['install']
    dir: cms
  - name: node:16
    entrypoint: npm
    args: ['run', 'start']
    dir: cms
    env:
      - 'NODE_ENV=PRODUCTION'
      - 'EXTENSIONS_PATH="./extensions"'
      - 'DB_CLIENT=pg'
      - 'DB_HOST=/cloudsql/XXX:europe-west1:XXX'
      - 'DB_PORT="5432"'
      - 'DB_DATABASE="XXXXX"'
      - 'DB_USER="postgres"'
      - 'DB_PASSWORD="XXXXXX"'
      - 'KEY="XXXXXXXX"'
      - 'SECRET="XXXXXXXXXXXX"'
node-pg (the Node Postgres library) appends /.s.PGSQL.5432 to the socket path automatically, which is why it is not included in DB_HOST.
IAM roles:
How can I solve this error? I have read many answers on Stack Overflow, but none of them helped. I also found this guide (https://cloud.google.com/sql/docs/postgres/connect-build), but I didn't fully understand how to apply it to my case.

Without your full Cloud Build YAML it's hard to say for sure, but it looks like you aren't following the steps in the documentation correctly.
Roughly, what you should be doing is:
Download the cloud_sql_proxy binary into your build workspace.
In a follow-up step, start cloud_sql_proxy and then (in the same step) run your script, connecting to the proxy via either TCP or a Unix socket.
I don't see the proxy described anywhere in your YAML.
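As a rough sketch of that second step (not from the original answer; the instance connection name is a placeholder and the proxy download URL should be checked against the current Cloud SQL docs), the proxy is downloaded and started before the app in the same step:
steps:
  # ... npm install step as before ...
  - name: node:16
    entrypoint: bash
    dir: cms
    args:
      - '-c'
      - |
        # Download the Cloud SQL Auth proxy (assumes wget is available in the node:16 image)
        wget -q https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /workspace/cloud_sql_proxy
        chmod +x /workspace/cloud_sql_proxy
        # Start the proxy in the background, creating Unix sockets under /cloudsql
        mkdir -p /cloudsql
        /workspace/cloud_sql_proxy -dir=/cloudsql -instances=XXX:europe-west1:XXX &
        sleep 5  # give the proxy a moment to start
        npm run start
    env:
      - 'DB_HOST=/cloudsql/XXX:europe-west1:XXX'
      # ... other env vars as before ...
The Cloud Build service account also needs the Cloud SQL Client role for the proxy to be able to connect.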

Related

Release fails when deploying React app to Azure Web App with Azure DevOps

I cannot get a Release pipeline in Azure DevOps to successfully deploy build files from a React app to an Azure App Service.
This is the YAML file for the app:
trigger:
  - main

variables:
  buildConfiguration: 'Release'

stages:
  - stage: Build
    displayName: 'Build my web application'
    jobs:
      - job: 'Build'
        displayName: 'Build job'
        pool:
          vmImage: ubuntu-latest
          demands:
            - npm
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '16.x'
            displayName: 'Install Node.js'
          - script: |
              npm install
              npm run build
            displayName: 'npm install and build'
          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: 'build'
              ArtifactName: 'drop'
              publishLocation: 'Container'
            displayName: 'Build artifact'
As you'd expect, this puts the resultant build files in 'drop'. I can confirm this by inspecting the contents of 'drop' as it is a Published Artifact I can click on in the Summary tab for the Build process.
It's the Release that fails. This is the log for the release:
2022-03-28T11:29:39.9940600Z ##[section]Starting: Azure Web App Deploy: my-app-serv
2022-03-28T11:29:39.9952321Z ==============================================================================
2022-03-28T11:29:39.9952723Z Task : Azure Web App
2022-03-28T11:29:39.9953008Z Description : Deploy an Azure Web App for Linux or Windows
2022-03-28T11:29:39.9953295Z Version : 1.200.0
2022-03-28T11:29:39.9953540Z Author : Microsoft Corporation
2022-03-28T11:29:39.9953833Z Help : https://aka.ms/azurewebapptroubleshooting
2022-03-28T11:29:39.9954210Z ==============================================================================
2022-03-28T11:29:40.3697650Z Got service connection details for Azure App Service:'my-app-serv'
2022-03-28T11:29:42.3999385Z Package deployment using ZIP Deploy initiated.
2022-03-28T11:30:18.0663125Z Updating submodules.
2022-03-28T11:30:18.0670674Z Preparing deployment for commit id 'dc023bbe-d'.
2022-03-28T11:30:18.0672154Z Repository path is /tmp/zipdeploy/extracted
2022-03-28T11:30:18.0673178Z Running oryx build...
2022-03-28T11:30:19.1423345Z Command: oryx build /tmp/zipdeploy/extracted -o /home/site/wwwroot --platform nodejs --platform-version 16 -i /tmp/8da10ae4b1f9200 -p compress_node_modules=tar-gz --log-file /tmp/build-debug.log
2022-03-28T11:30:19.1431972Z Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
2022-03-28T11:30:19.1453191Z You can report issues at https://github.com/Microsoft/Oryx/issues
2022-03-28T11:30:19.1453685Z
2022-03-28T11:30:19.1454256Z Oryx Version: 0.2.20211207.1, Commit: 46633df49cc8fbe9718772a3c894df221273b2af, ReleaseTagName: 20211207.1
2022-03-28T11:30:19.1457307Z
2022-03-28T11:30:19.1463475Z Build Operation ID: |DTbD+7CrQyM=.49dfa157_
2022-03-28T11:30:19.1465355Z Repository Commit : dc023bbe-d46e-46f2-9d49-6e8157706c19
2022-03-28T11:30:19.1465695Z
2022-03-28T11:30:19.1466122Z Detecting platforms...
2022-03-28T11:30:19.1466558Z Could not detect any platform in the source directory.
2022-03-28T11:30:19.1467416Z Error: Couldn't detect a version for the platform 'nodejs' in the repo.
2022-03-28T11:30:19.1469069Z Error: Couldn't detect a version for the platform 'nodejs' in the repo.\n/opt/Kudu/Scripts/starter.sh oryx build /tmp/zipdeploy/extracted -o /home/site/wwwroot --platform nodejs --platform-version 16 -i /tmp/8da10ae4b1f9200 -p compress_node_modules=tar-gz --log-file /tmp/build-debug.log
2022-03-28T11:30:19.1469950Z Deployment Failed.
2022-03-28T11:30:19.1510175Z ##[error]Failed to deploy web package to App Service.
2022-03-28T11:30:19.1525344Z ##[error]To debug further please check Kudu stack trace URL : https://$my-app-serv:***#my-app-serv.scm.azurewebsites.net/api/vfs/LogFiles/kudu/trace
2022-03-28T11:30:19.1527823Z ##[error]Error: Package deployment using ZIP Deploy failed. Refer logs for more details.
2022-03-28T11:30:30.1233247Z Successfully added release annotation to the Application Insight : my-app-serv
2022-03-28T11:30:32.2997996Z Successfully updated deployment History at (CUT)
2022-03-28T11:30:34.0322983Z App Service Application URL: http://my-app-serv.azurewebsites.net
2022-03-28T11:30:34.0390276Z ##[section]Finishing: Azure Web App Deploy: my-app-serv
The Release uses Azure Web App Deploy. App Type is 'Web App on Linux'. 'Package or Folder' is the 'drop' folder. Runtime stack is '16 LTS (NODE|16-lts)' (but it also doesn't work if that's empty).
The drop folder does not contain zipped output. I don't understand why the Release operation is referred to as a Zip Deploy. Am I missing something to avoid the error 'Error: Couldn't detect a version for the platform 'nodejs' in the repo.'?
I'm just expecting the contents of the 'drop' folder to be copied to the App Service and the web app to run so I can test it (and, in the long term, set up automated tests).
I've tried a number of different things with the Build, including zipping the build artifacts, with no luck. I don't think the build is the problem though, as the files in the 'drop' folder are the files I want copied.
So I think it's the Release that's the problem. But that looks so simple.
I start with an Agent and add an Azure Web App deployment task. It seems to successfully pick up the drop folder, as I've tried other values that show an obvious error when they are wrong. The target App Service is Linux, so the Web App Deploy app type is set to 'Web App on Linux'.
I've seen a few different approaches on Stack Overflow, but no answers for this one. Maybe I'm going about this the wrong way, but on the surface it looks right; if I get this working, I can easily manage manual deployments, authorisations, etc. as supported by Releases.
Thanks in advance
One possible workaround you can try is to set SCM_DO_BUILD_DURING_DEPLOYMENT to false in the App Service's application settings.
After setting this to false, you should be able to deploy the app.
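For example (a sketch, not from the original answer; the resource group name is a placeholder), the setting can be applied with the Azure CLI before redeploying:
az webapp config appsettings set --resource-group <my-resource-group> --name my-app-serv --settings SCM_DO_BUILD_DURING_DEPLOYMENT=false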
Also, please refer to these links describing similar issues for more information:
Reference 1, Reference 2

Gcloud cloud build local component failing with error "Error loading config file: unknown field "availableSecrets" in cloudbuild.Build"

Greetings stackoverflow community! First time asker, long time user.
I am testing out my cloudbuild.yaml file locally using Cloud Build Local component and Secret Manager and it is failing on "availableSecrets".
Error message: Error loading config file: unknown field "availableSecrets" in cloudbuild.Build
OS Platform: Windows 10/WSL2/Ubuntu 18.04
cloud-build-local: v0.5.2
Docker engine: v20.10.2
Nodejs version: v14.15.3
NPM version: 6.14.9
gcloud version: 326.0.0
Installed components: [BigQuery Command Line Tool, Cloud Datastore Emulator, Cloud SDK Core Libraries, Cloud Storage Command Line Tool, Google Cloud Build Local Builder, gcloud Beta Commands]
Documentation on Cloud Build build file: https://cloud.google.com/cloud-build/docs/build-config
Documentation to configure secrets with cloud build: https://cloud.google.com/cloud-build/docs/securing-builds/use-secrets
Documentation for cloud build local: https://cloud.google.com/cloud-build/docs/build-debug-locally
Steps performed:
Added secrets to Secret Manager
Enabled the Secret Manager API so Cloud Build can use it.
Added the Cloud Build service account as a member on each secret.
Added the Secret Manager Secret Accessor IAM role to the Cloud Build service account. I am not sure where I picked this up; it is residual at this point from earlier attempts to use Secret Manager with Cloud Build, and I am not sure of the difference between granting access here versus on the individual secret (a rough per-secret example follows this list).
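For reference, a sketch of the per-secret grant mentioned above (not part of the original question; the secret name and project number are placeholders):
gcloud secrets add-iam-policy-binding SYSTEM_USER \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"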
Command: cloud-build-local --config=cloudbuild.staging.yaml --dryrun=false .
cloudbuild.staging.yaml:
steps:
  - name: gcr.io/cloud-builders/npm
    entrypoint: 'npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ["app", "deploy"]
    env:
      - 'DAO_FACTORY=datastore'
      - 'POLL_INTERVAL=15'
      - 'PROMPT=staging>'
      - 'ENVIRONMENT=staging'
      - 'NAMESPACE=staging'
      - 'RESET_DATASTORE=false'
    secretEnv: ['ADMIN_USER', 'SUPER_ADMINS', 'BOT_TOKEN']
availableSecrets:
  secretManager:
    - versionName: projects/{project token}/secrets/SYSTEM_USER/versions/1
      env: 'ADMIN_USER'
    - versionName: projects/{project token}/secrets/SUPER_ADMINS/versions/1
      env: 'SUPER_ADMINS'
    - versionName: projects/{project token}/secrets/BOT_TOKEN/versions/2
      env: 'BOT_TOKEN'
Tag: cloud-build-local. I guess a meaningful tag cannot be created without enough reputation; maybe an esteemed community member will create it, since this may be specific to cloud-build-local only.
Support for Secret Manager in the Cloud Build config file is apparently very new and does not appear to be supported by the cloud-build-local component at this time; see the comment from Guillaume about the feature being only a week old. When the same build config is run in Cloud Build itself, it works fine.
I fixed a similar issue by upgrading the gcloud tool.
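For example (assuming the Cloud SDK was installed with the standalone installer rather than a package manager), the installed components can be upgraded with:
gcloud components update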

How to allow App Engine to authenticate and download private Go modules

My project uses Go modules hosted in private GitHub repositories.
Those are listed in my go.mod file, among the public ones.
On my local computer, I have no issue authenticating to the private repositories, by using the proper SSH key or API token in the project’s local git configuration file. The project compiles fine here.
Neither the git configuration nor the .netrc file is taken into account during deployment (gcloud app deploy) and the build phase in the cloud, so my project fails to compile there with an authentication error on the private modules.
What is the best way to fix this? I would like to avoid the workaround of including the private modules' source code in the deployed files, and would rather find a way to make the remote go or git commands use credentials I can provide.
You could try deploying it from Cloud Build instead. According to the Accessing private GitHub repositories documentation, you can set up git with the key and domain in one of the build steps.
After that, you can add a step that runs the gcloud app deploy command, as suggested in the Quickstart for automating App Engine deployments with Cloud Build.
An example of the cloudbuild.yaml necessary to do this would be:
# Decrypt the file containing the key
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - kms
      - decrypt
      - --ciphertext-file=id_rsa.enc
      - --plaintext-file=/root/.ssh/id_rsa
      - --location=global
      - --keyring=my-keyring
      - --key=github-key
    volumes:
      - name: 'ssh'
        path: /root/.ssh
  # Set up git with key and domain.
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        chmod 600 /root/.ssh/id_rsa
        cat <<EOF >/root/.ssh/config
        Hostname github.com
        IdentityFile /root/.ssh/id_rsa
        EOF
        mv known_hosts /root/.ssh/known_hosts
    volumes:
      - name: 'ssh'
        path: /root/.ssh
  # Deploy app
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
timeout: "16000s"
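Note (an addition, not part of the original answer): for private Go modules specifically, the step that builds or deploys the Go code may also need git rewritten to use SSH for GitHub URLs and GOPRIVATE set so the module proxy is bypassed for those paths. Roughly, with github.com/your-org as a placeholder module prefix:
# Run inside the build step that compiles or deploys the Go app
git config --global url."git@github.com:".insteadOf "https://github.com/"
export GOPRIVATE=github.com/your-org/*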

How to have dynamic version name at run time when deploying google app engine in Travis CI?

I am studying how to automate the build and deployment of my Google App Engine application in Travis; so far it only lets me use a static, predefined version name during deployment in .travis.yml.
Is there any way to generate the version dynamically at build time? For example, in my .travis.yml below I have deployments for the production and staging versions of the application, labeled production and qa-staging, and I would like to suffix the version names with a timestamp or anything else that is unique for every successful build and deployment.
language: node_js
node_js:
  - "10"
before_install:
  - openssl aes-256-cbc -K $encrypted_c423808ed406_key -iv $encrypted_c423808ed406_iv
    -in gae-creds.json.enc -out gae-creds.json -d
  - chmod +x test.sh
  - cat gae-creds.json
install:
  - npm install
script:
  - "./test.sh"
deploy:
  - provider: gae
    skip_cleanup: true
    keyfile: gae-creds.json
    project: traviscicd
    no_promote: true
    version: qa-staging
    on:
      branch: staging
  - provider: gae
    skip_cleanup: true
    keyfile: gae-creds.json
    project: traviscicd
    version: production
    on:
      branch: master
Have you tried https://yaml.org/type/timestamp.html?
I'm not sure whether it fits your context, but it seems like a good and elegant option for your YAML file.
Perhaps you can use go generate to generate a version string that can be included? You need to run go generate as part of the build process for it to work, though.
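As an illustrative sketch (not from either answer above, and it assumes the gae provider expands environment variables in the version field, which is worth verifying), a unique suffix could be generated in before_deploy from Travis's built-in $TRAVIS_BUILD_NUMBER:
before_deploy:
  # Build a unique version label for this run (hypothetical variable name)
  - export DEPLOY_VERSION="qa-staging-$TRAVIS_BUILD_NUMBER"
deploy:
  - provider: gae
    skip_cleanup: true
    keyfile: gae-creds.json
    project: traviscicd
    no_promote: true
    version: "$DEPLOY_VERSION"
    on:
      branch: staging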

Time out error when trying to create Google managed vm

I'm trying to create a managed VM for my Node 4 application using the Google custom runtime.
I created the following Dockerfile:
FROM node:4.2.1
ENV PORT 8080
ADD package.json package.json
RUN npm install
ADD . .
CMD [ "npm", "start" ]
Along with this app.yaml:
# [START runtime]
runtime: custom
vm: true
api_version: 1
# [END runtime]
health_check:
  enable_health_check: false
skip_files:
  - ^(.*/)?#.*#$
  - ^(.*/)?.*~$
  - ^(.*/)?.*\.py[co]$
  - ^(.*/)?.*/RCS/.*$
  - ^(.*/)?\..*$
  - ^(.*/)?.*/node_modules/.*$
  - ^(.*/)?.*\.log$
I deploy the app using gcloud preview command:
gcloud preview app deploy app.yaml --promote
It seems like the Docker image is built correctly, but at the end of the process I get this message:
Copying files to Google Cloud Storage...
Synchronizing files to [gs://staging.my-project-id.appspot.com/].
Updating module [default]...\Deleted [https://www.googleapis.com/compute/v1/projects/my-project-id/zones/us-central1-f/instances/gae-builder-vm-20151030t142257].
Updating module [default]...failed.
ERROR: (gcloud.preview.app.deploy) Error Response: [4] Timed out creating VMs.
I have my deployment working now. I had to troubleshoot the same problem before for another project, but I didn't have the code on hand, so I had to work through the issues again.
The deployment ran smoothly up until the last steps, where updating the module would time out. That made me think it was something to do with the application starting up on the VM and not responding appropriately, so the final hook would time out.
You'll find a lot of information here - https://cloud.google.com/appengine/docs/managed-vms/config . I checked the following things:
logging - ensure that you are writing to the correct log file. See https://cloud.google.com/appengine/docs/managed-vms/custom-runtimes#logging
ensure you have a .dockerignore file and are skipping files in app.yaml so you are not asking the process to copy across unneeded node_modules or log files
turn off health checking if you are not using it, or ensure you have the correct express.js routes configured for it
check that your environment variables are set and match what GAE can use. This was my final step: GAE will let you bind to port 8080 on the VM, and I had to pass a NODE_ENV flag through app.yaml which told the app to use 8080 rather than 3000 (see the sketch after this list)
Lift the resources of the GAE instance in app.yaml. I specified two logical CPUs and made the RAM 2 GB.
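A rough app.yaml sketch covering the last two points (an illustration with example values, not the exact config from this answer):
# app.yaml additions (example values)
resources:
  cpu: 2
  memory_gb: 2
env_variables:
  NODE_ENV: production
  PORT: 8080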
Good luck.
