I am trying to run Google Cloud's Datastore emulator locally.
I ran into the issue where it was complaining that I didn't have a composite index.
(StatusCode.FAILED_PRECONDITION, no matching index found. recommended index is:
- kind: taskgroups
  properties:
  - name: state
  - name: available_tasks
)
I modified the index.yaml file at ~/.config/gcloud/emulators/datastore/WEB-INF/index.yaml to the following:
indexes:
- kind: taskgroups
  properties:
  - name: state
    direction: asc
  - name: available_tasks
    direction: asc
However, I still get the above error after restarting the Datastore emulator. I am running it with the --no-store-on-disk option:
gcloud beta emulators datastore start --no-legacy --no-store-on-disk
What should be done to make sure the changes made to index.yaml are applied?
The index.yaml should be in the application folder, not in the emulator folder.
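For example, a minimal sketch assuming the application lives in a hypothetical ./my-app directory; the file sits with the application code instead of under ~/.config/gcloud/emulators/datastore/WEB-INF/:

# ./my-app/index.yaml (hypothetical path; keep it next to your app code)
indexes:
- kind: taskgroups
  properties:
  - name: state
    direction: asc
  - name: available_tasks
    direction: asc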
I want to configure CI/CD from Cloud Source Repositories that builds my CMS (Directus) when I push to the main repository.
At build time, the project needs to access Cloud SQL, but I get this error:
I tried this database configuration with gcloud app deploy, and it connects to Cloud SQL and runs.
cloudbuild.yaml (it crashes at the second step, so I didn't include the other steps for simplicity):
steps:
- name: node:16
  entrypoint: npm
  args: ['install']
  dir: cms
- name: node:16
  entrypoint: npm
  args: ['run', 'start']
  dir: cms
  env:
  - 'NODE_ENV=PRODUCTION'
  - 'EXTENSIONS_PATH="./extensions"'
  - 'DB_CLIENT=pg'
  - 'DB_HOST=/cloudsql/XXX:europe-west1:XXX'
  - 'DB_PORT="5432"'
  - 'DB_DATABASE="XXXXX"'
  - 'DB_USER="postgres"'
  - 'DB_PASSWORD="XXXXXX"'
  - 'KEY="XXXXXXXX"'
  - 'SECRET="XXXXXXXXXXXX"'
node-pg (the Node.js Postgres library) appends /.s.PGSQL.5432 automatically, which is why it is not included in DB_HOST.
IAM roles:
How can I solve this error? I have read many answers on Stack Overflow, but none of them helped. I found this article (https://cloud.google.com/sql/docs/postgres/connect-build) but didn't fully understand how to apply it to my case.
Without your full Cloud Build YAML it's hard to say for sure, but it looks like you aren't following the steps in the documentation correctly.
Roughly what you should be doing is:
Downloading the cloud_sql_proxy into your container space
In a follow-up step, start the cloud_sql_proxy and then (in the same step) run your script, connecting to the proxy via either TCP or a Unix socket.
I don't see your YAML describing the proxy at all.
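For illustration only, here is a rough sketch of that pattern adapted to your config, following the linked article. The proxy version, the sleep duration, and the socket directory are my assumptions, and the Cloud Build service account also needs the Cloud SQL Client role:

steps:
# Download the v1 Cloud SQL Auth proxy into the shared /workspace volume.
# (The pinned version below is only an example.)
- name: node:16
  entrypoint: bash
  args:
  - '-c'
  - |
    wget -q -O /workspace/cloud_sql_proxy \
      https://storage.googleapis.com/cloudsql-proxy/v1.33.2/cloud_sql_proxy.linux.amd64
    chmod +x /workspace/cloud_sql_proxy
- name: node:16
  entrypoint: npm
  args: ['install']
  dir: cms
# Start the proxy and run the app in the same step so both share one container.
- name: node:16
  entrypoint: bash
  dir: cms
  args:
  - '-c'
  - |
    /workspace/cloud_sql_proxy -dir=/workspace -instances=XXX:europe-west1:XXX &
    sleep 5 && npm run start
  env:
  - 'DB_CLIENT=pg'
  # node-pg appends /.s.PGSQL.5432 to this socket directory
  - 'DB_HOST=/workspace/XXX:europe-west1:XXX'
  - 'DB_PORT=5432'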
I have a project called "RnD" (ID: 1111111) in Google Cloud where all repositories and the Cloud Build triggers live.
Now I want a Cloud Build trigger in the "RnD" project to deploy to App Engine in project "X" (ID: 99999999). I gave the Cloud Build service account of the "RnD" project the following permissions in project "X":
App Engine Admin
Service Account User
Project Browser
In project "X", App Engine is active and configured. On the RnD project it is not, since it isn't used there.
And this is my cloudbuild.yaml file:
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  dir: 'api'
  entrypoint: 'bash'
  args: ['-c', 'gcloud config set project ${_TARGET_PROJECT_NAME} && gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy']
timeout: '1600s'
_TARGET_PROJECT_NAME is a substitution configured on the trigger; its value is the name of project "X".
Running a build returns the following logs.
starting build "xxxxxxxxxx"
FETCHSOURCE
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/rnd/r/my_reponame
* branch xxxxxxxxxxxx -> FETCH_HEAD
HEAD is now at xxxxxx
BUILD
Pulling image: gcr.io/google.com/cloudsdktool/cloud-sdk
Using default tag: latest
latest: Pulling from google.com/cloudsdktool/cloud-sdk
0bc3020d05f1: Already exists
a5178f1195d4: Pulling fs layer
... blah blah
cc6c9aaa8146: Pull complete
Digest: sha256:xxxxxxxxx
Status: Downloaded newer image for gcr.io/google.com/cloudsdktool/cloud-sdk:latest
gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Updated property [core/project].
WARNING: You do not appear to have access to project [X] or it does not exist.
Updated property [app/cloud_build_timeout].
API [appengine.googleapis.com] not enabled on project [1111111].
Would you like to enable and retry (this will take a few minutes)?
(y/N)?
ERROR: (gcloud.app.deploy) User [1111111@cloudbuild.gserviceaccount.com] does not have permission to access apps instance [X] (or it may not exist): App Engine Admin API has not been used in project 1111111 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/appengine.googleapis.com/overview?project= 1111111 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
- '@type': type.googleapis.com/google.rpc.Help
  links:
  - description: Google developers console API activation
    url: https://console.developers.google.com/apis/api/appengine.googleapis.com/overview?project= 1111111
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: googleapis.com
  metadata:
    consumer: projects/1111111
    service: appengine.googleapis.com
  reason: SERVICE_DISABLED
ERROR
ERROR: build step 0 "gcr.io/google.com/cloudsdktool/cloud-sdk" failed: step exited with non-zero status: 1
It looks like I had to enable App Engine (the App Engine Admin API) on the RnD project too, which somehow makes sense the more I think about it.
In addition, I had to give the Cloud Build service account more permissions in project "X". I have not yet figured out the minimum permission set for this service account; it works if I give it Project Owner rights (which I know I shouldn't ;) ).
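For reference, a rough sketch of the commands this involves, with the project IDs/numbers taken from the question; the roles listed are common choices, not a verified minimal set:

# Enable the App Engine Admin API in the build ("RnD") project
gcloud services enable appengine.googleapis.com --project=1111111

# Grant the RnD Cloud Build service account deploy rights on project "X"
gcloud projects add-iam-policy-binding 99999999 \
  --member="serviceAccount:1111111@cloudbuild.gserviceaccount.com" \
  --role="roles/appengine.appAdmin"
gcloud projects add-iam-policy-binding 99999999 \
  --member="serviceAccount:1111111@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"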
Greetings Stack Overflow community! First-time asker, long-time user.
I am testing my cloudbuild.yaml file locally using the Cloud Build Local component together with Secret Manager, and it is failing on "availableSecrets".
Error message: Error loading config file: unknown field "availableSecrets" in cloudbuild.Build
OS Platform: Windows 10/WSL2/Ubuntu 18.04
cloud-build-local: v0.5.2
Docker engine: v20.10.2
Nodejs version: v14.15.3
NPM version: 6.14.9
gcloud version: 326.0.0
Installed components: [BigQuery Command Line Tool, Cloud Datastore Emulator, Cloud SDK Core Libraries, Cloud Storage Command Line Tool, Google Cloud Build Local Builder, gcloud Beta Commands]
Documentation on Cloud Build build file: https://cloud.google.com/cloud-build/docs/build-config
Documentation to configure secrets with cloud build: https://cloud.google.com/cloud-build/docs/securing-builds/use-secrets
Documentation for cloud build local: https://cloud.google.com/cloud-build/docs/build-debug-locally
Steps performed:
Added secrets to Secret Manager
Enabled the Secret Manager API for use with Cloud Build
Added the Cloud Build service account as a member on each secret
Added the IAM role Secret Manager Secret Accessor to the Cloud Build user at the project level. I don't know where I got this from; it is residual at this point from other attempts to use Secret Manager with Cloud Build. I am not sure of the difference between applying access here versus on the individual secret (see the sketch below).
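For what it's worth, the difference is scope: a per-secret binding grants access to that one secret only, while the project-level role covers every secret in the project. A rough sketch of the two variants (PROJECT_ID and PROJECT_NUMBER are placeholders):

# Per-secret: grant the Cloud Build service account access to a single secret
gcloud secrets add-iam-policy-binding BOT_TOKEN \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# Project-level: the same role granted on the project applies to all of its secrets
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"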
Command: cloud-build-local --config=cloudbuild.staging.yaml --dryrun=false .
cloudbuild.staging.yaml:
steps:
- name: gcr.io/cloud-builders/npm
  entrypoint: 'npm'
  args: ['install']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
  env:
  - 'DAO_FACTORY=datastore'
  - 'POLL_INTERVAL=15'
  - 'PROMPT=staging>'
  - 'ENVIRONMENT=staging'
  - 'NAMESPACE=staging'
  - 'RESET_DATASTORE=false'
  secretEnv: ['ADMIN_USER', 'SUPER_ADMINS', 'BOT_TOKEN']
availableSecrets:
  secretManager:
  - versionName: projects/{project token}/secrets/SYSTEM_USER/versions/1
    env: 'ADMIN_USER'
  - versionName: projects/{project token}/secrets/SUPER_ADMINS/versions/1
    env: 'SUPER_ADMINS'
  - versionName: projects/{project token}/secrets/BOT_TOKEN/versions/2
    env: 'BOT_TOKEN'
Tag: cloud-build-local. I guess without reputation a meaningful tag cannot be created; maybe an esteemed community member will create it, as this may be specific to cloud-build-local only.
Support for Google Secret Manager in the Cloud Build config file is apparently very new and does not appear to be supported by the cloud-build-local component at this time; see the comment from Guillaume about the feature being only a week old. When the build config is run in Cloud Build itself, it works fine.
I fixed a similar issue by upgrading the gcloud tool.
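If it helps, upgrading here just means updating the SDK and its local builder component (component ID as listed in the installed components above):

gcloud components update
gcloud components install cloud-build-local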
I'm trying to set up Ververica Platform Community Edition to use NFS for artifact storage with the following values.yaml:
vvp:
  blobStorage:
    baseUri: file:///var/nfs/export

volumes:
- name: nfs-volume
  nfs:
    server: "host.docker.internal"
    path: "/MOUNT_POINT"

volumeMounts:
- name: nfs-volume
  mountPath: /var/nfs
When deploying the Flink job, I use the jar URI below:
jarUri: file:///var/nfs/artifacts/namespaces/default/flink-job.jar
I am able to see my artifacts in the Ververica UI; however, when I try to deploy the Flink job it fails with the following exception:
Error: No suitable artifact fetcher found for scheme file
Full error:
Some pod containers have been restarted unexpectedly. Init containers reported the following reasons: [Error: No suitable artifact fetcher found for scheme file]. Please check the Kubernetes pod logs if your application does not reach its desired state.
If I remove the "file://" scheme from the jarUri, leaving just the following, the job containers keep restarting without giving an error.
jarUri: /var/nfs/artifacts/namespaces/default/flink-job.jar
As a side note, I also added the following to the deployment.yaml. If I set the artifact to pull from an HTTP endpoint, checkpoints are saved correctly on the NFS, so it seems the only problem is loading artifacts from the NFS using the file:// scheme.
kubernetes:
  pods:
    volumeMounts:
    - name: my-volume
      volume:
        name: my-volume
        nfs:
          path: /MOUNT_POINT
          server: host.docker.internal
      volumeMount:
        mountPath: /var/nfs
        name: my-volume
Ververica Platform does not currently support NFS drives for Universal Blob Storage.
However, you can emulate this behaviour on version >= 2.3.2 by mounting the NFS drive into your Flink pods, as you did in the deployment spec for checkpoints. This works because 2.3.2 added support for self-contained artifacts and for fetching local files. You can see more information in the documentation here.
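Putting the two pieces together, a rough sketch only, with the field layout copied from the snippets in the question (exact keys may differ between Ververica versions):

spec:
  template:
    spec:
      artifact:
        kind: JAR
        # Path as seen inside the pod; the NFS mount below provides it.
        jarUri: file:///var/nfs/artifacts/namespaces/default/flink-job.jar
      kubernetes:
        pods:
          volumeMounts:
          - name: my-volume
            volume:
              name: my-volume
              nfs:
                path: /MOUNT_POINT
                server: host.docker.internal
            volumeMount:
              mountPath: /var/nfs
              name: my-volume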
I am attempting to setup a CI Pipeline using Google Cloud Build.
I am attempting to deploy a MeteorJS app which has a lengthy build time; the default build timeout for GCB is 10 minutes, and it was recommended here that I increase the timeout.
I have setup my cloudbuild.yaml file with the timeout option increased to 20 minutes:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
timeout: 1200s
I have a Trigger set up in GCB connected to a Bitbucket repo, and when I push a change and the Trigger fires, I get two new builds: one coming from Bitbucket and one whose source is Google Cloud Storage.
Once 10 minutes of build time has elapsed, the build from Cloud Storage will time out, which causes the Bitbucket build to fail as well with Error Response: [4] DEADLINE_EXCEEDED.
Occasionally, for whatever reason, the Cloud Storage build will finish in under 10 minutes which will allow the Bitbucket build to finish successfully and deploy.
If I attempt to cancel/stop the Cloud Storage build, it will also stop the Bitbucket build.
The screenshot below shows 2 attempts of the exact same build with differing results.
I do not understand where this second Cloud Storage Build is coming from, but it does not seem to be affected by the settings in my yaml file or my global GCP settings.
I have attempted to run the following commands from the gcloud CLI:
gcloud config set app/cloud_build_timeout 1200
gcloud config set builds/timeout 1200
gcloud config set container/build-timeout 1200
I have also attempted to use a high CPU build machine to speed up the process but it did not seem to have any effect.
Any insight would be greatly appreciated - I feel that I have exhausted every possible combination of Google Search keywords I can think up!
This timeout error comes from the App Engine deployment itself, which has a 10-minute timeout by default.
You will need to update the app/cloud_build_timeout property inside the build step itself, like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'gcloud config set app/cloud_build_timeout 1200 && gcloud app deploy']
timeout: 1200s
Update
Actually, a simpler solution:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
  timeout: 1200s
timeout: 1200s