Here is my last step, which is failing. From the logs, it seems to be trying to build the service rather than using the supplied argument --image-url=blah. I can see all arguments are passed correctly. Any thoughts on what the reason could be?
- name: "gcr.io/cloud-builders/gcloud"
args:
[
"app",
"deploy",
"cmd/service-api/appengconf/dev/service-api-dev-app.yaml",
"--image-url=gcr.io/${PROJECT_ID}/service-api:${TAG_NAME}",
]
After a suggestion I added additional steps to replace images; however, it is still failing.
The actual error is exit code -1.
The problem might be that the image is unavailable to App Engine... in order to deploy the image to App Engine, it will need to be present in the registry first. Does your Cloud Build config have a push step between the build step and the deploy step? If not, you'll probably need to add one (you can't rely on the "images" field for this, since that pushes the images after all other steps are complete).
So, something like:
steps:
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "<foo>", "."]
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "<foo>"]
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", <etc>]
After digging and tinkering... I found that for go111 this is supported via the Flex environment only. Since I really do not need Flex at this point, and that feature is not critical, I rolled back to standard.
When deploying your app using a pre-built image, you should only provide the image to the gcloud app deploy command, not the app.yaml file, as shown here. Your Cloud Build step should instead be:
- name: "gcr.io/cloud-builders/gcloud"
args:
[
"app",
"deploy",
"--image-url=gcr.io/${PROJECT_ID}/service-api:${TAG_NAME}",
]
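Putting the pieces of this thread together, a complete cloudbuild.yaml could look like the sketch below. This is a hedged example, assuming the Dockerfile sits at the repository root and that the service-api image name and ${TAG_NAME} substitution match your setup:

steps:
  # Build the image inside the Cloud Build workspace
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/${PROJECT_ID}/service-api:${TAG_NAME}", "."]
  # Push it so App Engine can actually pull it during the deploy step
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/${PROJECT_ID}/service-api:${TAG_NAME}"]
  # Deploy the pre-built image
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "--image-url=gcr.io/${PROJECT_ID}/service-api:${TAG_NAME}"]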
Related
Usually we have to execute gatsby build or gatsby develop in order to reflect changes made to the content on the Contentful site in our Gatsby site.
That can never be an acceptable solution, especially when there are multiple content writers adding or modifying content using the same Contentful account.
How can we automate the build process so that every time someone publishes, unpublishes, or deletes content on the Contentful site, a Gatsby build happens automatically and the content on the Gatsby site gets updated?
What you are looking for is called a webhook, which is essentially what you described: an action (create, update, or delete) that triggers another action (e.g. gatsby build) via an exposed endpoint.
Its implementation will strictly rely on the hosting platform, but since you mention Contentful, they expose a bunch of options for different platforms (Heroku, Netlify, CircleCI, etc.).
More documentation of Contentful webhooks can be found at https://www.contentful.com/developers/docs/concepts/webhooks/
How to automate the build process so that every time someone publishes/unpublishes content on the Contentful site, the Gatsby build will automatically happen and content on the Gatsby site will automatically get updated?
Ans:
Create a CI pipeline for your Gatsby codebase on GitLab and establish a connection with a webhook on the Contentful site.
-> How it works behind the scenes:
1. A webhook gets executed by calling an endpoint in GitLab, which then
2. triggers a GitLab CI pipeline, and the
3. GitLab CI pipeline builds our static website with Gatsby, and
4. the Gatsby application gets the updated content from the Contentful Delivery API.
-> Steps to achieve this purpose:
Step 1: Set up a trigger URL in the GitLab CI pipeline
i. Go to your GitLab repo [gitlab.com/<OrganizationName>/<RepositoryName>/tree/<BranchName>]
ii. Add the .gitlab-ci.yml file
.gitlab-ci.yml
image: node:latest
cache:
  paths:
    - .cache/
    - ./node_modules
    - public/
pages:
  script:
    - npm install
    - ./node_modules/.bin/gatsby build --prefix-paths
  artifacts:
    paths:
      - public
  only:
    - gitlab-ci
Step 2: Set up the webhook URL
i. On the same GitLab page, click on "Settings" (link at the bottom of the left menu) -> CI/CD -> Pipeline Triggers -> Expand -> enter a description, say 'Ips Gatsby Build' -> Add Trigger
ii. Now note the "Token" and the URL [under "Use webhook", e.g. https://gitlab.com/api/v4/projects/20273592/<BranchName>/trigger/pipeline?token=<Token>]
Step 3: Set up the GitLab webhook in Contentful
i. Go to the Contentful CMS site -> Settings -> Webhooks -> Add Webhook -> insert the details as below
a. Name: Ips GitLab CI Trigger
b. URL: POST, with the trigger URL from Step 2
c. Triggers: Select specific triggering events
   Events: Publish, Unpublish
d. Filters: EnvId (sys.environment.sys.id) equals master
e. Content Type: application/x-www-form-urlencoded; charset=utf-8
f. Payload: Use default payload
Now click on Save
Step 4: Test the "Build Automation" workflow
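To smoke-test the GitLab side in isolation before involving Contentful, you can call the trigger endpoint by hand (a minimal sketch, assuming <ProjectId> and <Token> are the values noted in Step 2 and gitlab-ci is the branch from the only: rule):

# Manually fire the same pipeline trigger the webhook would call
curl --request POST \
  --form "token=<Token>" \
  --form "ref=gitlab-ci" \
  "https://gitlab.com/api/v4/projects/<ProjectId>/trigger/pipeline"

If the pipeline appears under CI/CD -> Pipelines, the trigger URL works and any remaining issue is on the Contentful side.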
How to deploy a docker image from Artifactory on Google App Engine?
What I am trying to achieve is deploying my Docker image that is stored on a JFrog Artifactory to Google App Engine. All the examples I find push the image to Artifact Registry, which is redundant, as I only want to store the artifact on JFrog. Has anyone tried to do it before?
Here is the furthest I could go using Cloud Build:
- name: 'gcr.io/cloud-builders/docker'
  dir: /workspace/app
  args: [ 'pull', 'myjfrogurl.jfrog.io/$PROJECT_ID:$BRANCH_NAME' ]
Then I use Terraform later to deploy:
resource "google_app_engine_flexible_app_version" "app_deploy" {
version_id = "v1"
service = var.service_name
runtime = "nodejs"
...
deployment {
container {
# Here is the problem as it needs to be a google URI
image = "myjfrogurl.jfrog.io/${var.project_id}:${var.branch_name}"
}
}
Maybe there is a way of doing that; it doesn't need to be via Terraform or Cloud Build.
Edit
With the following code it is possible to pull the image from JFrog and push it to Container Registry, where it will be visible to App Engine or Cloud Run, though as the answer says, it is not possible to keep the image stored in only one place:
# Pull from the external repository
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'pull', 'myjfrogurl.jfrog.io/$PROJECT_ID:$BRANCH_NAME' ]
# Do a fast build using --cache-from (the pulled image is the only one present locally)
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build',
          '-t', 'gcr.io/$PROJECT_ID/appName:$BRANCH_NAME',
          '--cache-from', 'myjfrogurl.jfrog.io/$PROJECT_ID:$BRANCH_NAME',
          '.' ]
# Tag the pulled image for Container Registry (docker tag takes a source and a target)
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'tag',
          'myjfrogurl.jfrog.io/$PROJECT_ID:$BRANCH_NAME',
          'gcr.io/$PROJECT_ID/appName:$BRANCH_NAME' ]
# Push to the Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'push', 'gcr.io/$PROJECT_ID/appName:$BRANCH_NAME' ]
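With the image mirrored into Container Registry, the Terraform deployment block from the question can point at the GCR copy instead. A sketch, assuming appName matches the tag used in the Cloud Build steps above:

deployment {
  container {
    # A Google-hosted URI that App Engine can pull
    image = "gcr.io/${var.project_id}/appName:${var.branch_name}"
  }
}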
Posting Guillaume Blaquiere's comment as a Community Wiki answer for better visibility for the community:
This is not possible for App Engine, and there is the same limitation with Cloud Run.
To deploy an image to App Engine, you need to push the image to the Google Container Registry. Under the hood, Container Registry is a GCP bucket called eu.artifacts.projectId.appspot.com or artifacts.projectId.appspot.com (according to the region - more). Artifact Registry is a service on top of Container Registry that helps with managing the images.
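You can inspect that backing bucket directly with gsutil (a sketch; substitute your own project ID, and use the eu. prefix only if your registry is in the European region):

# List the objects Container Registry stores for this project
gsutil ls gs://artifacts.<projectId>.appspot.com/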
I'm trying to provide a .well-known folder under my Google App Engine application, using the standard environment and the python27 runtime, with a web-app-origin-association.json file to try the "Progressive Web Apps as URL Handlers" origin trial from Chrome.
I've added the following code to my app.yaml file under handlers:
# .well-known folder
- url: /.well-known/(.*)
  static_files: well-known/\1
  upload: well-known/.*
The folder in my project is named well-known without a dot, because I've read that there are problems when using a folder name with a dot at the start.
But the URL https://example.com/.well-known/web-app-origin-association.json isn't available; instead, it only works without the dot.
What do I have to change in order to make it work under https://example.com/.well-known/web-app-origin-association.json?
You can use the workaround documented at "Make skip_files rule explicit and tweak to allow .well-known/* to upload":
^(.*/)?\.(?!well-known(?:/|$)).*$
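In app.yaml, that pattern replaces the default dot-file rule, so a directory actually named .well-known gets uploaded while other dot-files stay skipped. A sketch, with the remaining default skip_files entries omitted:

skip_files:
# Skip dot-files and dot-directories, except .well-known
- ^(.*/)?\.(?!well-known(?:/|$)).*$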
You may want to migrate to Python 3 as described in the guide:
Starting on January 1, 2020, the Python community will no longer
update, fix bugs, or patch security issues for Python 2.7. We
recommend that you update apps that are still running in the Python 2
runtime of the App Engine standard environment to the Python 3 runtime
as soon as possible.
The best way I found is to just do it like this:
- url: /\.well-known
  static_dir: .well-known
  secure: always
and use the python39 runtime.
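Putting that together, a minimal app.yaml for this approach might look like the sketch below (only the parts relevant to serving the folder are shown):

runtime: python39
handlers:
# Serve the .well-known directory as static files over HTTPS
- url: /\.well-known
  static_dir: .well-known
  secure: always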
I am trying to deploy an app with
gcloud beta app deploy
I am confronted with
ERROR: (gcloud.beta.app.deploy) Error Response: [13] App Engine Flex failed to configure resources.
Has anyone seen this error?
I've run into the same error; my problem was that I left out this part:
network:
  name: your-network-name
Maybe you've got another issue, but I'd recommend tracing back the changes in your app.yaml.
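For reference, in a Flex app.yaml the block sits at the top level, roughly like this (a sketch; the runtime and network name are placeholders):

runtime: nodejs
env: flex
# The missing block that caused the error in my case
network:
  name: your-network-name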
Also, you could inspect the used app.yaml via the Google Console -> App Engine -> Versions -> last column "Config" -> View.
Hope this helps!
This may have been correlated with a recent release related to the "enable_health_checks:false" parameter for applications where split_health_checks are enabled.
Could you try to deploy with "enable_health_checks: true" or "split_health_checks = false" [1]?
[1] https://cloud.google.com/appengine/docs/flexible/nodejs/configuring-your-app-with-app-yaml#health_checks
Gonna need a little more information:
What version of the SDK are you running? "Beta" on one version is different from "beta" on another...want to make sure we're talking about the same tools.
Have you ever been able to deploy this app before? With or without beta?
What's your app.yaml file look like? Please copy/paste the code into your question (removing sensitive information, obviously).
If you add this info to your question, we should be able to help troubleshoot further.
There's a similar question that was recently answered on Stack Overflow here: Google Cloud Storage Client not working on dev appserver
The solution was either to upgrade the SDK to 1.8.8 or to use the previous revision of the GCS client library, which didn't have the bug.
I'm currently using 1.8.8 and have tried downloading multiple revisions, but /_ah/gcs doesn't load for me. After using up a significant number of my backend instances trying to understand how GCS and App Engine work together, it'd be great if I could just test it on my local server instead!
When I visit localhost:port/_ah/gcs I get a 404 not found error.
Just a heads up, to install the library all I did was drag and drop the code into my app folder. I'm wondering if maybe I skipped a setup step? I wasn't able to find the answer in the documentation!
thanks!!
Note
To clarify, this is my first week using GCS, so it is my first time trying to use the dev server to host it.
I was able to find the google cloud storage files I wrote to a bucket locally at:
localhost:port/_ah/gcs/bucket_name/file_suffix
Where port is by default 8080, and the file was written to: /bucket_name/file_suffix
For those trying to understand the full process of setting up a simple python GAE app and testing local writes to google cloud storage:
1. Follow the google app engine "quickstart":
https://cloud.google.com/appengine/docs/standard/python/quickstart
2. Run a local dev server with:
dev_appserver.py app.yaml
3. If using python, follow "App Engine and Google Cloud Storage Sample":
https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/app-engine-cloud-storage-sample
If you run into "ImportError: No module named cloudstorage" you need to create a file named appengine_config.py
touch appengine_config.py
and add to it:
from google.appengine.ext import vendor
vendor.add('lib')
GAE runs this script automatically when starting your local dev server with dev_appserver.py app.yaml; it is necessary for GAE to find the cloudstorage library in your lib/ folder.
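If drag-and-drop copying of the library feels fragile, an alternative is to vendor it with pip. A sketch, assuming GoogleAppEngineCloudStorageClient (the PyPI name for this client library):

# Vendor the GCS client library into the lib/ folder that vendor.add('lib') points to
mkdir -p lib
pip install GoogleAppEngineCloudStorageClient -t lib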
4. "Writing a file to cloud storage" from the same tutorial:
import cloudstorage  # at module level; resolved from lib/ via appengine_config.py

def create_file(self, filename):
    """Create a file."""
    self.response.write('Creating file {}\n'.format(filename))
    # The retry_params specified in the open call will override the default
    # retry params for this particular file handle.
    write_retry_params = cloudstorage.RetryParams(backoff_factor=1.1)
    with cloudstorage.open(
            filename, 'w', content_type='text/plain', options={
                'x-goog-meta-foo': 'foo', 'x-goog-meta-bar': 'bar'},
            retry_params=write_retry_params) as cloudstorage_file:
        cloudstorage_file.write('abcde\n')
        cloudstorage_file.write('f'*1024*4 + '\n')
    self.tmp_filenames_to_clean_up.append(filename)
Where filename is /bucket_name/file_suffix
5. After calling create_file via a route in your WSGI app, your file will be available at:
localhost:port/_ah/gcs/bucket_name/file_suffix
Where port is by default 8080, and the file was written to: /bucket_name/file_suffix
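As a quick check from the command line, you can fetch the object straight from the local simulator (bucket and file names match the ones above):

# Fetch the locally written object from the dev_appserver GCS simulator
curl http://localhost:8080/_ah/gcs/bucket_name/file_suffix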
Postscript
Unfortunately, I did not find either 3) or 4) in their docs, so I hope this helps someone get set up more easily in the future.
To access gcs objects on dev_appserver, you must specify the bucket & object name, i.e. /_ah/gcs/[bucket]/[object].
The storage simulator for the local server is working in later versions of the SDK. For Java, one may choose to follow a dedicated tutorial: “App Engine and Google Cloud Storage Sample”.