I currently have a small Google App Engine project written in PHP. On a traditional web server, I would simply invoke this particular code via the command line (e.g., php whatever.php). I'd like to set this up with cron.yaml to just run every hour or so, without invoking an HTTP request if possible. How would you go about doing this?
App Engine's cron service works by issuing HTTP requests to a handler in your app, so you can't avoid HTTP entirely, but you can restrict the endpoint so that only cron can reach it. You can do this by adding a handler URL that points to your script. In your app.yaml it would look like this:
handlers:
- url: /mycron
script: cron.php
login: admin
The login: admin parameter restricts the endpoint to administrator accounts; requests issued by the App Engine cron service are treated as coming from an admin, so the endpoint won't be publicly accessible.
More information here: https://cloud.google.com/appengine/docs/standard/php/config/cron#securing_urls_for_cron
To schedule tasks (known as cron jobs) using the cron.yaml file, you can use the following structure inside the file:
cron:
- description: "running my PHP code"
url: /your-app-url
target: your-service
schedule: every 60 minutes
You can adjust the schedule and the other cron fields to your needs by following the App Engine cron syntax.
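For reference, a few schedule values that the documented cron format accepts:
schedule: every 12 hours
schedule: every 5 minutes from 10:00 to 14:00
schedule: every day 00:00
schedule: every monday 09:00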
Place the cron.yaml file in the same directory as your app.yaml file (your application's root directory) and, before deploying, test it by going to http://localhost:8080/cron. If it works, you can deploy the application with the cron job by running this command:
gcloud app deploy cron.yaml
You can find additional information about cron jobs, such as how to retry failing cron jobs and how to secure or delete them, by following this link.
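If you'd rather deploy the application and the cron definition in one step, both files can be passed to the same command:
gcloud app deploy app.yaml cron.yaml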
I am deploying a Node.js server to Google App Engine from a Bitbucket Pipelines environment, and the last command in the script is: gcloud -q app deploy app.yaml --no-promote --verbosity=debug
The logs show that the service is deployed successfully but the script is not terminating, this is the last part of the log:
DEBUG: Reading GCS logfile: 206 (read 10 bytes)
PUSH DONE
DEBUG: Operation [...] complete. Result: {...}
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
--------------------------------------------------------------------------------
DEBUG: Converted YAML to JSON: "{...}"
DEBUG: Operation [...] not complete. Waiting to retry.
Updating service [default] (this may take several minutes)...
.DEBUG: Operation [...] not complete. Waiting to retry.
......DEBUG: Operation [...] not complete. Waiting to retry.
.......DEBUG: Operation [...] not complete. Waiting to retry.
......DEBUG: Operation [...] not complete. Waiting to retry.
.......DEBUG: Operation [...] not complete. Waiting to retry.
.......DEBUG: Operation [...] not complete. Waiting to retry.
I tried to add readiness_check and liveness_check to app.yml but it didn't change the behaviour.
readiness_check:
path: "/api/public/logout"
check_interval_sec: 5
timeout_sec: 4
failure_threshold: 2
success_threshold: 2
app_start_timeout_sec: 300
liveness_check:
path: "/api/public/logout"
check_interval_sec: 30
timeout_sec: 4
failure_threshold: 2
success_threshold: 2
The main unknown here is: what criteria does gcloud app deploy use to determine its termination condition?
Also, is there any workaround for this problem?
Update
The problem happens also when running the gcloud app deploy command from local environment (my laptop).
The problem does NOT happen when removing the --no-promote flag.
The gcloud app deploy command expects a well-formed and valid app.yml file; this is what determines its termination condition.
As you confirmed the deployment worked without the --no-promote flag, it could mean that something in the configuration expects the application to be already deployed and running, thus preventing the script from completing.
Another possible cause would be that the Google Cloud SDK version specified in bitbucket-pipelines.yml is an older one. Make sure you work with the latest. This consideration applies extensively to all dependencies in package.json, which might be conflicting with one another, especially when using older versions of Node.js.
This guide can help with building a sound configuration for Bitbucket-based deployments; although the example given is in Python, it can serve as a template for a Node.js pipeline.
N.B. in this solution, the Google Cloud SDK version is an older one (127.0.0), which will make the deployment fail, so it should be replaced with the latest (228.0.0 or higher). The guide also omits another required API activation: the Cloud Build API. I've notified the team to amend the solution.
I've tested several scenarios with a simple Node.js server, and could not reproduce the issue. Check my Github repository for the code.
For further help on this topic, please provide more hints, such as the content of the app.yml, bitbucket-pipelines.yml, and package.json files, as well as a description of the state of App Engine (services, versions).
In order to deploy the test repository to App Engine from Bitbucket, make sure the following is done on the project (a gcloud sketch of these steps follows the list):
Enable APIs:
App Engine Admin
Cloud Build
Create a Service Account with the following roles, and generate a service account key for it:
App Engine: Admin
Cloud Build: Editor
Storage: Object Admin
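A minimal sketch of those setup steps with the gcloud CLI; the project ID my-project and the account name bitbucket-deployer are placeholders chosen for illustration:
# Enable the required APIs
gcloud services enable appengine.googleapis.com cloudbuild.googleapis.com
# Create the deployer service account
gcloud iam service-accounts create bitbucket-deployer
# Grant the roles listed above
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:bitbucket-deployer@my-project.iam.gserviceaccount.com" \
  --role="roles/appengine.appAdmin"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:bitbucket-deployer@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudbuild.builds.editor"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:bitbucket-deployer@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
# Generate the key to store in the Bitbucket environment
gcloud iam service-accounts keys create key.json \
  --iam-account=bitbucket-deployer@my-project.iam.gserviceaccount.com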
The final step of my CI/CD pipeline is the deployment using gcloud app deploy, but I can't commit the app.yaml with my environment variables, so how can I deploy using Cloud Build while passing the env variables to the app.yaml?
Here is my cloudbuild.yaml
steps:
- name: "gcr.io/cloud-builders/gcloud"
args: ["app", "deploy"]
timeout: "1800s"
One easy option is to have your environment variables listed in a file (or even in the app.yaml file itself) stored in Cloud Storage. You can then use the gcr.io/cloud-builders/gsutil builder to retrieve this file in a build step like this:
steps:
- name: gcr.io/cloud-builders/gsutil
args: ['cp', 'gs://mybucket/env_vars.txt', 'env_vars.txt']
This will copy the file to the /workspace directory. The next build step can then populate the app.yaml file with the environment variables (or even just copy the retrieved app.yaml file to the correct path). The next and final step would be the one you mentioned, to deploy the app.
Note that, in the Cloud Build environment, commands run with the credentials of the project's builder service account. You'll need to grant that service account access to the file on Cloud Storage.
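Putting the pieces together, the full cloudbuild.yaml could look like the following sketch; it assumes gs://mybucket/app.yaml is a complete app.yaml that already contains your env_variables section:
steps:
# Fetch the app.yaml (including env_variables) from Cloud Storage into /workspace
- name: "gcr.io/cloud-builders/gsutil"
  args: ["cp", "gs://mybucket/app.yaml", "app.yaml"]
# Deploy using the retrieved file
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "1800s"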
I use a Node.js managed VM on Google App Engine. After I delete the Google Compute Engine instance at console.cloud.google.com, I see an instance created automatically in "Operations". (This has happened before; I used to delete instances at appengine.google.com, which has now moved to the "console".) How did this happen? And how can I delete it?
When an instance cannot be deleted, it is because deletion protection was either checked when the instance was created, or enabled afterwards from gcloud with the following command:
$ gcloud compute instances update <INSTANCE_PATH> --deletion-protection
Sample of Instance path: projects/your-project-265315/zones/us-central1-a/instances/your-instance-v3
Solution:
Activate Google Cloud Shell.
Precondition:
Make sure the user has permission to access the machine (regardless of the SSH connection to the instance), to avoid 403: Insufficient Permission:
$ gcloud auth login
If deletion of the instance is protected, remove the protection:
$ gcloud compute instances update <INSTANCE_PATH> --no-deletion-protection
Then delete the instance, making sure to select the correct zone:
$ gcloud compute instances delete <INSTANCE_PATH>
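If you are unsure whether the flag is set, you can also check it first (instance name and zone taken from the sample path above):
$ gcloud compute instances describe your-instance-v3 --zone=us-central1-a --format="value(deletionProtection)"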
GL
References: Preventing accidental VM deletion; gcloud auth login.
You have to delete the version deployed for the Flexible VM. Since it's the only version, you first have to deploy another one, for a standard VM.
The simplest solution would be to deploy an empty version, without any code, just one static file. To do that, create the following app.yaml:
module: default
runtime: python27
api_version: '1.0'
threadsafe: true
handlers:
- url: /
static_files: index.html
upload: index.html
resources:
cpu: 0.1
memory_gb: 0.5
disk_size_gb: 10
Put an empty index.html in the same directory, and deploy it using:
gcloud preview app deploy app.yaml
After this, you'll be able to route all traffic to this dummy version and then delete the version previously deployed for the Flexible VM.
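For reference, with the current gcloud CLI (the preview command group has since been removed), routing traffic and removing the old version could look like the following sketch; the version IDs here are placeholders:
# send all traffic to the dummy version
gcloud app services set-traffic default --splits dummy-version=1
# then delete the Flexible VM version
gcloud app versions delete old-flex-version --service=default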
You need to delete the module from your app description. Otherwise App Engine will keep spinning up new instances in accordance with the scaling settings in your module description.
I have a basic appengine project with multiple modules and a dispatch.yaml:
my-project/boxes/app.yaml (default module)
my-project/users/app.yaml (users module)
my-project/dispatch.yaml
I'm trying to configure a single hourly cronjob with the following definition:
cron:
- description: hourly box purging
url: /api/boxes.purge
schedule: every 1 hours
target: default
I've tried adding it to the module it concerns, so I put the above definition in the file 'my-project/boxes/cron.yaml' and ran appcfg.py cron_info boxes/. My terminal seems to indicate all went well:
hourly box purging:
URL: /api/boxes.purge
Schedule: every 1 hours (UTC)
2015-04-30 10:08:00Z, 0:59:55 from now
2015-04-30 11:08:00Z, 1:59:55 from now
2015-04-30 12:08:00Z, 2:59:55 from now
2015-04-30 13:08:00Z, 3:59:55 from now
2015-04-30 14:08:00Z, 4:59:55 from now
Yet the App Engine Developer Console fails to reflect this, and the cron jobs are not run. The job does show on the local development panel.
Putting the definition in the root of the project (beside dispatch.yaml) yields the same results. Other things I've tried (in vain): redeploying all code, appcfg.py update_dispatch, waiting a while before refreshing the developer console.
Hopefully someone is able to help me find the obvious mistake, or confirm that there is some bug.
In the Configuration section of the doc it's stated:
Optional application-level configuration files (dispatch.yaml, cron.yaml, index.yaml, and queue.yaml) are included in the top level app directory.
I agree, the paragraph's context appears to leave room for interpretation (typically...). But the quoted text also indicates that these files are considered app-level configs, so I'd keep them at the top level.
About the update: I noticed, for example, that the index.yaml file was NOT uploaded with the rest of the multi-module app at my first deployment; I had to explicitly use appcfg.py update_indexes. This did not happen with a single-module app. Maybe appcfg.py update_cron also needs to be run explicitly?
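If you want to test that theory, a sketch of the explicit upload, assuming your-app-id is your application ID and that you run it from the directory containing cron.yaml:
appcfg.py update_cron -A your-app-id .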
http://code.google.com/appengine/docs/python/tools/uploadingdata.html
Here it is explained how to download data from a GAE app.
The first thing to do is set up remote_api.
The bulk loader tool communicates with your application running on App Engine using remote_api, a request handler included with the App Engine runtime environment that allows remote applications with the proper credentials to access the datastore remotely. There are two ways to install remote_api: automatically, using the builtins directive, or manually, using the url directive.
I enabled it using the builtins directive and changed app.yaml accordingly:
builtins:
- remote_api: on
It's stated that this directive finds an "include.yaml" file for remote_api and maps the request handler to /_ah/remote_api, and that only administrators of the application can access this URL. But I never came across any include.yaml.
After that, I tried downloading data using the commands given there:
appcfg.py download_data --application=<app-id> --url=http://<appname>.appspot.com/[remote_api_path] --filename=<data-filename>
I'm getting an error saying "permission denied". I am also not able to use the create_bulkloader_config command; it fails with the same error. I'm confused. Thanks.
Are you using OpenID / federated login for your app? The remote API does not work with OpenID, but there is a workaround here:
http://blog.notdot.net/2010/06/Using-remote-api-with-OpenID-authentication
Replace
builtins:
- remote_api: on
With
handlers:
- url: /remote_api
script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
login: admin
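With that handler in place, the download command from the question would then point at the /remote_api path:
appcfg.py download_data --application=<app-id> --url=http://<appname>.appspot.com/remote_api --filename=<data-filename>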
You should run the command line as an admin user. The "permission denied" error you are getting refers to the appcfg script not being able to access a local file.