I am having problems when running cron jobs on App Engine.
I have an App Engine Flex custom application running (using php:7.0-apache).
I also have a URL that I can call to run my job, let's say myapp.com/cacheupdate.php. When I point to that URL, everything works fine and the cache is updated correctly.
So I added a cron job:
cron:
- description: "Update Cache"
  url: /cacheupdate.php
  schedule: every 30 minutes
The cron job shows up in the console but always gives an error. So I added a handler for it in my app.yaml file:
handlers:
- url: /updatecache.php
  script: /cacheupdate.php
I have tried a few different ways to specify the source URL, but the problem persists.
I'm assuming the issue here is that I'm using a custom Docker image to build the instances. Is there a better way to run cron jobs, or have I missed something?
I'm running a web service on Google App Engine. It's a simple web server with a few routes, none of which is /nginx_metrics. This is my app.yaml file:
runtime: custom
env: flex
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
  cool_down_period_sec: 180
  cpu_utilization:
    target_utilization: 0.6
resources:
  cpu: 4
  memory_gb: 4
  disk_size_gb: 10
When I run this, I see a constant stream of requests hitting /nginx_metrics, and the logs say my app responded with a 200 status. I'm not sure where these requests are coming from, since I haven't given my application any sort of nginx instance. It doesn't really bother me, but I'd like to be able to read my logs without them, and right now I can't.
I get a stream of this:
2022-03-28 04:00:37 default[20220328t092456] "GET /nginx_metrics" 200
2022-03-28 04:00:52 default[20220324t171711] "GET /nginx_metrics" 200
And even my app logs seem to be prefixed with default. How do I fix this?
The /nginx_metrics endpoint is called by GAE to retrieve metrics from customers' Flex VMs. That endpoint is not exposed publicly; it is only exposed on the Docker bridge network (172.17.0.1), so you can't send requests to /nginx_metrics via the appspot URL (you may want to check this).
That path targets one of the sidecar containers that are deployed along with your app (each runs in its own container on every instance). That container is the opentelemetry-collector one; you can check this by SSHing into a Flex instance. If you want to inspect the source of that container, it should be running something similar to: https://github.com/GoogleCloudPlatform/appengine-sidecars-docker/tree/main/opentelemetry_collector.
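If you want to poke at that yourself, a rough sketch of the commands involved (the service and version below are taken from the logs in the question; the instance ID is a placeholder you would look up) might be:
gcloud app instances list --service=default --version=20220328t092456
gcloud app instances ssh INSTANCE_ID --service=default --version=20220328t092456
# once on the VM, the sidecars show up next to your app container
sudo docker ps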
Google is aware of this issue, i.e. that /nginx_metrics is logged in the nginx request logs (which is not intended behavior), and we are working on it. You can expect it to be resolved in the next Flex runtime update.
Coming to your second question, about the default service being prefixed to every log entry:
If you do not specify a service name when deploying your app, for example by running gcloud app deploy instead of gcloud app deploy service-name-app.yaml, your app gets deployed as another version of the default service. That is why you see default[some-numbers] prefixed to each of your successful logs, where default is the service and [20220328t092456] is the version name, which tells you that this version was deployed on 28 March 2022.
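As an illustration, a minimal app.yaml sketch with an explicit service name (my-frontend is an assumed name; note that every project still needs a default service deployed at least once):
# my-frontend-app.yaml (hypothetical file name)
service: my-frontend
runtime: custom
env: flex
Deploying it with gcloud app deploy my-frontend-app.yaml then makes your logs show my-frontend[VERSION] instead of default[VERSION].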
Initially, I deployed my React app (created with create-react-app) to GitHub Pages and it worked fine. However, after a few changes to the src files, I wanted to update the website, so I decided to re-deploy the app using npm run deploy; it finishes with Published printed at the end of the command. On GitHub, the Actions tab shows that the build is successful, but it is not able to deploy, giving me an error code of 400.
The complete error log from GitHub is as follows:
Actor: github-pages[bot]
Action ID: 1996792679
Artifact URL: https://pipelines.actions.githubusercontent.com/P4vIRQYdOzrk38NoGmJNrF5GvwW7S92VAUyJNinMXyLtZzPbIB/_apis/pipelines/workflows/1996792679/artifacts?api-version=6.0-preview
{"count":1,"value":[{"containerId":354579,"size":3092480,"signedContent":null,"fileContainerResourceUrl":"https://pipelines.actions.githubusercontent.com/P4vIRQYdOzrk38NoGmJNrF5GvwW7S92VAUyJNinMXyLtZzPbIB/_apis/resources/Containers/354579","type":"actions_storage","name":"github-pages","url":"https://pipelines.actions.githubusercontent.com/P4vIRQYdOzrk38NoGmJNrF5GvwW7S92VAUyJNinMXyLtZzPbIB/_apis/pipelines/1/runs/2/artifacts?artifactName=github-pages","expiresOn":"2022-06-15T05:21:40.7473658Z","items":null}]}
Creating deployment with payload:
{
"artifact_url": "https://pipelines.actions.githubusercontent.com/P4vIRQYdOzrk38NoGmJNrF5GvwW7S92VAUyJNinMXyLtZzPbIB/_apis/pipelines/1/runs/2/artifacts?artifactName=github-pages&%24expand=SignedContent",
"pages_build_version": "8e6a4594c3e946a3f32ab67af68f527ec66ffc90",
"oidc_token": "***"
}
Failed to create deployment for 8e6a4594c3e946a3f32ab67af68f527ec66ffc90.
{"message":"Deployment request failed for 8e6a4594c3e946a3f32ab67af68f527ec66ffc90 due to in progress deployment. Please cancel c1852e5059b99567d48405d0610990fdc25f0946 first or wait for it to complete.","documentation_url":"https://docs.github.com/rest/reference/repos#create-a-github-pages-deployment"}
Error: Error: Request failed with status code 400
Error: Error: Request failed with status code 400
Sending telemetry for run id 1996792679
I'm very confused about what the error is. Has anyone encountered a 400 error code before while deploying to GitHub Pages?
Additional note: if you need any additional information, please do comment below. This is my first time deploying a React app to GitHub Pages, so I would love to help you in helping me.
UPDATE
You can visit the GitHub repo here: cynclar.github.io
I haven't found a solution, but I have a workaround. If you go to the last working workflow run in the Actions tab (look for a green checkmark), you can click Re-run all jobs, which should deploy your webpage for you, including the latest changes.
Hope this works for the time being until there is a better solution!
GitHub has fixed the issue:
If you navigate to the Actions tab, find the last deployment, and Re-run all jobs, it should work. I have tried this myself and it has been successful!
I'll leave the previous answer up in case this doesn't work for anyone.
I had the same issue and this didn't work; re-running the old deploy used the same code and didn't include my changes. I deleted all the workflow runs from the GitHub Actions tab, and then pushing again triggered a new deployment that worked.
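If you prefer the command line over the web UI, a rough equivalent with the GitHub CLI could look like this (the run IDs are placeholders you would read off the list output):
gh run list --limit 10    # find the stuck in-progress run and the last good one
gh run cancel <run-id>    # cancel the deployment that is blocking new ones
gh run rerun <run-id>     # or re-run the last successful run, as suggested above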
We have a web application (frontend) using React, created with Create React App. The app is running on Google Cloud Platform App Engine Standard. Our web application is code-split, so each page is loaded on user navigation.
It's working really well. The issue we have: for example, user A is on the app's home page. We deploy a fix that changes the chunk file names. User A then tries to access another page and gets the error Loading chunk * failed. The URL for the chunk now returns a 404 because the file has been replaced by new chunk files.
It's a frequent problem, as far as I can see from my research, but I didn't find a solution that applies to Google App Engine.
Here's an article that explains the problem and possible solutions: https://mitchgavan.com/code-splitting-react-safely/
I would like to use "Solution 1: Keep old files on your server after a deployment", but I can't see how to do this using GCP.
Here's the app.yaml file
service: frontend
runtime: nodejs14
env: standard
instance_class: F1
handlers:
- url: /(.*\..+)$
  static_files: build/\1
  upload: build/(.*\..+)$
- url: /.*
  static_files: build/index.html
  upload: build/index.html
We have the following dispatch file (* stands for the masked URL):
dispatch:
- url: "*"
  service: frontend
- url: "www.*"
  service: frontend
I haven't tried this before, but see if it makes sense and works.
1. We have a blog article about downloading your source code from GAE. It contains an explanation of where your source is stored when you deploy (a staging bucket), how long it stays there, and how you can modify how long it is kept before Google automatically deletes it.
2. When you deploy to GAE, gcloud only deploys files that have changed (it doesn't re-deploy those that haven't). Since you now have 'new' files because new hashes were generated, the older files no longer exist on your local machine. I do not know whether Google will automatically delete those files from the staging location in point 1 above.
My proposal is that you follow the steps in the blog article (from point 1) and alter how long the files are retained in your staging bucket. Another option is to check the retention policy tab and see if you can change the rule so the files don't get deleted. If you're able to alter how long the files remain, or the retention policy, it just might solve your problem. Let me know if this works.
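To sketch that idea concretely (the 90-day age is an arbitrary assumption, and the bucket name follows the usual staging.PROJECT_ID.appspot.com pattern; adjust both to your project), you could set a longer lifecycle rule on the staging bucket with gsutil:
# lifecycle.json - keep staged objects for 90 days before deletion
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 90}}]}
gsutil lifecycle get gs://staging.PROJECT_ID.appspot.com   # inspect the current rule
gsutil lifecycle set lifecycle.json gs://staging.PROJECT_ID.appspot.com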
Our website is hosted on Google App Engine: https://www.boutir.com/
Without any code/DNS/config changes today, static files like .js/.css suddenly fail to load. The network inspector shows the files pending forever, though occasionally they do load successfully. How do we solve this issue?
It is interesting to note that if we use the PROJECT_ID.appspot.com domain, there are no such issues.
Our app.yaml looks something like this:
runtime: python27
handlers:
- url: /js
  static_dir: pages/js
  secure: always
A similar issue reported in the past was observed to be related to the DNS configuration of the custom domain, particularly since you confirm that the static content is served as expected from the appspot domain.
That being said, there was an internal GCP issue reported earlier in which requests to App Engine endpoints serving static content could fail, with customers seeing a "204 No Response" with 0 bytes of data or requests hanging indefinitely. I have information that this issue has since been resolved, however.
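One way to rule the DNS side in or out (assuming the www host is mapped through App Engine's custom domain flow, which normally uses a CNAME to ghs.googlehosted.com) is a quick lookup:
dig +short www.boutir.com CNAME
# anything other than the record the custom-domain instructions gave you
# means the domain is resolving somewhere else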
I have a functional Go app which I've been running locally for months. I got set up with Google Cloud, did a test run on a live domain, and everything works.
Back on my local machine, I want to run a local Google App Engine server (instead of running my Go app directly). It runs; however, I'm trying to use the "login: required" parameter in app.yaml, and although I see the login form at localhost:8080, no matter what email I input it keeps timing out with 503 errors.
My app.yaml:
application: myapp-dev
env: flex
runtime: go
api_version: go1
handlers:
- url: /
  script: _go_app
  login: required
Command I use to run the local app:
dev_appserver.py app.yaml
The flexible environment doesn't support 'login' features via app.yaml (that is, separate from whatever regular login you'd do in your app).
Standard environment app.yaml doc DOES list 'login' features: https://cloud.google.com/appengine/docs/standard/go/config/appref
Flexible environment app.yaml doc DOES NOT list 'login' features: https://cloud.google.com/appengine/docs/flexible/go/configuring-your-app-with-app-yaml
But more specifically, a page about upgrading from standard to flex mentions that the login handlers for flex have been deprecated:
https://cloud.google.com/appengine/docs/flexible/go/upgrading
The login setting under handlers is now deprecated for the App Engine flexible environment. You should follow the guidance for User service migration.
So basically, with the flex environment, there are no project-wide login controls possible outside of your app. You have to let the app initialize and then do normal authentication/authorization.
For my own project, I wanted a quick app-wide level of security so I could provide guest accounts and let them see what a public, not-logged-in view of my app would look like. Yes, I can do the same within my app; I just wanted to save some work.
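For what it's worth, here is a minimal sketch of that kind of in-app, app-wide gate in Go; the guest credentials and the single route are placeholders, not anything App Engine provides, and a real app would use a proper identity provider.
package main

import (
	"crypto/subtle"
	"log"
	"net/http"
	"os"
)

// requireLogin stands in for the app.yaml "login: required" setting that the
// flexible environment no longer honors: every request must present the guest
// credentials before reaching the app's handlers.
func requireLogin(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		if !ok ||
			subtle.ConstantTimeCompare([]byte(user), []byte("guest")) != 1 ||
			subtle.ConstantTimeCompare([]byte(pass), []byte("guest-password")) != 1 {
			w.Header().Set("WWW-Authenticate", `Basic realm="restricted"`)
			http.Error(w, "login required", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello, logged-in guest"))
	})

	// App Engine flex injects PORT; fall back to 8080 for local runs.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, requireLogin(mux)))
}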