Why are App Engine task queues forced into a lower processing rate? - google-app-engine

While executing tasks I see the following in the admin console in the 'Task queues' section. This is a paid app and I am prepared to use more resources and pay for them. Any clue what might be causing it?
"To conserve system resources during peak usage, App Engine is enforcing a processing rate lower than the maximum rate for this queue"

Your app is not scaling properly.
Hover your mouse over the question mark at the top-right of the console screen. You will most likely see a tooltip message like:
"App Engine is enforcing a processing rate lower than the maximum rate for this queue either because your application is returning HTTP 503 codes or because currently there is no instance available to execute a request."
So, check your logs for 503s, and check your resource settings to make sure your app can properly handle the traffic.
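If you export or script against your request logs, isolating the 503s is straightforward. The sketch below assumes log entries arrive as dicts with a `status` field; the real shape depends on how you pull them (Logs Viewer export, Cloud Logging API, etc.), so treat the entry format as an illustration.

```python
# Hypothetical sketch: scanning exported request-log entries for HTTP 503s.
# The entry shape (dicts with "path" and "status" keys) is an assumption
# for illustration, not the exact format of any particular export.

def find_503s(entries):
    """Return the log entries whose response status was 503."""
    return [e for e in entries if e.get("status") == 503]

entries = [
    {"path": "/tasks/resize", "status": 200},
    {"path": "/tasks/resize", "status": 503},
    {"path": "/tasks/notify", "status": 503},
]

over_capacity = find_503s(entries)
print(len(over_capacity))  # 2
```

A cluster of 503s concentrated on one handler usually points at the handler that is too slow (or too resource-hungry) to let the scheduler keep up.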

Related

Google App Engine: debugging Dashboard > Traffic > Sent

I have a GAE app (runtime: php72, env: standard) which hangs intermittently (once or twice a day, for about 5 minutes).
When this occurs I see a large spike in GAE dashboard's Traffic Sent graph.
I've reviewed all uses of file_get_contents and curl_exec within the app's scripts, not including those in /vendor/, and don't believe these to be the cause.
Is there a simple way in which I can review more info on these outbound requests?
There is no way to get more details in that dashboard. You're going to need to check your logs at the corresponding times. Obscure things to check for:
Cron jobs coming in at the same times
Task Queues spinning up
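Cron and task-queue traffic is easy to separate from user traffic in the logs, because App Engine adds identifying headers to internally generated requests: cron requests carry `X-Appengine-Cron: true` and push-queue tasks carry `X-AppEngine-QueueName`. A small classifier sketch (the header names are real; the entry format is assumed):

```python
# Sketch: classifying request-log entries by the headers App Engine adds
# to internally generated traffic. Cron requests carry
# "X-Appengine-Cron: true"; push-queue tasks carry "X-AppEngine-QueueName".

def classify(headers):
    """Label a request as cron, task-queue, or user traffic."""
    lowered = {k.lower(): v for k, v in headers.items()}
    if lowered.get("x-appengine-cron") == "true":
        return "cron"
    if "x-appengine-queuename" in lowered:
        return "task-queue"
    return "user"

print(classify({"X-Appengine-Cron": "true"}))          # cron
print(classify({"X-AppEngine-QueueName": "default"}))  # task-queue
print(classify({"User-Agent": "Mozilla/5.0"}))         # user
```

If the traffic spikes line up with "cron" or "task-queue" entries at those times, you have found your outbound-traffic source.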

How to programmatically scale up app engine?

I have an application which uses App Engine auto scaling. It usually runs 0 instances, except when some authorised users use it.
This application needs to run automated voice calls as fast as possible to thousands of people, with keypad interactions (no, it's not spam, it's redcall!).
Programmatically speaking, we ask Twilio to initiate calls through its Voice API at 5 calls/sec, and it basically works through webhooks: at least 2, but most of the time 4, hits per call. So GAE needs to scale up very quickly, and some requests get lost (which is just a hang-up on the user side) at the beginning of the trigger, when only one instance is ready.
I would like to know if it is possible to programmatically scale up App Engine (through an API?) before running such triggers, in order to be ready when the storm blasts.
I think you may want to give warmup requests a try, as they load your app's code into a new instance before any live requests reach that instance, reducing response time when your GAE app has scaled down to zero.
The link I have shared with you includes the PHP 7 runtime, as I see you are familiar with it.
I would also agree with John Hanley: finding a sweet spot for how many idle instances you keep available would also help the performance of your app.
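To make the mechanism concrete: once warmup is enabled in app.yaml (`inbound_services: warmup`), App Engine sends `GET /_ah/warmup` to each new instance before routing live traffic to it. A minimal sketch of a handler, written here as a plain WSGI app for illustration (the question's app is PHP, so this is only the shape of the idea):

```python
# Minimal sketch of a warmup handler as a plain WSGI app. App Engine sends
# GET /_ah/warmup to a new instance before live traffic reaches it, once
# warmup is enabled in app.yaml via "inbound_services: warmup".

def app(environ, start_response):
    if environ.get("PATH_INFO") == "/_ah/warmup":
        # Do expensive one-time setup here: open DB connections,
        # prime caches, import heavy modules, etc.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"warmed up"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]
```

The payoff is that the expensive setup happens before the first real webhook hits the instance, instead of adding latency to it.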
Finally, the solution was to delegate sending the communication through Cloud Tasks:
https://cloud.google.com/tasks/docs/creating-appengine-tasks
https://github.com/redcall-io/app/blob/master/symfony/src/Communication/Processor/QueueProcessor.php
Tasks can retry hitting App Engine in case of errors, and make App Engine spin up new instances when the surge comes.
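The Cloud Tasks pattern linked above boils down to enqueuing an "App Engine HTTP request" task per call. The sketch below only assembles the request body as a plain dict so it can be inspected offline; actually submitting it requires the `google-cloud-tasks` client and real credentials, and the project, queue, and URI names are hypothetical.

```python
# Sketch of an App Engine HTTP task body for Cloud Tasks. The relative URI
# and payload below are hypothetical placeholders.
import json

def build_appengine_task(relative_uri, payload):
    """Assemble an App Engine task payload for Cloud Tasks."""
    return {
        "app_engine_http_request": {
            "http_method": "POST",
            "relative_uri": relative_uri,
            "body": json.dumps(payload).encode(),
        }
    }

task = build_appengine_task("/twilio/call", {"phone": "+10000000000"})
# Submitting it would look roughly like (requires google-cloud-tasks
# and credentials, so it is only sketched here):
#   from google.cloud import tasks_v2
#   client = tasks_v2.CloudTasksClient()
#   parent = client.queue_path("my-project", "europe-west1", "calls")
#   client.create_task(parent=parent, task=task)
```

Because Cloud Tasks retries failed deliveries with backoff, the requests that would have been lost during scale-up are simply redelivered once instances exist.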

In App Engine Flexible can the nginx.health_check logging be disabled?

App Engine Flexible creates an nginx.health_check log. It logs all health check requests, not just failed health checks. If your health check interval is under 10 seconds the log can grow to multiple gigs in just a few days. Is there any way to configure it to only record failed checks, or disable the log altogether?
The only way to disable this right now is to completely disable health checking (which I wouldn't recommend). We're looking at ways to fix this - apologies!
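For completeness, disabling the legacy health-check mechanism in App Engine Flexible is an app.yaml setting; a minimal sketch (again, not generally recommended, since you lose unhealthy-instance detection):

```yaml
# app.yaml (App Engine Flexible): disables legacy health checking entirely,
# which also stops the nginx.health_check log from filling up.
health_check:
  enable_health_check: false
```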

The API call datastore_v3.Put() required more quota than is available

How to reset the quota if the Datastore Write Operations limit is reached?
Any operation (both from admin console and from my code) on datastore reports the following error:
The API call datastore_v3.Put() required more quota than is available.
I have tried to disable application and wait for quota reset, but it did not work.
When the app is enabled, it produces a lot of tasks that in turn try to operate on the Datastore, which obviously consumes the quota.
Now, I have paused the task queues and will give another try waiting 24 hours.
Is this the right solution?
The quota is reset every 24h, so wait that time or enable billing. The quota won't reset by disabling and reenabling the application.
You should assign a daily budget to your app even with billing enabled. Maybe you forgot to do this: go to the Cloud Console, select your project, go to Compute > App Engine > Settings in the left-side nav bar, and set a daily budget.

Google App Engine RemoteApiServlet/remote_api handler errors

Recently, I have come across an error (quite frequently) with the RemoteApiServlet, as well as the remote_api handler.
While bulk-loading large amounts of data using the Bulk Loader, I start seeing random HTTP 500 errors with the following details (in the log file):
Request was aborted after waiting too long to attempt to service your request.
This may happen sporadically when the App Engine serving cluster is under
unexpectedly high or uneven load. If you see this message frequently, please
contact the App Engine team.
Can someone explain what I might be doing wrong? This error prevents the Bulk Loader from uploading any further data, and I have to start all over again.
Related thread in Google App Engine forums is at http://groups.google.com/group/google-appengine-python/browse_thread/thread/bee08a70d9fd89cd
This isn't specific to remote_api. What's happening is that your app is getting a lot of requests that take a long time to execute, and App Engine will not scale up the number of instances your app runs on if the request latency is too high. As a result, requests are being queued until a handler is available to serve them; if none become available, a 500 is returned and this message is logged.
Simply reduce the rate at which you're bulk-loading data, or decrease the batch size so that remote_api requests execute faster.
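The mitigation above (smaller batches, and backing off when the cluster rejects a request instead of restarting the whole upload) can be sketched generically. Here `put_batch` is a hypothetical stand-in for whatever actually writes the entities (e.g. a remote_api call), and the error type and delays are illustrative.

```python
# Sketch: upload entities in small batches, retrying a failed batch with
# exponential backoff instead of aborting the whole upload. `put_batch`
# is a hypothetical callable standing in for the real write operation.
import time

def upload(entities, put_batch, batch_size=50, max_retries=5):
    """Write `entities` in batches of `batch_size`, backing off on failure."""
    for start in range(0, len(entities), batch_size):
        batch = entities[start:start + batch_size]
        delay = 1.0
        for attempt in range(max_retries):
            try:
                put_batch(batch)
                break
            except IOError:  # e.g. an HTTP 500 from the serving cluster
                if attempt == max_retries - 1:
                    raise
                time.sleep(delay)
                delay *= 2
```

Smaller batches mean each remote_api request finishes sooner, which keeps request latency low enough that App Engine is willing to scale your instances up rather than queue (and eventually reject) the requests.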
