Rolling restarts are causing our app engine app to go offline. Is there a way to change the config to prevent that from happening? - google-app-engine

About once a week our flexible app engine node app goes offline and the following line appears in the logs: Restarting batch of VMs for version 20181008t134234 as part of rolling restart. We have our app set to automatic scaling with the following settings:
runtime: nodejs
env: flex

beta_settings:
  cloud_sql_instances: tuzag-v2:us-east4:tuzag-db

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 3

liveness_check:
  path: "/"
  check_interval_sec: 30
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2

readiness_check:
  path: "/"
  check_interval_sec: 15
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300

resources:
  cpu: 1
  memory_gb: 1
  disk_size_gb: 10
I understand the rolling restarts of GCP/GAE, but am confused as to why Google isn't spinning up another VM before taking our primary one offline. Do we have to run with a minimum of 2 instances to prevent this? Is there a way I can configure my app.yaml to make sure another instance is spun up before it reboots the only running instance? After the reboot finishes, everything comes back online fine, but there's still 10 minutes of downtime, which isn't acceptable, especially considering we can't control when the restart happens.

We know that it is expected behaviour that Flexible instances are restarted on a weekly basis. Provided that health checks are properly configured and are not the issue, the recommendation is, indeed, to set a minimum of two instances.
There is no alternative functionality in App Engine flex that I am aware of that spins up a new instance to avoid the downtime of a weekly restart. You could run directly on Google Compute Engine instead of App Engine and manage updates and maintenance yourself; perhaps that would suit your purpose better.
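For illustration, a minimal sketch of the relevant app.yaml change, keeping the rest of the configuration from the question unchanged:
automatic_scaling:
  min_num_instances: 2 # a second VM keeps serving while the other one is restarted
  max_num_instances: 3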

Are you just guessing this based on the num instances graph in the App Engine dashboard, or is your App Engine project actually unresponsive during that time?
You could use cron to hit it every 5 minutes to see if it's responsive.
Does the issue persist if you change cool_down_period_sec and target_utilization back to their defaults?
If your service is truly down during that time, maybe you should implement a request handler for liveness checks:
https://cloud.google.com/appengine/docs/flexible/python/reference/app-yaml#updated_health_checks
Their default polling config should tell GAE to launch a replacement within a couple of minutes.
Another thing worth double-checking is how long your instance takes to start up.
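If you do add such handlers, a minimal Express sketch might look like this (the /liveness_check and /readiness_check paths are the defaults from the app.yaml reference linked above; swap in your own if you configured different paths):
const express = require('express');
const app = express();

// Answer health checks quickly so the checker never times out.
app.get('/liveness_check', (req, res) => res.sendStatus(200));
app.get('/readiness_check', (req, res) => res.sendStatus(200));

app.listen(process.env.PORT || 8080);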

Related

Why are idle instances not being shut down when there is no traffic?

Some weeks ago my app on App Engine just started increasing the number of idle instances to an unreasonably high amount, even when there is close to zero traffic. This of course impacts my bill, which is skyrocketing.
My app is a simple Node.js application serving a GraphQL API that connects to my Cloud SQL database.
Why are all these idle instances being started?
My app.yaml:
runtime: nodejs12
service: default

handlers:
- url: /.*
  script: auto
  secure: always
  redirect_http_response_code: 301

automatic_scaling:
  max_idle_instances: 1
Screenshot of monitoring:
This is very strange behavior, as per the documentation the count should only temporarily exceed max_idle_instances:
Note: When settling back to normal levels after a load spike, the
number of idle instances can temporarily exceed your specified
maximum. However, you will not be charged for more instances than the
maximum number you've specified.
Some possible solutions:
1. Confirm in the console that the app.yaml configuration that is actually deployed matches the one you have locally.
2. Set min_idle_instances to 1 and max_idle_instances to 2 (temporarily) and redeploy the application; see the sketch after this list. It could be that something is simply stuck on the scaling side, and redeploying could resolve it.
3. Check your logging (filtered to App Engine) for any problems shutting down the idle instances.
4. Tweak settings like max_pending_latency. I have seen people build applications that take 2-3 seconds to start up, while the default waits only 30ms of pending latency before another instance is spun up. This post suggests setting the following, which you could try:
instance_class: F1
automatic_scaling:
  max_idle_instances: 1 # default value
  min_pending_latency: automatic # default value
  max_pending_latency: 30ms
5. Switch to basic_scaling and let Google determine the best scaling algorithm (last-resort option). That would look something like this:
basic_scaling:
  max_instances: 5
  idle_timeout: 15m
The solution could of course also be a combination of 2 and 4.
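For reference, suggestion 2 would look roughly like this in app.yaml (the values are the temporary ones suggested above):
automatic_scaling:
  min_idle_instances: 1
  max_idle_instances: 2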
Update after 24 hours:
I followed @Nebulastic's suggestions, numbers 2 and 4, but it did not make any difference. So in frustration I disabled the entire Google App Engine application (App Engine > Settings > Disable application), left it off for 10 minutes, and confirmed in the monitoring dashboard that everything was dead (sorry, users!).
After 10 minutes I enabled App Engine again and it booted only 1 instance. I've been monitoring it closely since, and it (finally) seems to be good now. After the restart it also adheres to the min and max idle instances configuration from @Nebulastic's suggestion. Thanks!
Screenshots:
Have you checked to make sure you don't have a bunch of old versions still running? https://console.cloud.google.com/appengine/versions
Check each service in the services dropdown.
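If you would rather audit this from the CLI (assuming the gcloud SDK is configured for the project; the version ID below is just a placeholder):
gcloud app versions list
# stop a stray version that is still serving:
gcloud app versions stop 20200101t000000 --service=default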

App running in Google App Engine fails, tries ah_start for minutes, then restarts

I have a message processor task that runs in App Engine. Many times it appears to die, goes into a long (several minutes) loop trying to do _ah/start, then finally restarts.
This task responds to messages from the message queue, then writes data from those messages to a MySQL database.
Looking at the log histogram, it appears that this task is on a 15-minute cycle: it works for a bit, does this _ah/start loop for a bit, then goes back to working.
When I start sending a heavy load of messages to process, it loses messages, which is not acceptable for a production environment.
I really don't know where to even start checking to find out what is going on.
I am sorry, but search as I might, I really cannot find good information on how to use the _ah/start process. A good link to an explanation and example would be worth a lot.
My process is very simple:
- start up
- wait for a message
- store the data in the database
- ack the message
- go back to waiting for the next message
Here is a copy of my app.yaml file:
manual_scaling:
  instances: 1

resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

service: message-processor
runtime: nodejs10

env_variables:
  BUCKET_NAME: "stans_temp"

handlers:
- url: /stylesheets
  static_dir: stylesheets
- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
Thanks for any help.
I would start by correcting the syntax errors in app.yaml.
Since I see runtime: nodejs10 and there is no env: flex setting, this appears to be the App Engine standard environment. (app.yaml for standard reference)
However, you have a resources setting, which is only valid for App Engine flexible. (app.yaml for flexible reference)
App Engine flexible and App Engine standard are practically two different products, so you need to decide which one you want to use. You can find an article about it here. This mismatch might be the reason; I am even surprised that this deployed successfully.
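For what it's worth, if the standard environment is what's intended, a sketch of the corrected file might look like this (the flex-only resources block dropped, everything else kept from the question):
service: message-processor
runtime: nodejs10

manual_scaling:
  instances: 1

env_variables:
  BUCKET_NAME: "stans_temp"

handlers:
- url: /stylesheets
  static_dir: stylesheets
- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto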

Unable to export to Monitering service because: GaxError RPC failed, caused by 3

I have a Java application in App Engine, and recently I started getting the following error:
Unable to export to Monitering service because: GaxError RPC failed, caused by 3:One or more TimeSeries could not be written: Metrics cannot be written to gae_app. See https://cloud.google.com/monitoring/custom-metrics/creating-metrics#which-resource for a list of writable resource types.: timeSeries[0]
This happens every time right after a health check log entry:
Health checks: instance=instanceName start=2020-01-14T14:28:07+00:00 end=2020-01-14T14:28:53+00:00 total=18 unhealthy=0 healthy=18
After some time my instances are restarted, and the same thing starts happening again.
app.yaml:
#https://cloud.google.com/appengine/docs/flexible/java/reference/app-yaml
#General settings
runtime: java
api_version: '1.0'
env: flex
runtime_config:
  jdk: openjdk8
#service: service_name #Required if creating a service. Optional for the default service.

#https://cloud.google.com/compute/docs/machine-types
#Resource settings
resources:
  cpu: 2
  memory_gb: 6 #memory_gb = cpu * [0.9 - 6.5] - 0.4
#  disk_size_gb: 10 #default

##Liveness checks - Liveness checks confirm that the VM and the Docker container are running. Instances that are deemed unhealthy are restarted.
liveness_check:
  path: "/liveness_check"
  timeout_sec: 20 #1-300 Timeout interval for each request, in seconds.
  check_interval_sec: 30 #1-300 Time interval between checks, in seconds.
  failure_threshold: 6 #1-10 An instance is unhealthy after failing this number of consecutive checks.
  success_threshold: 2 #1-10 An unhealthy instance becomes healthy again after successfully responding to this number of consecutive checks.
  initial_delay_sec: 300 #0-3600 The delay, in seconds, after the instance starts during which health check responses are ignored. This setting can allow an instance more time at deployment to get up and running.

##Readiness checks - Readiness checks confirm that an instance can accept incoming requests. Instances that don't pass the readiness check are not added to the pool of available instances.
readiness_check:
  path: "/readiness_check"
  timeout_sec: 10 #1-300 Timeout interval for each request, in seconds.
  check_interval_sec: 15 #1-300 Time interval between checks, in seconds.
  failure_threshold: 4 #1-10 An instance is unhealthy after failing this number of consecutive checks.
  success_threshold: 2 #1-10 An unhealthy instance becomes healthy after successfully responding to this number of consecutive checks.
  app_start_timeout_sec: 300 #1-3600 The maximum time, in seconds, an instance has to become ready after the VM and other infrastructure are provisioned. After this period, the deployment fails and is rolled back. You might want to increase this setting if your application requires significant initialization tasks, such as downloading a large file, before it is ready to serve.

#Service scaling settings
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 3
  cpu_utilization:
    target_utilization: 0.7
The error is caused by an upgrade of the Stackdriver logging sidecar to version 1.6.25, which started pushing FluentD metrics to Stackdriver Monitoring via OpenCensus. However, the integration with App Engine flex doesn't work yet.
These errors are logs only. They are not related to the health check logs and should not cause VM restarts. If your VM instances are being restarted frequently, there may be some other reason. In the Stackdriver Logging UI, you can search for Free disk space under the vm.syslog stream and for unhealthy sidecars under the vm.events stream. If such logs show up, your instance restarts may be caused by low free disk space or by unhealthy sidecar containers.
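For example, an advanced filter along these lines in the Logs Viewer could surface the disk-space messages (the exact logName is an assumption on my part; substitute your own project ID):
resource.type="gae_app"
logName="projects/YOUR_PROJECT/logs/appengine.googleapis.com%2Fvm.syslog"
"Free disk space"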

Running min 1 instance of Google-App-Engine in standard environment

Looking at Google App Engine's two environments, standard and flex, most of the features offered by standard seem more appropriate for my use case.
According to https://cloud.google.com/appengine/docs/the-appengine-environments, both the standard and flex environments support automatic scaling, while standard can scale to 0 instances and flex only down to 1 instance.
According to https://cloud.google.com/appengine/docs/standard/nodejs/config/appref, an option for automatic scaling is specifying the min/max number of instances running at any given moment. I would have thought that this would override the standard environment's ability to scale to zero, but after my service had seen no traffic for 15 hours, it still shut down the last remaining instance.
I have the following config-settings in my app.yaml file.
runtime: nodejs10

automatic_scaling:
  min_instances: 1
  max_instances: 1 # Increase in production
  target_cpu_utilization: 0.95
I was trying to force GAE to keep 1 instance running at all times while testing. I realize that a static number of instances is not the point of automatic scaling, but I plan to increase the maximum number of instances when moving to production. I have also tried adding min_idle_instances: 1 to the settings, without any difference.
Can standard environment be forced to have a minimum of 1 running instance at any time?
A way to ensure that your instance is ready to serve is to configure warmup requests.
Bear in mind that even with warmup requests you might encounter loading requests: if your app has no traffic, the first request will always be a loading request, not a warmup. Thus, in my opinion, the best way to approach a situation like this is to set min_instances: 2.
Example of an express.js handler:
const express = require('express');
const app = express();

app.get('/_ah/warmup', (req, res) => {
  // Handle your warmup logic: initiate the db connection, etc.
  res.sendStatus(200); // respond so the warmup request completes
});

// Rest of your application handlers.
app.get('/', handler);

app.listen(8080);
Example of app.yaml addition:
inbound_services:
- warmup
A workaround could be a cron job that triggers every minute, so that your instance stays available to serve; a sketch follows below. However, even with this approach, min_instances: 2 is the better solution.
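A minimal cron.yaml sketch for that workaround (the path and interval are illustrative):
cron:
- description: keep the instance warm
  url: /
  schedule: every 1 minutes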

AppEngine NodeJS flexible spawns 2 instances after deployment

I have a pretty basic app.yaml file with the following:
runtime: nodejs
env: flex
service: front
And every time I deploy the application, the deployment takes a very long time at the step:
Updating service [front] (this may take several minutes)...
When I check in the console, I can see that it goes up from 1 instance to 2, even though I didn't specify anything about the number of instances. Why is Google doing this, and how can we set the starting number of instances without disabling the autoscaling feature? Thanks in advance!
On App Engine flexible, the minimum number of instances given to your service defaults to 2 to reduce latency. This is documented here.
You can configure these settings differently by adding them to your app.yaml file like this:
runtime: nodejs
env: flex
service: front

automatic_scaling:
  min_num_instances: 1 # Default is 2. Must be 1 or greater.
  max_num_instances: 10 # Default is 20.
