My App Engine app.yaml file looks somewhat like the one below:
service: servicename
runtime: php74

automatic_scaling:
  min_idle_instances: 2
  max_pending_latency: 1s

env_variables:
  CLOUD_SQL_CONNECTION_NAME: <MY-PROJECT>:<INSTANCE-REGION>:<MY-DATABASE>
  DB_USER: my-db-user
  DB_PASS: my-db-pass
  DB_NAME: my-db
---
Does automatic scaling cause higher costs? What is the cheapest configuration I can set? Auto scaling is not mandatory at the current stage of my application.
I think your cheapest configuration is just setting max_instances: 1 and commenting out the other options.
When you have traffic, the maximum number of instances that you will have will be 1. When there's no traffic, your instance goes down (effectively 0).
The downside of this approach (not keeping min_idle_instances as you currently do) is that the first new request to your site will take longer, because an instance has to be started first.
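For reference, a minimal sketch of that idea, based on the app.yaml from the question (the env_variables block stays as it is and is omitted here):

service: servicename
runtime: php74

automatic_scaling:
  max_instances: 1
  # min_idle_instances and max_pending_latency removed / commented out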
Related
I am in the process of migrating an existing Spring Boot app to GAE. The app is automatically scaled with a minimum of 0 instances. During my test runs I have noticed that initially the first instance is loaded and starts handling requests. As autoscaling kicks in, a second instance is launched. Once that second instance is up and running, all traffic is sent to it. Meanwhile, the first instance remains idle after having served the first handful of requests that prompted the autoscaling.
I'm fairly new to tuning, so I'm probably missing something.
app.yaml:
runtime: java17
instance_class: F2

automatic_scaling:
  target_cpu_utilization: 0.8
  min_instances: 0
  max_instances: 4
  min_pending_latency: 8s
  max_pending_latency: 10s
  max_concurrent_requests: 25

env_variables:
  SPRING_PROFILES_ACTIVE: "prod"
This is a screenshot of both instances after handling 100 requests in 5 minutes. The majority of the requests are being routed to the second instance that was autoscaled.
I have tried changing some of the autoscaling values. I expect the GAE load balancer to distribute the load among all available instances, but I don't see an even distribution of traffic. Why is that?
Some weeks ago my app on App Engine just started to increase the number of idle instances to an unreasonably high number, even when there is close to zero traffic. This of course impacts my bill, which is skyrocketing.
My app is a simple Node.js application serving a GraphQL API that connects to my Cloud SQL database.
Why are all these idle instances being started?
My app.yaml:
runtime: nodejs12
service: default

handlers:
- url: /.*
  script: auto
  secure: always
  redirect_http_response_code: 301

automatic_scaling:
  max_idle_instances: 1
Screenshot of monitoring:
This is very strange behavior; as per the documentation, it should only temporarily exceed max_idle_instances:
Note: When settling back to normal levels after a load spike, the number of idle instances can temporarily exceed your specified maximum. However, you will not be charged for more instances than the maximum number you've specified.
Some possible solutions:

1. Confirm in the console that the actual app.yaml configuration is the same as the one shown in the App Engine console.

2. Set min_idle_instances to 1 and max_idle_instances to 2 (temporarily) and redeploy the application (a sketch follows after this list). It could be that something is simply wrong on the scaling side, and redeploying the application could solve this.

3. Check your logging (filtered on App Engine) for any problems in shutting down the idle instances.

4. Tweak settings like max_pending_latency. I have seen people build applications that take 2-3 seconds to start up, while the default is 30ms before another instance is spun up. This post suggests setting the following, which you could try:

   instance_class: F1
   automatic_scaling:
     max_idle_instances: 1  # default value
     min_pending_latency: automatic  # default value
     max_pending_latency: 30ms

5. Switch to basic_scaling and let Google determine the best scaling algorithm (last resort option). That would look something like this:

   basic_scaling:
     max_instances: 5
     idle_timeout: 15m

The solution could of course also be a combination of 2 and 4.
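For suggestion 2, a minimal sketch of just the automatic_scaling block (the rest of the app.yaml is assumed to stay unchanged):

automatic_scaling:
  min_idle_instances: 1
  max_idle_instances: 2  # temporary, to redeploy with explicit idle bounds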
Update after 24 hours:
I followed @Nebulastic's suggestions 2 and 4, but it did not make any difference. So in frustration I disabled the entire Google App Engine application (App Engine > Settings > Disable application), left it off for 10 minutes and confirmed in the monitoring dashboard that everything was dead (sorry, users!).
After 10 minutes I enabled App Engine again and it booted only 1 instance. I've been monitoring it closely since and it seems (finally) to be good now. After the restart it also adheres to the "min" and "max" idle instances configuration - the suggestion from @Nebulastic. Thanks!
Screenshots:
Have you checked to make sure you don't have a bunch of old versions still running? https://console.cloud.google.com/appengine/versions
Check each service in the services dropdown.
I am noticing something a little odd with Google App Engine. If my app has not been used for a while and I open it, it takes some time to load; I also see in the GAE logs console that it is starting up a server during this time, so that accounts for the wait (why not always have an instance running?).
After I open and close the app a couple of times, I then notice in the versions tab of GAE that I have 7 running instances (all in the same version).
I'm a little confused about how GAE works. Does it scale your instances down to 0 when there are no requests for a while, and, on the flip side, does it spin up a new instance for every new client that connects?
My app.yaml looks like this:
runtime: nodejs10
env: standard
instance_class: F2

handlers:
- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
You need to fine-tune your App Engine scaling strategy. For example, check this app.yaml file:
runtime: nodejs10
env: standard
instance_class: F2

handlers:
- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto

automatic_scaling:
  min_instances: 1
  max_instances: 4
  min_idle_instances: 1
  max_concurrent_requests: 25
  target_throughput_utilization: 0.8

inbound_services:
- warmup
min_instances & min_idle_instances are set to 1 in order to always have at least one instance ready for incoming requests and avoid cold starts.
To avoid spinning up new instances too quickly, you can set max_concurrent_requests & target_throughput_utilization. In this example a new instance will not be spun up until an existing instance reaches 20 concurrent requests (25 × 0.8).
As mentioned in this document, it is necessary to create a warmup endpoint in your application and add inbound_services to your app.yaml file, for example:
app.get('/_ah/warmup', (req, res) => {
  // Handle your warmup logic here: initiate the DB connection, etc.
  res.send('');  // respond with 200 so the warmup request completes
});
Warmup calls have the benefit of preparing your instances before an incoming request arrives, reducing the latency of the first request.
As you did not specify any scaling setting in your app.yaml, App Engine is using automatic scaling.
That means that the application has a minimum of 0 instances, so when your app is not receiving any requests at all it will scale down to 0. With that option you save the cost of having an instance running all the time, but cold starts will happen. A cold start happens each time a request reaches your application but there is no instance ready to serve it, so a new one has to be created.
Regarding your application scaling up to 7 instances when the traffic load increases, it depends again on the workload it is receiving. You can control this behaviour as well by using the max_instances setting, although a low value could affect your application's performance if more instances are needed.
App Engine will spin up new instances when the threshold of target_cpu_utilization, target_throughput_utilization, max_concurrent_requests, max_pending_latency or min_pending_latency is reached. You can read about all of them here.
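As an illustrative sketch (the values are examples, not recommendations from the answer above), adding an automatic_scaling block like the following to the app.yaml would cap scale-out and set those thresholds explicitly:

automatic_scaling:
  min_instances: 0                 # scale to zero when idle; cold starts possible
  max_instances: 3                 # hard cap on how far App Engine can scale out
  target_cpu_utilization: 0.75     # scale out once average CPU utilization passes 75%
  target_throughput_utilization: 0.8
  max_concurrent_requests: 20      # per-instance concurrency considered for scaling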
The app runs on the App Engine standard environment with Python 3.7 and Cloud SQL (MySQL).
Checking the logs, some requests have very high latencies (more than 4 seconds), when around 800 ms is expected. All of these log entries are accompanied by this message:
"This request caused a new process to be started for your application,
and thus caused your application code to be loaded for the first time.
This request may thus take longer and use more CPU than a typical
request for your application."
I understand that "a new process" refers to the startup of a new instance (since I use automatic scaling). The strange thing, however, is that when comparing these logs with the instance startups, in some cases they match but in others they do not.
My question is, how can these latencies be reduced?
The app engine config is:
runtime: python37
env: standard
instance_class: F1

handlers:
- url: /static/(.*)
  static_files: static/\1
  require_matching_file: false
  upload: static/.*
- url: /.*
  script: auto
  secure: always
- url: .*
  script: auto

automatic_scaling:
  min_idle_instances: automatic
  max_idle_instances: automatic
  min_pending_latency: automatic
  max_pending_latency: automatic

network: {}
As you note, these slower requests happen whenever App Engine needs to start a new instance for your application, as the initial load is slow (these are called "loading requests").
However, App Engine does provide a way to use "warmup" requests -- basically, dummy requests to your application that start instances in advance of when they are actually needed. This can reduce, but not eliminate, the user-facing loading requests.
This can slightly increase your costs, but it should reduce the loading request latency as these dummy requests will be the ones that eat the cost of starting a new instance.
In the Python 3.7 runtime, you can add a "warmup" element to the inbound_services directive in app.yaml:
inbound_services:
- warmup
This will send a request to /_ah/warmup where, if you want, you can do any other initialization the instance needs (e.g. starting a DB connection pool).
There are more strategies that may help you decrease the latencies in your application.
You can modify your automatic_scaling options to values that suit your app better.
You can manage your bandwidth better by setting the appropriate Cache-Control header on your responses and setting reasonable expiration times for static files.
Using public Cache-Control headers in this way will allow proxy servers and your clients' browsers to cache responses for the designated period of time (a short app.yaml sketch follows below).
You can use a bigger instance class, like F2, to avoid horizontal scaling happening so often. As I understood from this issue, your latencies increase mostly when new instances are started.
You can also enable concurrent requests and write your code as asynchronously as possible.
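For the static-file caching suggestion above, one way to set expirations is directly in app.yaml (an illustrative sketch; the static/ directory and the values are assumptions, not taken from the question):

default_expiration: "4d"   # fallback cache lifetime for all static handlers

handlers:
- url: /static
  static_dir: static
  expiration: "1h"   # per-handler override for assets that change more often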
I have a pretty basic app.yaml file with the following:
runtime: nodejs
env: flex
service: front
And every time I deploy the application, the deployment takes a very long time in the step:
Updating service [front] (this may take several minutes)...
When I check in the console, I can see that it goes up from 1 instance to 2 even though I didn't specify anything about the number of instances. Why is Google doing this, and how can we set the starting number of instances without disabling the autoscaling feature? Thanks in advance!
On App Engine Flexible applications, the minimum number of instances given to your service defaults to 2 to reduce latency. This is documented here.
You can configure these settings differently by adding them in your app.yaml file like this:
runtime: nodejs
env: flex
service: front

automatic_scaling:
  min_num_instances: 1   # Default is 2. Must be 1 or greater.
  max_num_instances: 10  # Default is 20.