We are deploying an ASP.NET Core application on App Engine Flex, and in the Instances Summary on the Dashboard page a strange 1.9.54 App Engine release appears alongside the Flex release. What might that be?
Our app.yaml:
env: flex
runtime: aspnetcore
resources:
  cpu: 8
  memory_gb: 14.4
automatic_scaling:
  min_num_instances: 8
  max_num_instances: 20
  cool_down_period_sec: 180
  cpu_utilization:
    target_utilization: 0.5
Your app's dashboard summary indicates your app has both:
- standard env GAE instances (1.9.54 being the version of the GAE sandbox/infra running those instances), possibly from older service version(s) not yet deleted
- flexible env GAE instances
You can play with the Service and/or Version selection boxes above the summary to identify which service/version those instances correspond to.
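For example, the services and versions that are still deployed (and might still be holding standard env instances) can also be listed from the command line; a minimal sketch using standard gcloud commands, assuming the gcloud SDK is authenticated against the project:
gcloud app services list                      # all App Engine services in the project
gcloud app versions list                      # every deployed version, its traffic split and serving status
gcloud app versions list --service=default    # narrow down to one service (service name is a placeholder)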
I deployed a nodejs server to app engine flex.
I am using websockets and should expect around 10k concurrent connections at the moment.
At around 2000 websocket connections I get this error:
[alert] 33#33: 4096 worker_connections are not enough
There is no permanent way to edit the nginx configuration in a nodejs runtime.
Isn't 2k connections on one instance quite low?
My yaml config file:
runtime: nodejs
env: flex
service: web
network:
  session_affinity: true
resources:
  cpu: 1
  memory_gb: 3
  disk_size_gb: 10
automatic_scaling:
  min_num_instances: 1
  cpu_utilization:
    target_utilization: 0.6
As per this public issue 1 and public issue 2, increasing the number of instances might increase the available worker_connections. Also, reducing the memory of each instance lets the service scale up at a lower threshold, which may help keep the number of open connections per instance below 4096 (a limit that should be constant across any instance size). You can change the worker_connections value for one instance by SSHing into the VM.
Nginx config is located in /tmp/nginx/nginx.conf and you can manually change it as follows:
sudo su
vi /tmp/nginx/nginx.conf #Make your changes
docker exec nginx_proxy nginx -s reload
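For completeness, reaching the VM in the first place can also be done through gcloud; a rough sketch, where web is the service name from the question's config and the instance/version IDs are placeholders:
gcloud app instances list --service=web                                    # find the running flex instances
gcloud app instances ssh INSTANCE_ID --service=web --version=VERSION_ID    # open an SSH session on one of them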
Apart from the above workaround, there is another Public Issue Tracker entry requesting a feature that would allow users to change nginx.conf settings; feel free to post there if you have any other queries.
I have been exploring the App Engine settings for a small data science web application for 2 weeks. Since it is a personal project billed to my own wallet, I tried a few different parameters in app.yaml to reduce the "frontend instances" cost. Several changes in, I got an unexpected ~10x cost surge!!! It was painful!!! In order not to waste it, I decided to learn something here and understand the behaviour :)... Don't worry, I have temporarily shut down my app ;)
Version 1 app.yaml:
service: my-app
runtime: python37
instance_class: F4
env: standard
automatic_scaling:
  min_idle_instances: 1
  max_idle_instances: 1
default_expiration: "1m"
inbound_services:
- warmup
entrypoint: gunicorn -b 0.0.0.0:8080 main:server
Version 1, billing result (usage.amount_in_pricing_units exported from the billing account): ~100hr/day, the same as the Frontend Instance Hours shown in the App Engine billing status.
This is understandable, because I had an F4 instance constantly running idle, which translates into 24*4=96 frontend instance hours. Adding the instance usage from actual requests (from me only), ~100hr/day seems reasonable.
Version 2, where I intended to lower the instance class and the number of instances, lengthened the default_expiration hoping it would help the app start quicker, and changed some other things that I thought wouldn't matter much....
service: my-app
runtime: python37
instance_class: F2
env: standard
automatic_scaling:
  min_instances: 1
  max_instances: 1
  target_cpu_utilization: 0.85
  max_concurrent_requests: 80
  max_pending_latency: 6s
default_expiration: "3h"
inbound_services:
- warmup
entrypoint: gunicorn -b 0.0.0.0:8080 main:server
Version 2, billing result (usage.amount_in_pricing_units exported from the billing account): ~800hr+/day, ouch!!! In contrast, the Frontend Instance Hours from the App Engine dashboard billing status are less than 60hr/day, as expected. This is where I got lost:
1. Why is the usage from billing so much larger than what the App Engine Dashboard shows, and where does that usage come from?
2. Where can I find and track indicators of that unaccounted usage in the App Engine Dashboard, etc.?
2020-01-16 Solution for issue #1.
While I was waiting for Google Billing Support to come back to me, I found this:
Pricing of Google App Engine Flexible env, a $500 lesson
Namely, previously deployed versions of the app also eat frontend instance hours, which needed real-world confirmation. (To my surprise, this has nothing to do with the app.yaml file!!) So I deleted all the past versions of the app and let it run for two days, observing instance hours and billing records with the following app.yaml file.
service: my-app
runtime: python37
instance_class: F2
env: standard
automatic_scaling:
  min_instances: 1
  max_instances: 2
  max_idle_instances: 1
  target_cpu_utilization: 0.85
  max_concurrent_requests: 80
  max_pending_latency: 6s
default_expiration: "1m"
inbound_services:
- warmup
entrypoint: gunicorn -b 0.0.0.0:8080 main:server
This should keep one F2 instance always running and scale up to a maximum of 2 instances. This time both the App Engine dashboard and the exported billing usage agreed on ~50 frontend instance hours. Yes!!! The daily cost is cut down to 1/16.
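For anyone cleaning up the same way, the stale versions can be listed and deleted with gcloud; a sketch, where my-app is the service name from the yaml above and the version ID is a placeholder:
gcloud app versions list --service=my-app                    # shows each deployed version and its traffic split
gcloud app versions delete OLD_VERSION_ID --service=my-app   # removes a version that is no longer needed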
This solves cost question #1, but #2 remains to be answered. It is very problematic that the App Engine dashboard is not showing all the billed usage of frontend instances. Yesterday I heard from the Google Billing Support Team; the answer is not helpful (mainly talking about instance numbers in app.yaml, which doesn't help). They seem oblivious to this issue, so I will have to let them know.
2020-01-31 Followup on issue #2.
The Google Billing Support Team responded swiftly, acknowledged the discrepancy between the App Engine Dashboard and the Billing Export, and agreed to adjust the billing for me. Effectively, the bills for the spiky days were refunded as a result. Kudos to them!
I recently asked a question: Deploying to google app engine failed
I got my app deployed; however, we are incurring a large cost for this project (wiki.js using the third-party DB mLab).
I'm wondering if it has to do with the config I put in app.yaml, namely the memory expansion and resources.
This is what the Google support person said to add:
resources:
  cpu: 2
  memory_gb: 4.0
  disk_size_gb: 20
health_check:
  enable_health_check: False
My app.yaml (from google console) is:
runtime: nodejs
api_version: '1.0'
env: flexible
threadsafe: true
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 20
  cpu_utilization:
    target_utilization: 0.5
resources:
  cpu: 2
  memory_gb: 4
  disk_size_gb: 20
health_check:
  enable_health_check: false
Google Cloud Support person here again!
You can find a detailed description of costs by going to Google Cloud Platform Console home --> Billing --> View detailed charges, or by following this guide. Please update the post with the expensive fields (without sharing private data) so I can understand what is going on.
Find general information about quotas here.
P.S.: I found a couple of possible solutions and I suggest you TRY them, even though they weren't working for me.
I'm trying to create a new environment on Google Cloud Platform. It's not recognizing my instance class... and it gives the error: Instance class (n1-standard-1) is not recognized.
entrypoint: gunicorn -b :$PORT presto.wsgi
env: flex
runtime: python
instance_class: n1-standard-1
runtime_config:
  python_version: 3
n1-standard-1 is taken from Google's documentation, so I'm not sure what's wrong here. Does it go by a different name? Thanks!
For the Python flex environment, the resource config is defined in a different way; the instance_class setting does not apply here. To get the same machine as an n1-standard-1, you must add the following resource definition to your app.yaml:
resources:
  cpu: 1
  memory_gb: 3.75
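Put together with the rest of the question's file, the whole app.yaml would look roughly like this; a sketch, where the instance_class line is simply dropped because it only applies to the standard environment:
entrypoint: gunicorn -b :$PORT presto.wsgi
env: flex
runtime: python
runtime_config:
  python_version: 3
# roughly an n1-standard-1: 1 vCPU and 3.75 GB of memory
resources:
  cpu: 1
  memory_gb: 3.75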
When trying to deploy my App Engine app using the flexible environment, I am getting an error.
ERROR: (gcloud.preview.app.deploy) INVALID_ARGUMENT:
The beta setting machine_type cannot be set in an App Engine Flexible Environment deployment.
My app.yaml is given below
runtime: nodejs
#vm: true
env: flex
# [END runtime]
network:
  instance_tag: app-tag
  name: network-tag
instance_class: F1
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
  cool_down_period_sec: 60
beta_settings:
  machine_type: f1-micro
handlers:
- url: /.*
  script: IGNORED
  secure: always
# Temporary setting to keep gcloud from uploading node_modules
skip_files:
- ^node_modules$
Also, can anyone please tell me the difference between vm: true and env: flex, since both set the App Engine environment to flexible?
When changing from vm: true to env: flex you're actually switching to the latest infrastructure version; see Upgrading to the Latest App Engine Flexible Environment Beta Release.
The machine type is no longer configured that way. Instead you'd configure a custom instance shape via its resources:
Resource settings
These settings control the computing resources. App Engine assigns a machine type based on the amount of CPU and memory you've specified. The machine is guaranteed to have at least the level of resources you've specified; it might have more.
You can specify up to eight volumes of tmpfs in the resource settings. This enables workloads that require shared memory via tmpfs and can improve file system I/O.
For example:
resources:
  cpu: 2
  memory_gb: 1.3
  disk_size_gb: 10
  volumes:
  - name: ramdisk1
    volume_type: tmpfs
    size_gb: 0.5
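Applied to the question's file, the idea would be to drop the instance_class and beta_settings entries and describe the instance size through resources instead. Only the parts relevant to the error are shown below, and the cpu/memory values are an assumption for a small instance (flex cannot go as small as an f1-micro):
runtime: nodejs
env: flex
# instance_class and beta_settings.machine_type are not valid with env: flex;
# the instance size is expressed through resources instead
resources:
  cpu: 1
  memory_gb: 1
  disk_size_gb: 10
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
  cool_down_period_sec: 60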