I created a new Flexible Environment app and deployed it successfully with the latest gcloud version (currently 133), but I get a "503 Server Error" without any error logs.
Source code: https://github.com/AIMMOTH/scala-stack-angular/tree/503-error
App link: https://scala-stack-angular-us.appspot.com
Error page:
Error: Server Error
The service you requested is not available yet.
Please try again in 30 seconds.
Version info:
Version: 20161108t190158 | Status: Serving | Traffic Allocation: 100% | Instances: 2 | Runtime: custom | Environment: Flexible
In my case, I had a filter responding to /_ah/* (the path App Engine uses for its internal requests, including health checks), and that broke Google App Engine.
For me, it was because of wrong settings in app.yaml:
vm: true                  # the flexible environment
runtime: java             # Java 8 / Jetty 9.3 Runtime
service: default
threadsafe: true          # handle multiple requests simultaneously
resources:
  cpu: .5                 # number of cores
  memory_gb: 1.3
  disk_size_gb: 10        # minimum is 10GB and maximum is 10240GB
health_check:
  enable_health_check: true
  check_interval_sec: 5   # time interval between checks (in seconds)
  timeout_sec: 4          # health check timeout interval (in seconds)
  unhealthy_threshold: 2  # an instance is unhealthy after failing this number of consecutive checks
  healthy_threshold: 2    # an unhealthy instance becomes healthy again after successfully responding to this number of consecutive checks
  restart_threshold: 60   # the number of consecutive check failures that will trigger a VM restart
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 1
  cool_down_period_sec: 120  # time interval between auto scaling checks; must be greater than or equal to 60 seconds (default is 120 seconds)
  cpu_utilization:
    target_utilization: 0.5  # CPU use is averaged across all running instances and is used to decide when to reduce or increase the number of instances (default 0.5)
handlers:
- url: /.*                # regex
  script: ignored         # required, but ignored
  secure: always          # https
beta_settings:
  java_quickstart: true   # process Servlet 3.1 annotations
  use_endpoints_api_management: true  # enable Google Cloud Endpoints API management
I removed use_endpoints_api_management: true and everything worked fine.
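For reference, a minimal sketch of the corrected beta_settings block (the rest of the file stays as above; the explanatory comment is mine):
beta_settings:
  java_quickstart: true  # process Servlet 3.1 annotations
  # use_endpoints_api_management removed; with it enabled, this deployment returned 503s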
Related
I deployed a nodejs server to app engine flex.
I am using websockets and expect around 10k concurrent connections at the moment.
At around 2000 websockets connections I get this error:
[alert] 33#33: 4096 worker_connections are not enough
There is no permanent way to edit the nginx configuration in a nodejs runtime.
Isn't 2k connections on one instance quite low?
My yaml config file:
runtime: nodejs
env: flex
service: web
network:
  session_affinity: true
resources:
  cpu: 1
  memory_gb: 3
  disk_size_gb: 10
automatic_scaling:
  min_num_instances: 1
  cpu_utilization:
    target_utilization: 0.6
As per public issue 1 and public issue 2, increasing the number of instances might increase the available worker_connections. Also, reducing the memory of each instance allows the app to scale up at a lower threshold. This may help keep the number of open connections per instance below 4096, which should be constant across any size of instance. You can change the worker_connections value for one instance by SSHing into the VM.
The nginx config is located at /tmp/nginx/nginx.conf and you can manually change it as follows:
sudo su                                    # become root on the VM
vi /tmp/nginx/nginx.conf                   # make your changes, e.g. raise worker_connections
docker exec nginx_proxy nginx -s reload    # reload nginx inside the proxy container
Apart from the above workaround, there is another public issue tracker entry for implementing a feature that would let users change nginx.conf settings; feel free to post there if you have any other queries.
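As a rough sketch of the "more, smaller instances" idea from the linked issues (the resource and scaling values below are illustrative assumptions, not tested recommendations):
runtime: nodejs
env: flex
service: web
network:
  session_affinity: true     # keep each websocket client pinned to the same instance
resources:
  cpu: 1
  memory_gb: 1.5             # smaller instances, so the app scales out sooner
  disk_size_gb: 10
automatic_scaling:
  min_num_instances: 2       # spread connections across more instances
  max_num_instances: 10
  cpu_utilization:
    target_utilization: 0.5  # scale up earlier, helping stay under the 4096-connection limit per instance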
I deployed a simple nodejs server on Google app engine flex.
When it has 1 instance running, it is getting 3 times as many liveness and readiness checks as it should be receiving, considering the configuration in my app.yml file.
The documentation says:
If you examine the nginx.health_check logs for your application, you might see health check polling happening more frequently than you have configured, due to the redundant health checkers that are also following your settings. These redundant health checkers are created automatically and you cannot configure them.
Still, this looks like aggressive behaviour. Is this normal?
My app.yml config:
runtime: nodejs
env: flex
service: web
resources:
  cpu: 1
  memory_gb: 3
  disk_size_gb: 10
automatic_scaling:
  min_num_instances: 1
  cpu_utilization:
    target_utilization: 0.6
readiness_check:
  path: "/readiness_check"
  timeout_sec: 4
  check_interval_sec: 5
  failure_threshold: 2
  success_threshold: 1
  app_start_timeout_sec: 300
liveness_check:
  path: "/liveness_check"
  timeout_sec: 4
  check_interval_sec: 30
  failure_threshold: 2
  success_threshold: 1
Yes, this is normal. Three different locations are checking the health of your service, and you have configured the readiness check to run every five seconds, so you see roughly three polls per five-second window instead of one. If you want less health check traffic, change check_interval_sec: 5 to a larger number.
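For example, relaxing the readiness interval would look like this (the 30-second value is only an illustration; pick whatever frequency suits your app):
readiness_check:
  path: "/readiness_check"
  timeout_sec: 4
  check_interval_sec: 30   # fewer polls from each redundant checker
  failure_threshold: 2
  success_threshold: 1
  app_start_timeout_sec: 300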
I set a Cache-Control header of 1 year on my server.
How do I tell App Engine to "clear" the cache so that clients get the new version from the server?
The configuration is a Flex custom environment:
runtime: custom
env: flex
env_variables:
  writecontrolEnv: 'prod'
handlers:
- url: /.*
  script: this field is required, but ignored
service: gateway-prod
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10
skip_files:
- node_modules/
network:
  instance_tag: gateway
Assuming that your app is the one serving the static files, the cache parameters sent by the server are controlled by your application code, which means that once you deploy a new version with updated parameters the server will send the updated values.
But the problem is that caching is actually performed by the client (or some intermediate network device), so the end user will not reach the server until the (very long, in your case) cache expiration time passes, and won't see the update until then.
You can try clearing your browser cache, hoping that the browser was the one doing the caching.
To prevent such occurrences in the future, you may want to choose a shorter cache expiration time or use a cache-busting technique like this one.
I have a Flask app that deploys fine in the Google App Engine flexible environment, but some new updates have made it relatively resource intensive (I was receiving a [CRITICAL] Worker Timeout message). In attempting to fix this issue, I wanted to increase the number of CPUs for my app.
app.yaml:
env: flex
entrypoint: gunicorn -t 600 --timeout 600 -b :$PORT main:server
runtime: python
threadsafe: false
runtime_config:
  python_version: 2
automatic_scaling:
  min_num_instances: 3
  max_num_instances: 40
  cool_down_period_sec: 260
  cpu_utilization:
    target_utilization: .5
resources:
  cpu: 3
After some time I receive:
"Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [13] An internal error occurred during deployment."
Is there some sort of permission issue preventing me from increasing the CPUs? Or is my app.yaml invalid?
You cannot set the number of cores (cpu) to an odd number other than 1; it must be even.
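So cpu: 3 is rejected, while a block like the following should deploy (the memory value is just an example and has to stay within the documented per-core range):
resources:
  cpu: 4          # 1 or an even number of cores
  memory_gb: 6    # example value; must fall inside the allowed range for 4 cores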
We are facing issues with task queues after recently updating the Endpoints API to version 2 in Google App Engine - Python. The issues with respect to task queues are:
1. The task doesn't get added to the queue at all; it just gets ignored and never executed.
2. The task gets terminated with the error "Process terminated because the backend was stopped."
The most critical error is the first one, where the task is simply ignored and never added to the queue.
Details of the codebase and logs are attached.
It would be great if someone could help us out here.
app.yaml (Server Settings)
#version: 1
runtime: python27
api_version: 1
threadsafe: true
instance_class: F4
automatic_scaling:
  min_idle_instances: 1
  max_idle_instances: 4       # default value
  min_pending_latency: 500ms  # default value
  max_pending_latency: 900ms
  max_concurrent_requests: 50
queue.yaml
queue:
- name: allocateStore
  rate: 500/s
  bucket_size: 500
  max_concurrent_requests: 1000
  retry_parameters:
    task_retry_limit: 0
Adding task to queue:
taskqueue.add(queue_name='allocateStore', url='/tasksStore/allocateStore')
Thanks,
Navin Lr