We have a mobile application backend written in Go running on Google App Engine. Starting on 16 October 2019 we suddenly began seeing random shutdowns. First we saw 5xx errors and increased the minimum instances. Then the error started showing up as "Container called exit(1)", and now it has changed to a terminated process with error exit status 1.
Error log from nginx
This is our current app.yaml setup:
runtime: go111
env: standard
instance_class: F2
automatic_scaling:
  min_instances: 35
  max_instances: 35
  min_idle_instances: 5
  max_idle_instances: 5 # default value
  min_pending_latency: 300ms # default value
  max_pending_latency: automatic
  max_concurrent_requests: 50
  target_cpu_utilization: 0.8
The nginx.conf file:
events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 650;
    keepalive_requests 10000;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logs will appear on the Google Developer's Console when logged to this
    # directory.
    access_log /var/log/app_engine/app.log;
    error_log /var/log/app_engine/app.log;

    gzip on;
    gzip_disable "msie6";

    server {
        # Google App Engine expects the runtime to serve HTTP traffic from
        # port 8080.
        listen 8080;

        root /usr/share/nginx/www;
        index index.html index.htm;
    }
}
The application log before the latest shutdown:
2019-11-18 00:30:41.815 EAT
{"textPayload":"","insertId":"5dd1bc01000c72b428121c93","resource":{"type":"gae_app","labels":{"project_id":"gumzo-backend-223809","version_id":"20191114t210636","module_id":"default","zone":"us6"}},"timestamp":"2019-11-17T21:30:41.815796Z","labels":{"clone_id":"00c61b117ceb00f18aae14afcc55fa0fe6…
{
  insertId: "5dd1bc01000c72b428121c93"
  labels: {
    clone_id: "00c61b117ceb00f18aae14afcc55fa0fe65cfc93726dd8a793451bb6d9c4181ec5fc32"
  }
  logName: "projects/gumzo-backend-223809/logs/stdout"
  receiveTimestamp: "2019-11-17T21:30:42.163592840Z"
  resource: {
    labels: {
      module_id: "default"
      project_id: "gumzo-backend-223809"
      version_id: "20191114t210636"
      zone: "us6"
    }
    type: "gae_app"
  }
  textPayload: ""
  timestamp: "2019-11-17T21:30:41.815796Z"
}
2019-11-18 03:15:22.550 EAT
Container called exit(1).
{
  insertId: "5dd1e29a000864b036cdfc81"
  labels: {
    clone_id: "00c61b117ced9f166f1fc9560966f8c22ba2187c3d884e466556f0d93d1beb9d7be735db63"
  }
  logName: "projects/gumzo-backend-223809/logs/varlog%2Fsystem"
  receiveTimestamp: "2019-11-18T00:15:22.555202950Z"
  resource: {
    labels: {
      module_id: "default"
      project_id: "gumzo-backend-223809"
      version_id: "20191114t210636"
      zone: "us6"
    }
    type: "gae_app"
  }
  severity: "WARNING"
  textPayload: "Container called exit(1)."
  timestamp: "2019-11-18T00:15:22.550012017Z"
}
Does anyone know how we can keep the application running without these repeated shutdowns? Thanks in advance.
Add warmup requests to your app.yaml file:
inbound_services:
- warmup
Also, I suggest changing target_cpu_utilization to 0.6 (which is the default). The scheduler will then start new instances as soon as CPU usage reaches 60%, which should improve your app's responsiveness but will increase cost.
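For completeness, here is a minimal sketch of a matching warmup handler for the go111 standard runtime, assuming the app serves plain net/http; the handler body and port fallback are illustrative, not taken from the question:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// /_ah/warmup is the path App Engine calls for warmup requests once
	// inbound_services: warmup is enabled in app.yaml.
	http.HandleFunc("/_ah/warmup", func(w http.ResponseWriter, r *http.Request) {
		// Perform any expensive initialisation (DB pools, caches, ...) here,
		// then answer 200 so the instance is marked ready to serve.
		w.WriteHeader(http.StatusOK)
		fmt.Fprint(w, "warmed up")
	})

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // fallback for local runs with dev_appserver
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}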
Related
I deployed a Node.js server to App Engine flex.
I am using websockets and expect around 10k concurrent connections at the moment.
At around 2000 websockets connections I get this error:
[alert] 33#33: 4096 worker_connections are not enough
There is no permanent way to edit the nginx configuration in a nodejs runtime.
Isn't 2k connections on one instance quite low?
My yaml config file:
runtime: nodejs
env: flex
service: web

network:
  session_affinity: true

resources:
  cpu: 1
  memory_gb: 3
  disk_size_gb: 10

automatic_scaling:
  min_num_instances: 1
  cpu_utilization:
    target_utilization: 0.6
As per public issue 1 and public issue 2, increasing the number of instances might increase the available worker_connections. Reducing the memory of each instance also lets the app scale up at a lower threshold, which may help keep the number of open connections below 4096, a limit that should be constant across any instance size. You can change the worker_connections value for a single instance by SSHing into the VM.
The nginx config is located in /tmp/nginx/nginx.conf and you can manually change it as follows:
sudo su
vi /tmp/nginx/nginx.conf #Make your changes
docker exec nginx_proxy nginx -s reload
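For clarity, the change inside /tmp/nginx/nginx.conf would typically be to raise the limit in the events block; the value below is only illustrative and, like the rest of this workaround, is lost whenever the instance is recreated:

events {
    worker_connections 10240;  # illustrative value; tune to instance memory
}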
Apart from the above workaround, there is another public issue tracker entry requesting a feature that would let users change nginx.conf settings; feel free to post there if you have any other queries.
Using push queues and the flexible environment on Google App Engine, I get a 403 (Forbidden) error when a task created by the default service (and meant to be executed on the backend service) is executed. The task is successfully pushed to the queue, confirmed locally, but the execution of the task(s) fails with this log:
INFO 2020-12-24 13:42:39,897 module.py:865] default: "POST /tasks/test-handler HTTP/1.1" 403 31
WARNING 2020-12-24 13:42:39,897 taskqueue_stub.py:2158] Task task2 failed to execute. The task has no remaining retries. Failing permanently after 1 retries and 0 seconds
The same happens both locally and in production. However, if a task is created by a cron job, the execution works just fine. I am using dev_appserver.py with Go 1.11 and the following .yaml definitions:
# backend service
service: backend
runtime: go111
instance_class: F2

inbound_services:
- warmup
- default

handlers:
- url: /tasks/.*
  login: admin
  redirect_http_response_code: 301
# default app service
service: default
runtime: go111
instance_class: F2

inbound_services:
- warmup

handlers:
- url: /api/.*
  script: auto
  secure: always
  redirect_http_response_code: 301
The initial API request comes to an /api endpoint, which then successfully pushes a message to the queue using:
t := taskqueue.NewPOSTTask(taskURL, url.Values{
	"testParam": {strconv.Itoa(testParam)},
})
if _, err := taskqueue.Add(ctx, t, "test-queue"); err != nil {
	return ErrPublishingTaskToQueue
}
My queue.yaml definition (in reality I have many more queues):
total_storage_limit: 120M
queue:
- name: test-queue
  rate: 1/s
  bucket_size: 100
  max_concurrent_requests: 10
  retry_parameters:
    task_retry_limit: 1
Any ideas why I'd be getting 403 (Forbidden) statuses on task execution if a task is not created via a cron job? The documentation and existing resources on this matter do not help much :/
I managed to make it work. If anyone struggles with 403 responses on task execution for push queues on Google App Engine, make sure that you set the right target service. In my example above I was missing target: backend in queue.yaml:
total_storage_limit: 120M
queue:
- name: test-queue
  rate: 1/s
  bucket_size: 100
  max_concurrent_requests: 10
  target: backend
  retry_parameters:
    task_retry_limit: 1
The problem was that the tasks were created by the default service, which means they were also dispatched to the default service by default, while they should have hit the backend service. Unfortunately, the default service had the required endpoint deployed as well, so I got a 403 instead of a 404.
More details on the target field:
https://cloud.google.com/appengine/docs/standard/python/config/queueref#target
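As a side note, push-queue requests arrive with headers that only App Engine can set, such as X-AppEngine-QueueName, so the backend handler can reject anything else. Here is a hedged Go sketch of such a handler, with the path and parameter name taken from the example above and everything else (logging, port handling) purely illustrative:

package main

import (
	"log"
	"net/http"
	"os"
)

// testHandler is a hypothetical handler for /tasks/test-handler. App Engine
// strips X-AppEngine-* headers from external requests, so their presence
// indicates the request really came from the push queue (or cron).
func testHandler(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("X-AppEngine-QueueName") == "" {
		http.Error(w, "task queue requests only", http.StatusForbidden)
		return
	}
	testParam := r.FormValue("testParam") // parameter name from the example above
	log.Printf("processing task, testParam=%s", testParam)
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/tasks/test-handler", testHandler)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}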
I'm attempting to use nginx as the reverse proxy to host Docusaurus v2 on Google App Engine.
Google App Engine has HTTPS turned on, and nginx listens on port 8080, so by default all requests arrive over HTTPS and the TLS connections are managed by Google App Engine.
However, I'm having an issue when users perform the following actions:
Reach the landing page.
Go to the documentation (any page).
Refresh the page.
The user then gets redirected to port 8080 instead of the HTTPS site served by Docusaurus.
Without refreshing the page, the user is able to navigate the site successfully. It's when the user hits the refresh button that they get the redirect. Looking at the header information, I see the response pointing them to port 8080, but I'm not sure why that is happening.
Has anyone successfully set up Docusaurus v2 with nginx?
My config for nginx is as follows:
events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logs will appear on the Google Developer's Console when logged to this
    # directory.
    access_log /var/log/app_engine/app.log;
    error_log /var/log/app_engine/app.log;

    gzip on;
    gzip_disable "msie6";

    server {
        # Google App Engine expects the runtime to serve HTTP traffic from
        # port 8080.
        listen 8080;

        root /usr/share/nginx/www;
        index index.html index.htm;

        location / {
            if ($http_x_forwarded_proto = "http") {
                return 301 https://$server_name$request_uri;
            }
        }
    }
}
This is probably because the Docusaurus site links to directories without a trailing slash /, which causes a redirect that, by default, includes the port.
Looking into the Docusaurus build directory you will see that your pages are built as folders containing index.html files. Without the trailing /, the server needs to redirect you to {page}/index.html.
Try calling the URL with a trailing / and no port, which should succeed:
https://{host}/docs/{page}/
To fix the problem, you could change the redirect behaviour so it does not include the port, using the port_in_redirect directive:
server {
    listen 8080;
    port_in_redirect off;
    # More configuration
    ...
}
See the documentation for more details.
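If you want nginx-generated redirects to drop the host and port entirely, a possible alternative (assuming nginx 1.11.8 or newer, where the directive was introduced) is absolute_redirect:

server {
    listen 8080;
    # Emit relative redirects, so nginx adds neither host nor port.
    absolute_redirect off;
    # More configuration
    ...
}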
I have an nginx server deployed on a GoDaddy XLarge Cloud Server with 8 GB RAM and 4 CPUs. My nginx setup proxies requests to a Google App Engine application.
The problem is that nginx serves the static files too slowly, sometimes breaking the connection and leaving the website full of broken images, CSS and JS files. When accessing the GAE app directly, the static files are served really quickly.
Here is my server's nginx.conf file:
user www-data;
worker_processes 1;
worker_rlimit_nofile 20480; # worker_connections * 4
pid /run/nginx.pid;

events {
    use epoll;
    worker_connections 4096;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    send_timeout 10;
    types_hash_max_size 2048;
    # server_tokens off;

    ##
    # Tweaks
    # https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration
    ##
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    ## Proxy Settings
    ##
    proxy_buffering off;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Update
Here is the network graph for the application server:
Here is the network graph for the nginx server (very slow):
What could be causing nginx to be this slow in this configuration?
Is it because GoDaddy Cloud Servers are slow, or is something really wrong with the nginx configuration?
What configuration would make the proxy fast?
Try optimising these:
1. Worker processes
Since you have a 4-core CPU and you are serving quite a lot of files in one request, worker_processes should be at least 4, or the value returned by the command below (see the config sketch after this list):
grep processor /proc/cpuinfo | wc -l
2. Use a CDN for commonly used JS files.
I see some common libraries (jquery-1.10.2.min.js, Angular-1.4.3.js, fontawesome-webfont.woff2, etc.) served directly by GAE, and these files take seconds to load. You should try to serve them through a CDN instead.
3. Run a test with Google PageSpeed Tools; it's very helpful.
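As a hedged illustration of point 1, combined with letting nginx cache the static assets it proxies from GAE (a complement to the CDN suggestion), the relevant parts of nginx.conf might look roughly like this; worker_processes auto needs nginx 1.3.8+, and the cache path, zone name, durations and upstream host are placeholders rather than tested values:

user www-data;
worker_processes auto;          # one worker per CPU core (here: 4)
worker_rlimit_nofile 20480;

events {
    use epoll;
    worker_connections 4096;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Cache static responses proxied from App Engine on local disk so repeat
    # requests never leave the box (path, zone and sizes are examples).
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=gae_static:10m
                     max_size=512m inactive=60m;

    server {
        listen 80;

        location ~* \.(js|css|png|jpg|jpeg|gif|ico|woff2?)$ {
            proxy_buffering on;                          # required for proxy_cache
            proxy_cache gae_static;
            proxy_cache_valid 200 302 10m;
            expires 7d;
            proxy_pass http://your-app-id.appspot.com;   # placeholder upstream
        }

        location / {
            proxy_pass http://your-app-id.appspot.com;   # placeholder upstream
        }
    }
}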
The mouthful of a title says it all:
We've got an Angular frontend with a Django backend providing a REST API that is exposed independently at endpoints example.com/api/v1/*
The Angular app runs in HTML5 mode, and we want hard-links to example.com/foo/bar to bring users into the app at the foo.bar state as if it were a static page rather than an app state (where foo is anything but api).
We're running behind nginx, and our basic strategy in the conf was to define locations at ^~ /scripts, /images, etc. for serving static content directly, as well as a ^~ /api location that gets routed to Django. Below that, we have a location ~ ^/.+$ that matches any path not matched by any of the above and "sends it to Angular", i.e. serves our index page and appends the path to the base URL, allowing our Angular router to handle it from there.
This is our conf in full:
upstream django {
    server 127.0.0.1:8000 fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443;
    server_name example.com;

    client_max_body_size 10M;

    ssl on;
    ssl_certificate /etc/ssl/thawte/example_com.crt;
    ssl_certificate_key /etc/ssl/thawte/example_com.key;
    ssl_verify_depth 2;

    gzip on;
    gzip_types text/plain text/html application/javascript application/json;
    gzip_proxied any;

    index index.html;

    location ^~ /index.html {
        gzip_static on;
        root /www/dist;
    }

    location ^~ /images/ {
        expires max;
        root /www/dist;
    }

    location ^~ /scripts/ {
        expires max;
        gzip_static on;
        root /www/dist;
    }

    location ^~ /favicon.ico {
        expires max;
        root /www/dist;
    }

    location ^~ /api {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://django;
    }
    # Send anything else to Angular
    location ~ ^/.+$ {
        rewrite .* /index.html last;
    }
}
This has worked perfectly for us, but we now need to set it up to work with prerender.io. We've tried doing this several ways, making modifications to the official prerender nginx example, but none have worked: crawlers are getting the same code that users get rather than cached pages.
How can we get this working?
(note: this is new territory for everyone involved here, so if the best way to handle this involves making different choices a few steps back, please suggest them)
So it turns out the config posted above was working the whole time.
I realized this when it finally occurred to me to try putting https://example.com/anything through the crawler debugger (instead of https://example.com, which is all I had been testing previously), and it worked - the crawler was served the cached page as expected.
This was simply because the + quantifier in:
location ~ ^/.+$ {
requires at least one character after the slash, so the location does not match the bare root path /. With an additional
location = / {
    try_files $uri @prerender;
}
my conf is working as expected.
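For reference, the @prerender named location that try_files hands off to follows the official prerender.io nginx example; below is a rough, hedged sketch of it (the token, the bot list and the asset extensions are placeholders and should be taken from the upstream example rather than copied from here):

location @prerender {
    # Placeholder token; use the value from your prerender.io account.
    proxy_set_header X-Prerender-Token YOUR_PRERENDER_TOKEN;

    set $prerender 0;
    if ($http_user_agent ~* "googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit|linkedinbot") {
        set $prerender 1;
    }
    if ($args ~ "_escaped_fragment_") {
        set $prerender 1;
    }
    if ($http_user_agent ~ "Prerender") {
        set $prerender 0;
    }
    # Never send static assets to the prerender service.
    if ($uri ~* "\.(js|css|xml|png|jpg|jpeg|gif|ico|txt|woff2?)$") {
        set $prerender 0;
    }

    # Resolve the prerender hostname at request time rather than at start-up.
    resolver 8.8.8.8;

    if ($prerender = 1) {
        set $prerender "service.prerender.io";
        rewrite .* /$scheme://$host$request_uri? break;
        proxy_pass http://$prerender;
    }
    if ($prerender = 0) {
        rewrite .* /index.html last;
    }
}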
Hopefully the handprint on my forehead saves someone else the same mistake: I had only been putting https://example.com through the crawler debugger, which was not working.
On the upside, I'm thinking I can turn this handprint on my forehead into a nice Halloween costume next weekend....
Still not sure I've gone about this the best way, and welcome alternative suggestions.