App Engine Node flexible instance constantly running

I have several Firebase-Queue NodeJS instances running in my project on App Engine.
The instances seem to be constantly running, producing errors on every GET request:
13:33:36.078
{"method":"GET","latencySeconds":"0.000","referer":"-","host":"-","user":"-","code":"502","remote":"130.211.0.96","agent":"GoogleHC/1.0","path":"/_ah/health","size":"166"}
13:33:36.421
{"method":"GET","latencySeconds":"0.000","referer":"-","host":"-","user":"-","code":"502","remote":"130.211.1.229","agent":"GoogleHC/1.0","path":"/_ah/health","size":"166"}
13:33:37.000
[error] 32#0: *80631 connect() failed (111: Connection refused) while connecting to upstream, client: 130.211.1.11, server: , request: "GET /_ah/health HTTP/1.1", upstream: "http://172.18.0.2:8080/_ah/health", host: "10.128.0.5"
13:33:37.000
[error] 32#0: *80633 connect() failed (111: Connection refused) while connecting to upstream, client: 130.211.3.85, server: , request: "GET /_ah/health HTTP/1.1", upstream: "http://172.18.0.2:8080/_ah/health", host: "10.128.0.5"
My app.yaml file when I deploy my Node apps looks like this:
runtime: nodejs
env: flex
service: album-queue
skip_files:
- ^(node_modules)
handlers:
- url: .*
  script: build/index.js
I think it must have something to do with all these GET requests it's trying to do internally, but I don't know how to stop them / fix it. My bills are racking up fairly quickly, so it would be pretty nice to get it fixed >_<

App Engine Flex doesn't scale to zero like the standard environment does; there will always be at least one instance running (the default minimum is actually 2), and the requests you see are the normal health checks. The "connection refused" 502s, though, suggest your app isn't actually listening on port 8080, which is where Flex expects it to listen (the port is provided in the PORT environment variable), so the health checks can never succeed.
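If cost is the immediate concern, you can at least cap how many instances Flex keeps warm. A minimal app.yaml sketch, assuming automatic scaling (the instance counts are illustrative, not taken from the question):

runtime: nodejs
env: flex
service: album-queue
automatic_scaling:
  # Flex never scales to zero; one instance is the floor
  min_num_instances: 1
  max_num_instances: 2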

Related

discord bot errors (500) every 14 min being hosted on Google App Engine

I have a Discord bot hosted on Google App Engine. It works and runs, and then roughly every 14 minutes the bot goes offline and I see these errors:
Upon further review of the error logs, this is the output:
logMessage: "This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application."
severity: "INFO"
time: "2021-10-03T16:29:18.831860Z"
}
1: {
logMessage: "The warmup request failed. Please check your warmup handler implementation and make sure it's working correctly."
severity: "INFO"
time: "2021-10-03T16:29:18.831862Z"
}
2: {
logMessage: "Process terminated because it failed to respond to the start request with an HTTP status code of 200-299 or 404."
severity: "ERROR"
time: "2021-10-03T16:29:18.831863Z"
My app.yaml file is as follows:
runtime: python38
instance_class: B1
manual_scaling:
  instances: 1
entrypoint: python3 bot.py
I'm quite new to GCP and hosting web services, so I'm rather lost. Any help here is deeply appreciated.
You need to provide a URL handler for /_ah/start (might as well also provide handlers for /_ah/stop and /_ah/warmup too). Those are calls GAE makes to start and stop your app, and they should return an HTTP 200 response. Here is an example, in Flask:
from flask import Flask

app = Flask(__name__)

@app.route('/_ah/start')
@app.route('/_ah/stop')
@app.route('/_ah/warmup')
def warmup():
    # Handle your warmup logic here, e.g. set up a database connection pool
    return '', 200, {}
EDIT: Valid responses are 200–299 or 404
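A quick way to confirm the handlers respond before redeploying is Flask's built-in test client (my_app below is a hypothetical module name for wherever the Flask app lives):

# Hypothetical sanity check using Flask's test client
from my_app import app  # replace with your actual module

with app.test_client() as client:
    for path in ('/_ah/start', '/_ah/stop', '/_ah/warmup'):
        # Each lifecycle handler should return a status GAE accepts (200-299 or 404)
        assert client.get(path).status_code == 200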

Server times out when using HTTPS instead of HTTP?

I'm running a remote react server behind an NGINX load balancer.
When I use HTTP, I have no issues, but when I set HTTPS=True in my react environment, I get a bad gateway error when I try to connect to the webpage.
The NGINX log says the following:
2021/07/06 10:01:07 [error] 10365#0: *62 upstream prematurely closed connection while reading response header from upstream, client: 155.4.218.180, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.50.3.152:80/favicon.ico", host: "remdent.com", referrer: "http://remdent.com/"
And 3 more similar messages saying the upstream connection was prematurely closed.
How do I remedy this?
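The log gives a hint: NGINX is still proxying to the upstream over plain HTTP on port 80, while the app, with HTTPS=True, now expects TLS and closes the connection. One common remedy is to terminate TLS at NGINX and keep the upstream on plain HTTP. A minimal sketch, with assumed certificate paths (the upstream address is taken from the log above):

server {
    listen 443 ssl;
    server_name remdent.com;

    # Assumed certificate paths; substitute your own
    ssl_certificate     /etc/letsencrypt/live/remdent.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/remdent.com/privkey.pem;

    location / {
        # Terminate TLS here and keep the upstream on plain HTTP,
        # so HTTPS can stay off in the react environment
        proxy_pass http://10.50.3.152:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}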

502 Bad Gateway Nginx error while deploying Django Channels app using Daphne on Google App Engine (Flexible)

My Django app was working fine on Google App Engine (Flexible) using gunicorn as the entrypoint in the app.yaml file. I needed to add websockets, so I used Django Channels (with redis). This works beautifully on my local machine (Windows 10).
For deployment, I changed my entrypoint to daphne on port 8080 since that's the default on GAE (using $PORT produces the same effect), so my yaml file now looks like this:
runtime: python
env: flex
runtime_config:
  python_version: 3
entrypoint: daphne -b 127.0.0.1 -p 8080 my_project_name.asgi:application
I've checked my .asgi file and requirements.txt to ensure everything is ok and the packages are the latest versions.
But after deploying it, I get a "502 Bad Gateway Nginx" error.
The Stackdriver logs (nginx.error) in the GCP cloud console show the following:
[error] 33#33: *341 connect() failed (111: Connection refused) while connecting
to upstream, client: 172.xxx.xxx.xxx, server: , request: "GET / HTTP/1.1",
upstream: "172.17.0.1:8080", host: "my_project_name.appspot.com"
I don't recognize those IPs for the upstream server or client, and I don't know what to do next. I've tried numerous things over the last 4 days, including:
1. Using various different ports (8000, 8001, etc.)
2. Adding an nginx.conf file (based on this documentation) in my project directory, which seems to make no difference
3. Adding a line in the runtime_config section of the yaml file that says "nginx_conf_http_include: nginx.conf"
4. Using Unix sockets to start the daphne server in the entrypoint, like "entrypoint: daphne -u /tmp/daphne.sock my_project_name.asgi:application"
5. Deleting the entrypoint altogether after declaring the daphne server in the nginx.conf file
None of this helps. The logs stay the same, the error stays the same. I've read SO questions like this and this but I don't know how to apply them to GAE Flex since I'm not directly operating the VM instance. Please help.
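One detail worth checking (an assumption based on the logs, not a confirmed fix): on GAE Flex, nginx reaches the app container across the Docker bridge (hence the 172.17.0.1 upstream address), so a daphne bound to 127.0.0.1 is unreachable from nginx and every connect() is refused. Binding to all interfaces is the usual remedy:

runtime: python
env: flex
runtime_config:
  python_version: 3
# Bind to 0.0.0.0 so nginx can reach daphne across the container boundary
entrypoint: daphne -b 0.0.0.0 -p 8080 my_project_name.asgi:application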

GCP Extensible Service Proxy encounters error when forwarding request

I have the following setup:
1. Application (Java microservice) deployed on App Engine.
2. Custom domain mapped to hit this service:
myfavmicroservice.project-amazing.dev.corporation.com
3. This endpoint is secured to require authentication by enabling IAP.
4. Configured ESP to intercept, authenticate, and fulfill requests to all
backend microservices (like the one above) with a common gateway endpoint.
5. Microservice is deployed using app.yaml.
6. ESP endpoint is configured using api.yaml (OpenAPI API Surface document)
This is the tutorial I am following:
https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine-standard
app.yaml to deploy the microservice:
runtime: java11
entrypoint: java -jar tar/worker.jar
instance_class: F2
service: myfavmicroservice
handlers:
- url: /.*
  script: this field is required, but ignored
The ESP api.yaml describing the microservice API surface looks like this:
swagger: "2.0"
info:
title: "My fav micro Service"
description: "Serve my favorite microservice content"
version: "1.0.0"
# This field will be replaced by the deploy_api.sh script.
host: microservice-system-gateway-5c4s43dedq-ue.a.run.app
schemes:
- https
produces:
- application/json
paths:
/myfavmicroservice:
get:
summary: Greet the user
operationId: hello
description: "Get helloworld mainpage"
x-google-backend:
address: https://myfavmicroservice.project amazing.dev.corporation.com
jwt_audience: .....
responses:
'200':
description: "Success."
schema:
type: string
'400':
description: "The IATA code is invalid or missing."
schema:
type: string
But the problem is that whenever I make a request to the endpoint like this:
GET
https://microservice-system-gateway-5c4s43dedq-ue.a.run.app/myfavmicroservice
I always get a 500 gateway error. Upon inspecting the ESP logs, I primarily find:
1. SSL Handshake Error with Error no 40
2. upstream server temporarily disabled while SSL handshaking to upstream
3. request: "GET /metadatasvc-hello HTTP/1.1", upstream: "https://[3461:f4f0:5678:a13::63]:443/myfavmicroservice
So the ESP is intercepting my request correctly, and apparently forwarding it in the correct format as well, as evidenced by #3. But I am getting an SSL error.
Why am I getting this error?
OK, figured out the issue. For the benefit of the Stack Overflow community, I am posting the solution here.
I found that if you use a custom domain mapped to App Engine like this in the OpenAPI configuration (the one you deploy to ESP), the SSL handshake fails:
x-google-backend:
  address: https://my-microservice.my-custom-domain.company.com
However, if you use the default URL assigned by App Engine when the microservice starts up, like this, everything is fine:
x-google-backend:
  address: https://my-microservice.appspot.com
So I am still trying to figure out how to use custom domain mappings in the ESP OpenAPI configuration; for now, SSL proxying inside ESP does not work with them.

Not able to connect to accounts.google.com anymore

Since last week we have been getting errors from accounts.google.com.
How can we fix the problem? We use the GAE standard environment with Python 2.7 and requests 2.18+.
We have been getting this one for about 5 days:
HTTPSConnectionPool(host='accounts.google.com',
port=443):
Max retries exceeded with url: /o/oauth2/token
(Caused by NewConnectionError('urllib3.connection.VerifiedHTTPSConnection object
at 0xfaa13790: Failed to establish a new connection: [Errno 110]
connection timed out',))
I see you are using urllib3. Make sure you have ssl enabled in your app.yaml:
libraries:
- name: ssl
  version: latest
GAE removed version 2.7 of the ssl library recently.
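Separately, since the error is "Max retries exceeded" on a connection timeout, retrying with backoff at the requests level can soften transient failures; a minimal sketch, assuming requests 2.18+ as stated in the question:

import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

# Retry transient connection failures with exponential backoff
session = requests.Session()
retries = Retry(total=5, connect=5, backoff_factor=0.5)
session.mount('https://', HTTPAdapter(max_retries=retries))

# Use the session wherever the token request is made, e.g.:
# session.post('https://accounts.google.com/o/oauth2/token', data=token_payload, timeout=30)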
