Bad Gateway 502 - Google App Engine - [CRITICAL] WORKER TIMEOUT

I was working on deploying my web application via Google App Engine when I encountered a 502 Bad Gateway error (nginx). After running gcloud app logs read, I tracked the error down to:
2020-05-12 00:15:59 default[20200511t163633] "GET /input/summary" 200
2020-05-12 00:16:38 default[20200511t163633] [2020-05-12 00:16:38 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:9)
2020-05-12 00:16:38 default[20200511t163633] [2020-05-12 00:16:38 +0000] [9] [INFO] Worker exiting (pid: 9)
2020-05-12 00:16:38 default[20200511t163633] [2020-05-12 00:16:38 +0000] [15] [INFO] Booting worker with pid: 15
2020-05-12 00:16:38 default[20200511t163633] "POST /input/summary" 502
For those wondering, my app.yaml looks like this:
runtime: custom
env: flex
runtime_config:
  python_version: 3
resources:
  cpu: 4
  memory_gb: 16
  disk_size_gb: 25
readiness_check:
  app_start_timeout_sec: 900
My Dockerfile looks like this:
FROM gcr.io/google-appengine/python
RUN virtualenv /env -p python3.7
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt
ADD . /app
RUN apt-get update \
&& apt-get install tesseract-ocr -y
EXPOSE 8080
ENTRYPOINT ["gunicorn", "--bind=0.0.0.0:8080", "main:app"]
I am running the app through:
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8080)
Everything seems to work fine on localhost, but the problems arise when I deploy to Google App Engine. Does anyone know what may be the root of the issue? Thanks in advance!

Is it failing to deploy? Or does the deploy succeed, but the server fails to run?
If it's failing to deploy, you may be hitting the 10-minute timeout that deploy jobs have to finish within. You can increase this number by setting your local gcloud config:
gcloud config set app/cloud_build_timeout 1200s
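If the deploy succeeds and the 502s happen at runtime instead, the [CRITICAL] WORKER TIMEOUT line points at gunicorn killing a worker that stayed busy past its timeout, which defaults to 30 seconds. A sketch of raising it in your Dockerfile's ENTRYPOINT; the 120-second value is an assumption you should tune to your slowest request:
# Assumption: the POST /input/summary work (e.g. the tesseract OCR step)
# can run longer than gunicorn's default 30 s worker timeout, so raise it.
ENTRYPOINT ["gunicorn", "--bind=0.0.0.0:8080", "--timeout", "120", "main:app"]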

Related

Deploying to App Engine flex custom runtime with Cloud Build causes a large number of builds to run

I am kind of new to Cloud Build, so I am a bit confused about what is happening.
First, this is my file structure:
cloudbuild.yaml
backend/
    Dockerfile
    app.yaml
I have an application that I dockerized and deployed to App Engine flex in a custom runtime.
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:5.0-buster-slim AS build
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
ENV ASPNETCORE_URLS=http://+:80;
WORKDIR /app
COPY --from=build /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapp.dll"]
And this is my App Engine flex file:
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
service: backend
network:
  name: my-network
  subnetwork_name: my-network-subnet
  instance_tag: "backend"
  forwarded_ports:
I successfully deployed this app on App Engine flex using this command:
gcloud app deploy --appyaml=app.yaml
Then I added a cloudbuild.yaml file, following this Google doc:
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args: ['-c', 'gcloud config set app/cloud_build_timeout 2000 && gcloud app deploy --appyaml=backend/app.yaml']
As you can see, I didn't add the timeout attribute to the cloudbuild.yaml, because it gave me this error each time I tried to submit the build:
Error Response: [13] Error parsing cloudbuild.yaml for runtime custom: Argument is not an object: "2000s"
After removing the timeout attribute, Cloud Build started behaving strangely: it kept creating build jobs on its own until it reached over 20 builds.
I had to stop these builds manually because they exceeded the 120-minute free quota limit.
Can someone tell me whether my cloudbuild.yaml is causing the issue, or whether it's a problem with Google Cloud?
So the problem was writing the Cloud Build config as a YAML file; instead I rewrote it as a JSON file. I am not entirely sure why the cloudbuild.yaml file was giving me errors, but that was my solution. (Most likely, gcloud app deploy for a custom runtime picks up any file named cloudbuild.yaml in the deploy directory as the app's own build configuration, so the deploy running inside Cloud Build kept resubmitting itself; a JSON file is not picked up that way.)
{
  "steps": [
    {
      "name": "gcr.io/google.com/cloudsdktool/cloud-sdk",
      "entrypoint": "bash",
      "args": [
        "-c",
        "gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --appyaml=app.yaml"
      ]
    }
  ],
  "timeout": "1600s"
}
Also, the cloudbuild file, the app.yaml, and the Dockerfile must all be in the root of the branch.
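For reference, the YAML form that Cloud Build itself accepts puts timeout at the top level as a quoted duration string; a sketch assuming the same single step (though, as noted above, gcloud app deploy may still try to parse any file named cloudbuild.yaml in the deploy directory):
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args: ['-c', 'gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --appyaml=app.yaml']
timeout: '1600s'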

Port XXXX is in use by another program. Either identify and stop that program, or start the server with a different port

I want to run a simple Python Flask hello world app. I deployed it to App Engine, but the logs say the port is in use, and it looks like the app is running in multiple instances/threads/clones concurrently.
This is my main.py:
from flask import Flask

app = Flask(__name__)

@app.route('/hello')
def helloIndex():
    print("Hello world log console")
    return 'Hello World from Python Flask!'

app.run(host='0.0.0.0', port=4444)
This is my app.yaml
runtime: python38
env: standard
instance_class: B2
handlers:
- url: /
  script: auto
- url: .*
  script: auto
manual_scaling:
  instances: 1
This is my requirements.txt
gunicorn==20.1.0
flask==2.2.2
And these are the logs I got:
* Serving Flask app 'main'
* Debug mode: off
Address already in use
Port 4444 is in use by another program. Either identify and stop that program, or start the server with a different port.
[2022-08-10 15:57:28 +0000] [1058] [INFO] Worker exiting (pid: 1058)
[2022-08-10 15:57:29 +0000] [1059] [INFO] Booting worker with pid: 1059
[2022-08-10 15:57:29 +0000] [1060] [INFO] Booting worker with pid: 1060
[2022-08-10 15:57:29 +0000] [1061] [INFO] Booting worker with pid: 1061
It says that port 4444 is in use. Initially I tried 5000 (Flask's default port), but it said that was in use too. I also tried removing port=4444, but then it said port 5000 was in use, so I guess Flask assigns port=5000 by default. I suspect the error is caused by GAE running multiple instances. If not, then please help me solve this issue.
App Engine apps should listen on port 8080 and not on any other port.
So you may need to set it like this:
app.run(host='0.0.0.0', port=8080)
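Note also that app.run() at module level executes every time gunicorn imports main, so each worker tries to start a second server next to the one App Engine already runs; that is what produces "Address already in use". A minimal sketch of main.py with the usual guard (the port only matters when you run it locally):
from flask import Flask

app = Flask(__name__)

@app.route('/hello')
def helloIndex():
    return 'Hello World from Python Flask!'

# Only start Flask's dev server when executed directly (python main.py).
# On App Engine, gunicorn imports `app` and binds the serving port itself,
# so nothing here may listen at import time.
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)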
Close the editor and then reopen it. Next time you stop the process, use Ctrl+C if you're in the terminal.
I figured it out. Delete the old file that created the web app from your terminal and/or folder. In the terminal this is done by:
rm file_name
Then try all over again with a fresh file and it should be okay.

How to deploy a React app to Elastic Beanstalk on the Docker platform using Travis CI?

Environment health has transitioned from Ok to Severe. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
I am deploying a React app on AWS using the Docker platform. I am getting HEALTH-Severe issues when I deploy my app. I have also added custom TCP inbound rules to the EC2 instance (source: anywhere).
I am using the AWS free tier. The following is my Dockerfile:
FROM node:alpine as builder
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
My .travis.yml file:
language: generic
sudo: required
services:
  - docker
before_install:
  - docker build -t username/docker-react -f Dockerfile.dev .
script:
  - docker run -e CI=true username/docker-react npm run test
deploy:
  provider: elasticbeanstalk
  region: us-east-2
  app: "docker-react"
  env: "DockerReact-env"
  bucket_name: "my bucket-name"
  bucket_path: "docker-react"
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
When I open my app, I get a 502 Bad Gateway error.
I had the same problem. After reading some of the documentation here, I figured maybe docker-compose.yml is actually picked up before anything else. Deleting my docker-compose.yml (which I was only using locally) solved the issue for me.
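If you'd rather keep a compose file in the repo, the Docker platform uses docker-compose.yml whenever it is present, so it must publish port 80 for the platform's reverse proxy; a minimal sketch, assuming a single service built from the same Dockerfile (the service name web is arbitrary):
# Hypothetical EB-compatible docker-compose.yml: EB uses this file when
# present, so the container has to publish port 80.
version: "3.8"
services:
  web:
    build: .
    ports:
      - "80:80"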

Gunicorn Command Line Error in Google App Engine Standard Environment

I'm getting an endless stream of the following errors after deploying a Flask/Python 3/Postgres app to the App Engine standard environment. The warning about the psycopg2 package is a concern, but not what's causing the app to fail to run. Rather, it's the invalid command-line arguments to gunicorn, which are supplied by GAE, not by me. Has anyone been able to deploy a Python 3 Flask app that uses Postgres to the standard environment successfully?
Here's the log file output:
[2018-12-11 02:51:37 +0000] [3738] [INFO] Booting worker with pid: 3738
2018-12-11 02:51:37 default[20181210t140744] /env/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
2018-12-11 02:51:37 default[20181210t140744] """)
2018-12-11 02:51:38 default[20181210t211942] usage: gunicorn [-h] [--debug] [--args]
2018-12-11 02:51:38 default[20181210t211942] gunicorn: error: unrecognized arguments: main:app --workers 1 -c /config/gunicorn.py
2018-12-11 02:51:38 default[20181210t211942] [2018-12-11 02:51:38 +0000] [882] [INFO] Worker exiting (pid: 882)
By default, if an entrypoint is not defined in app.yaml, App Engine will look for an app called app in main.py. If you look at the official code sample on GitHub, it addresses this in main.py by declaring:
app = Flask(__name__)
Alternatively, this can be configured by adding an entrypoint to app.yaml pointing to another file. For example, if you declare app in a file called prod:
entrypoint: gunicorn -b :$PORT prod.app
You'll find additional details about the entrypoint configuration here.
From the comment and answer above, it looked like the problem had to be in my app. I confirmed that by running gunicorn locally on a bare-bones app without difficulty, but it failed when running the real app.
One culprit turned out to be the use of argparse, which I was using so I could add a "debug" argument to the command line when working locally. Moving that code into the if __name__ == '__main__': section made the app run fine with gunicorn locally.
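For anyone hitting the same thing, here is a minimal sketch of that rearrangement (the --debug flag is my local-only option):
import argparse

from flask import Flask

app = Flask(__name__)  # gunicorn imports this; nothing parses argv at import time

if __name__ == '__main__':
    # Parse local-only flags here so gunicorn's own command line is never touched.
    parser = argparse.ArgumentParser()
    parser.add_argument('--debug', action='store_true')
    args = parser.parse_args()
    app.run(debug=args.debug)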
But it still fails when deployed to GAE:
2018-12-12 20:09:16 default[20181212t094625] [2018-12-12 20:09:16 +0000] [8145] [INFO] Booting worker with pid: 8145
2018-12-12 20:09:16 default[20181212t094625] /env/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
2018-12-12 20:09:16 default[20181212t094625] """)
2018-12-12 20:09:16 default[20181212t094625] [2018-12-12 20:09:16 +0000] [8286] [INFO] Booting worker with pid: 8286
2018-12-12 20:09:16 default[20181212t094625] /env/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
2018-12-12 20:09:16 default[20181212t094625] """)
2018-12-12 20:09:19 default[20181212t094625] usage: gunicorn [-h] [--debug] [--args]
2018-12-12 20:09:19 default[20181212t094625] gunicorn: error: unrecognized arguments: -b :8081 main:app
2018-12-12 20:09:19 default[20181212t094625] [2018-12-12 20:09:19 +0000] [8145] [INFO] Worker exiting (pid: 8145)
Here’s app.yaml (minus the environment variable for SendGrid):
runtime: python37
env: standard
#threadsafe: true
entrypoint: gunicorn --workers 2 --bind :5000 main:app
#runtime_config:
#  python_version: 3
# This beta setting is necessary for the db hostname parameter to be able to handle a URI in the
# form “/cloudsql/...” where ... is the instance given here:
beta_settings:
  cloud_sql_instances: provost-access-148820:us-east1:cuny-courses
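One more thing worth checking in that app.yaml: App Engine standard tells your app which port to serve on through the $PORT environment variable, so binding a hard-coded :5000 leaves the app listening where no traffic arrives. The documented pattern is:
# Bind the port App Engine injects rather than a fixed one.
entrypoint: gunicorn --workers 2 --bind :$PORT main:app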

Couldn't connect to the Docker daemon due to an SSL problem

I'm trying to deploy a Managed VM (Python) on Google App Engine / Compute Engine with the command:
gcloud --verbosity debug preview app deploy ./app.yaml --set-default
During deployment the VM instance is created, but it exits with an error (here is a paste of the last few lines of the listing):
DEBUG: Display disabled.
Copying certificates for secure access. You may be prompted to create an SSH keypair.
DEBUG: Loaded Command Group: ['gcloud', 'compute', 'copy_files']
DEBUG: Detected docker environment variables: DOCKER_HOST=tcp://104.197.50.238:2376, DOCKER_CERT_PATH=../../../../../tmp/tmpPbKmOs, DOCKER_TLS_VERIFY=True
INFO: Starting new HTTPS connection (1): 104.197.50.238
DEBUG: Failed to connect to Docker daemon due to an SSL problem: [Errno 1] _ssl.c:523: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
DEBUG: (gcloud.preview.app.deploy) Couldn't connect to the Docker daemon due to an SSL problem.
Traceback (most recent call last):
  File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/calliope/cli.py", line 591, in Execute
    result = args.cmd_func(cli=self, args=args)
  File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/calliope/backend.py", line 1191, in Run
    resources = command_instance.Run(args)
  File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/appengine/app_commands/deploy.py", line 208, in Run
    implicit_remote_build)
  File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/deploy_command_util.py", line 137, in BuildAndPushDockerImages
    with docker_util.DockerHost(cli, version_id, remote) as docker_client:
  File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/images/docker_util.py", line 215, in __enter__
    return containers.NewDockerClient(local=(not self._remote), **kwargs)
  File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/docker/containers.py", line 313, in NewDockerClient
    'Couldn\'t connect to the Docker daemon due to an SSL problem.' + msg)
DockerDaemonConnectionError: Couldn't connect to the Docker daemon due to an SSL problem.
ERROR: (gcloud.preview.app.deploy) Couldn't connect to the Docker daemon due to an SSL problem.
Apparently there is a problem with SSL, but I have no idea how to solve it, and I'm quite desperate at the moment :)
I have:
Docker version 1.8.2, build 0a8c2e3
Boot2Docker-cli version: v1.8.0 Git commit: 9a26066
Google Cloud SDK 0.9.79
app 2015.09.23
app-engine-java 1.9.26
app-engine-python 1.9.26
bq 2.0.18
bq-nix 2.0.18
core 2015.09.23
core-nix 2015.09.03
gcloud 2015.09.21
gsutil 4.15
gsutil-nix 4.14
preview 2015.09.21
OpenSuse 13.2
OpenSSL 1.0.1k-fips 8 Jan 2015
I would very much appreciate help of any kind.
EDIT:
app.yaml
module: default
runtime: python27
api_version: 1
threadsafe: yes
vm: true
resources:
  cpu: .5
  memory_gb: 1.3
manual_scaling:
  instances: 1
handlers:
- url: .*
  script: main.app
Are you using Homebrew Python on OS X? If so, there's an existing bug for OpenSSL and Docker here.
The easiest way around this is to temporarily use a virtualenv with the system Python:
pip install virtualenv
virtualenv ~/system-python-env
source ~/system-python-env/bin/activate
gcloud preview app deploy ...
You can get a short-lived SSL credential with "gcloud docker --authorize-only". Then immediately do your "gcloud preview app deploy":
gcloud docker --authorize-only
gcloud preview app deploy app.yaml --promote
