Why does an App Engine flex deployment give no logs or files? - google-app-engine

I've switched my deployment config (app.yaml) from the standard environment to the flexible environment with a Dockerfile:
runtime: custom
env: flex
service: my-backend-service
When GitLab CI executes gcloud app deploy app.yaml, I see a successful build and deployment log. A new service and version are created in the App Engine dashboard, but there are no logs/files in Debug mode and the instance is not responding. The debugger gives the error "The debugger could not find a debug target for the application".
Final part of the CI/CD log:
You can stream logs from the command line by running:
$ gcloud app logs tail -s my-backend-service
To view your application in the web browser run:
$ gcloud app browse -s my-backend-service --project=some-project
Job succeeded
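The CI log above only confirms that the deploy finished; it says nothing about whether an instance actually started. A quick way to check from the command line (a sketch using standard gcloud commands):
# list flex instances for the service; an empty list means the container never came up (sketch)
gcloud app instances list --service=my-backend-service
# then stream the runtime logs
gcloud app logs tail -s my-backend-service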
Dockerfile (works well with a Cloud Run deployment):
FROM python:3.10-slim-bullseye
ENV APP_HOME /app
ENV PYTHONPATH=${PYTHONPATH}:${PWD}
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
RUN mkdir /app
COPY pyproject.toml /app
WORKDIR $APP_HOME
# set env variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . ./
RUN pip3 install poetry
RUN poetry config virtualenvs.create false
RUN poetry install --no-dev
CMD exec gunicorn main:app -c gunicorn_config.py
There were no such issues with the standard environment app.yaml (Python runtime). What am I missing?
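One detail worth checking for custom flex runtimes: App Engine routes requests to port 8080 inside the container, so if the gunicorn_config.py (not shown above) binds to a different port, the build and deploy can succeed while the instance never responds. A minimal sketch of an explicit bind, assuming the config file doesn't already set one:
# bind to the port App Engine flex expects (sketch; assumes gunicorn_config.py sets no conflicting bind)
CMD exec gunicorn main:app -c gunicorn_config.py --bind 0.0.0.0:8080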

Related

Docker image wouldn't start React app in Google Compute Engine

Backstory: I have copied the files to a Google Compute Engine VM and am trying to run docker-compose up --build, which in theory should start 3 different containers: rasa-ui, rasa-api and rasa-actions. However, it does not start the one with the React app in it (rasa-ui). I figured the Dockerfile might be the problem.
I tried running only docker image for react app (rasa-ui) and it doesn't start the app at all.
On my local machine code runs and starts the app fine.
Commands I used (and that worked) for docker images locally were:
docker build -t rasa-ui -f Dockerfile .
docker run -p 3000:3000 rasa-ui
When I use the same commands in the VM, it builds the image but doesn't run it (and shows no errors when I run it).
Dockerfile:
FROM node:alpine
WORKDIR /app
# copy manifests first so the npm install layer is cached
COPY package.json package-lock.json ./
RUN npm i
# then copy the rest of the source
COPY ./ ./
CMD ["npm", "run", "start"]
Any suggestions?
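One possibility worth ruling out (an assumption, not confirmed by the post): react-scripts' dev server exits immediately when it has no TTY attached, which makes a container look like it built fine but "doesn't run", with no error. A sketch of running it interactively:
# build with an explicit context, then keep a TTY attached so the CRA dev server doesn't exit (sketch)
docker build -t rasa-ui -f Dockerfile .
docker run -it -p 3000:3000 rasa-ui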

React App as a Django App in a Docker Container - connection refused when trying to access APIs on localhost:8000 URLs

Hope you might have some guidance for me on this.
Right now I have a React app that is part of a Django app (for ease of passing auth login tokens), now containerised in a single Dockerfile. Everything works as intended when run locally as a Docker instance, but the image is having issues once deployed, even though the web pages are visible when it runs on the server.
Specifically, when the Docker image is accessed, the home page renders as expected, but then a number of fetch requests which usually go to localhost:8000/<path>/<to>/<url> return the following error:
GET http://localhost:8000/<path>/<to>/<url> net::ERR_CONNECTION_REFUSED
On a colleague's suggestion, I have tried changing localhost:8000 to the public IP address of the server the Docker image is hosted on (e.g. 172.XX.XX.XXX:8000), but when I rebuild the React app these changes do not persist, and it defaults back to localhost. Here are my questions:
Is this something I change from within the React application itself? Do I need to manually assign an IP address? (This seems unlikely to me.)
Or is this something to do with either the Django port settings, or the Dockerfile itself?
Here is the Dockerfile:
FROM ubuntu:18.04
# ...
RUN apt-get update && apt-get install -y \
software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update && apt-get install -y \
python3.7 \
python3-pip
RUN python3.7 -m pip install pip
RUN apt-get update && apt-get install -y \
python3-distutils \
python3-setuptools
RUN python3.7 -m pip install --upgrade pip
# don't buffer Python stdout/stderr
ENV PYTHONUNBUFFERED 1
# copy requirements file from local machine to container
COPY ./requirement.txt /requirement.txt
# install dependencies
RUN pip install -r /requirement.txt
# create app folder in container
RUN mkdir /app
# set default working directory
WORKDIR /app
# copy local app folder to container folder
COPY ./app /app
CMD ["python", "test.py"]
Multiple technologies, multiple failure points - thanks in advance!
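One observation that follows from the error itself (a sketch under that assumption): the fetch calls run in the visitor's browser, so localhost:8000 points at the visitor's machine, not the server, and the container's port must also be published for requests to the host to reach Django:
# publish Django's port so browser requests to <server-ip>:8000 can reach the container
# (image name "django-react-app" is hypothetical)
docker run -p 8000:8000 django-react-app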

Issue dockerizing a React + Node + nginx app

I'm trying to build an image for my React app. It's a pretty simple create-react-app setup. I'm aware that there are many questions regarding this topic, but the distinction here is that I am deploying to Heroku, and because Heroku does not support EXPOSE, the setup is a little different.
I've managed to get my frontend up and running, but I'm having issues with the Express portion. Here is my Dockerfile:
FROM node:14.1-alpine AS builder
WORKDIR /opt/web
COPY package.json ./
RUN npm install
ENV PATH="./node_modules/.bin:$PATH"
COPY . ./
RUN npm run build
FROM nginx:1.17-alpine
RUN apk --no-cache add curl
RUN curl -L https://github.com/a8m/envsubst/releases/download/v1.1.0/envsubst-`uname -s`-`uname -m` -o envsubst && \
chmod +x envsubst && \
mv envsubst /usr/local/bin
COPY ./nginx/nginx.conf /etc/nginx/nginx.template
CMD ["/bin/sh", "-c", "envsubst < /etc/nginx/nginx.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"]
COPY --from=builder /opt/web/build /usr/share/nginx/html
It's pretty straightforward, but I'm not sure how to serve my server.js file up as an API.
I've tried many online tutorials to get nginx up and running with React and Express, but it either doesn't work with my current setup (locally) or fails to build on Heroku.
I've created a reproducible repo here. Not sure where to go from here.
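For context on the envsubst step in the Dockerfile above: Heroku assigns the listening port at runtime through $PORT, so the nginx config has to be templated rather than hardcoded. A minimal nginx.template sketch (contents assumed, since the repo's actual template isn't shown):
# /etc/nginx/nginx.template - sketch; Heroku supplies PORT at runtime
server {
    listen ${PORT};
    root   /usr/share/nginx/html;
    index  index.html;
}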

gcloud app deploy stuck at Updating service [default]...failed. Application startup error ...Did you mean to run dotnet SDK commands?

I am trying to deploy a .NET Core app on Google App Engine. The remote build output gets stuck at
Updating service [default]...failed. ERROR: (gcloud.app.deploy) Error Response: [13] Timed out when starting VMs. It's possible that the application code is unhealthy. (0/2 ready, 2 still deploying).
after 5 minutes or so. This is a first-time deployment.
While checking the logs, I found one line stating:
Application startup error ...Did you mean to run dotnet SDK commands? Please install dotnet SDK from ...link.
Earlier, the Dockerfile had the following configuration:
FROM microsoft/dotnet:1.1.0-runtime
COPY . /app
WORKDIR /app
EXPOSE 8080/tcp
ENV ASPNETCORE_URLS http://*:8080
ENTRYPOINT ["dotnet", "actualdllname.dll"]
Based on this, I changed the Dockerfile to add the following lines:
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install .NET Core
ENV DOTNET_VERSION 1.1.0
ENV DOTNET_DOWNLOAD_URL https://dotnetcli.blob.core.windows.net/dotnet/release/1.1.0/Binaries/$DOTNET_VERSION/dotnet-debian-x64.$DOTNET_VERSION.tar.gz
RUN curl -SL $DOTNET_DOWNLOAD_URL --output dotnet.tar.gz \
&& mkdir -p /usr/share/dotnet \
&& tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
&& rm dotnet.tar.gz \
&& ln -sfn /usr/share/dotnet/dotnet /usr/bin/dotnet
These additions didn't give any error during gcloud app deploy, but I am still not able to resolve the startup error.
Easy deployment steps are provided in the "Deploy an ASP.NET Core app to App Engine" tutorial. It includes a recommended procedure for installing the dotnet SDK.
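For background, the "Did you mean to run dotnet SDK commands?" message typically appears when the dotnet runtime cannot find the .dll named in the ENTRYPOINT, e.g. when the source tree was deployed instead of the publish output. A hedged sketch of a publish-then-deploy flow (the output path is an assumption for netcoreapp1.1):
# produce the publish output locally
dotnet publish -c Release
# put app.yaml next to the published dll and deploy from there (path assumed)
gcloud app deploy bin/Release/netcoreapp1.1/publish/app.yaml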

Reusable docker image for AngularJS

We have an AngularJS application. We wrote a Dockerfile for it so it's reusable on every system. The Dockerfile isn't best practice, and the setup may look odd to some (build and hosting in the same file), but it was created just to run our AngularJS app locally on every developer's PC.
Dockerfile:
FROM nginx:1.10
... Steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
.. steps to move dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends, so we have a backend in DEV, a backend in UAT, and so on. That means there are different URLs which we need to use in /config/xx.json:
{
...
"service_base": "https://backend.test.xxx/",
...
}
We don't want to change that URL, rebuild the image and restart it every time. We also don't want to hardcode a list of URLs (dev, uat, prod, ...) to choose from. We want to perform our gulp build with an environment variable instead of a hardcoded URL, so we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Does anyone have experience with this kind of issue? We'd need an env variable in our JSON, build with it, and add the URL later on, if that's possible.
EDIT: A better option is to use build args.
Instead of passing the URL to the docker run command, you can use Docker build args. It is better for build-related commands to run during docker build than during docker run.
In your Dockerfile,
ARG URL
And then run:
docker build --build-arg URL=<my-url> .
See this Stack Overflow question for details.
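A sketch of how the build arg could drive the gulp build inside the Dockerfile (the sed target and config path are taken from the workaround below; everything else is assumed):
# bake the backend URL in at image build time (sketch)
ARG URL
RUN sed -i 's#my-url#'"$URL"'#' configs/config.json && gulp build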
This was my 'solution'. I know it isn't the best Docker approach, but it was a big help for our developers.
My Dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying in the whole app and installing npm inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# copy the right folders (created by gulp build) to /usr/share/nginx/html
...
# start nginx in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build happens when my container starts. Not the best way, of course, but it suits the developers' needs: an easy local frontend.
The sed command performs a replace on the config file, which contains something like:
{
"service_base": "my-url"
}
So my-url will be replaced by the content of the environment variable that I define in my docker run command. Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect with their own backend URL.
