I am developing a Python web server on Google App Engine.
I want to debug it in VS Code, so I want to get the Dockerfile for the latest Python 3 version of the gcr.io/google-appengine/python image.
Where do I get it?
Here is the Dockerfile you can use:
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN virtualenv /env
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Copy the application's requirements.txt and run pip to install all
# dependencies into the virtualenv.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add the application source code.
ADD . /app
WORKDIR /app
# Run a WSGI server to serve the application. gunicorn must be declared as
# a dependency in requirements.txt.
ENTRYPOINT ["gunicorn", "-b", ":8080", "server:app"]
You can also look at the GitHub repository.
This is the GitHub repo of the Python runtime for App Engine Flex; in that repository you can find the Dockerfile and all the scripts used to create a Docker container similar to the one used on App Engine Flex:
# The Google App Engine base image is debian (jessie) with ca-certificates
# installed.
# Source: https://github.com/GoogleCloudPlatform/debian-docker
FROM ${OS_BASE_IMAGE}
ADD resources /resources
ADD scripts /scripts
# Install Python, pip, and C dev libraries necessary to compile the most popular
# Python libraries.
RUN /scripts/install-apt-packages.sh
# Setup locale. This prevents Python 3 IO encoding issues.
ENV LANG C.UTF-8
# Make stdout/stderr unbuffered. This prevents delay between output and cloud
# logging collection.
ENV PYTHONUNBUFFERED 1
RUN wget https://storage.googleapis.com/python-interpreters/latest/interpreter-3.4.tar.gz && \
wget https://storage.googleapis.com/python-interpreters/latest/interpreter-3.5.tar.gz && \
wget https://storage.googleapis.com/python-interpreters/latest/interpreter-3.6.tar.gz && \
wget https://storage.googleapis.com/python-interpreters/latest/interpreter-3.7.tar.gz && \
tar -xzf interpreter-3.4.tar.gz && \
tar -xzf interpreter-3.5.tar.gz && \
tar -xzf interpreter-3.6.tar.gz && \
tar -xzf interpreter-3.7.tar.gz && \
rm interpreter-*.tar.gz
# Add Google-built interpreters to the path
ENV PATH /opt/python3.7/bin:/opt/python3.6/bin:/opt/python3.5/bin:/opt/python3.4/bin:$PATH
RUN update-alternatives --install /usr/local/bin/python3 python3 /opt/python3.7/bin/python3.7 50 && \
update-alternatives --install /usr/local/bin/pip3 pip3 /opt/python3.7/bin/pip3.7 50
# Upgrade pip (debian package version tends to run a few versions behind) and
# install virtualenv system-wide.
RUN /usr/bin/pip install --upgrade -r /resources/requirements.txt && \
/opt/python3.4/bin/pip3.4 install --upgrade -r /resources/requirements.txt && \
rm -f /opt/python3.4/bin/pip /opt/python3.4/bin/pip3 && \
/opt/python3.5/bin/pip3.5 install --upgrade -r /resources/requirements.txt && \
rm -f /opt/python3.5/bin/pip /opt/python3.5/bin/pip3 && \
/opt/python3.6/bin/pip3.6 install --upgrade -r /resources/requirements.txt && \
rm -f /opt/python3.6/bin/pip /opt/python3.6/bin/pip3 && \
/opt/python3.7/bin/pip3.7 install --upgrade -r /resources/requirements.txt && \
rm -f /opt/python3.7/bin/pip /opt/python3.7/bin/pip3 && \
/usr/bin/pip install --upgrade -r /resources/requirements-virtualenv.txt
# Setup the app working directory
RUN ln -s /home/vmagent/app /app
WORKDIR /app
# Port 8080 is the port used by Google App Engine for serving HTTP traffic.
EXPOSE 8080
ENV PORT 8080
# The user's Dockerfile must specify an entrypoint with ENTRYPOINT or CMD.
CMD []
Related
I have created a Laravel project with React scaffolding, i.e. React as the UI.
Before this project, I used the two as separate apps, for example App-frontend (React) and App-backend (Laravel).
Now I have created only one app, and it will run in one container. The container I used before for App-backend is this one:
FROM php:8.0.10-fpm-alpine
LABEL Maintainer="me" \
Description="Docker container with Nginx 1.15 & PHP-FPM 7.4 based on Alpine Linux. Nginx Confg Ready for Laravel/Lumen"
ENV BUILD_DEPS \
cmake \
autoconf \
g++ \
gcc \
make \
pcre-dev \
gmp-dev \
zip \
libzip-dev \
imagemagick-dev
RUN apk update && apk add --no-cache --virtual .build-deps $BUILD_DEPS $PHPIZE_DEPS
RUN set -ex && apk --no-cache add sudo
RUN apk --no-cache add nginx supervisor curl postgresql-dev libuv-dev openldap-dev ssmtp libxml2-dev
RUN docker-php-ext-install mysqli pdo_mysql pgsql pdo_pgsql
RUN apk add --no-cache libpng libpng-dev && docker-php-ext-install gd && apk del libpng-dev
RUN docker-php-ext-configure zip
RUN docker-php-ext-install zip
# Install locales
ENV MUSL_LOCALE_DEPS musl-dev gettext-dev libintl
ENV MUSL_LOCPATH /usr/share/i18n/locales/musl
RUN apk add --no-cache \
$MUSL_LOCALE_DEPS \
&& wget https://gitlab.com/rilian-la-te/musl-locales/-/archive/master/musl-locales-master.zip \
&& unzip musl-locales-master.zip \
&& cd musl-locales-master \
&& cmake -DLOCALE_PROFILE=OFF -D CMAKE_INSTALL_PREFIX:PATH=/usr . && make && make install \
&& cd .. && rm -r musl-locales-master
# Add Envsubst
ENV TZ="Europe/Berlin" \
RUNTIME_DEPS="libintl"
RUN apk add --update $RUNTIME_DEPS \
&& apk add --virtual build_deps gettext \
&& cp /usr/bin/envsubst /usr/local/bin/envsubst \
# Add default timezone
&& apk add tzdata \
&& cp /usr/share/zoneinfo/${TZ} /etc/localtime \
&& echo "${TZ}" > /etc/timezone
# Configure nginx
COPY .deploy/nginx.conf /nginx.conf
# Configure PHP-FPM
COPY .deploy/fpm-pool.conf /usr/local/etc/php-fpm.d/zzz_custom.conf
COPY .deploy/php.ini /usr/local/etc/php/conf.d/laravel_custom.ini
# Configure supervisord
COPY .deploy/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Configure ssmtp config
COPY .deploy/ssmtp.conf /etc/ssmtp/ssmtp.conf
# Configure Cron Job for Scheduler
#COPY .deploy/scheduler /var/www/html/scheduler
#RUN chmod +x /var/www/html/scheduler
COPY .deploy/crontab /etc/crontabs/root
RUN chmod 0644 /etc/crontabs/root
# Add application
WORKDIR /var/www/html
COPY src/. /var/www/html/
RUN chown -R www-data:www-data /var/www/html/storage/
RUN chown -R www-data:www-data /var/www/html/bootstrap/cache
RUN sudo chmod -R 777 /var/www/html/storage/
RUN sudo chmod -R 777 /var/www/html/bootstrap/cache
# RUN php artisan cache:clear
# RUN php artisan config:cache && php artisan view:cache
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
Now I need help adding the Docker commands that build the React assets and copy them into the src directory where the app runs. Can you help me please?
I have a project in Docker. When I recreate the Docker app, Docker keeps deleting the old database on localhost. I did not find any solution on the internet. Does anyone know how to solve this problem?
Thanks for responding.
Here is my Dockerfile:
FROM php:7.2-apache
ENV DOCKER=1
ENV MASTER_URL_DOCKERFILE='http://website/'
RUN docker-php-ext-install mysqli pdo_mysql
RUN apt-get update -y && apt-get install -y \
libpng-dev \
libwebp-dev \
libjpeg62-turbo-dev \
libpng-dev libxpm-dev \
libfreetype6-dev
RUN docker-php-ext-configure gd \
--with-gd \
--with-webp-dir \
--with-jpeg-dir \
--with-png-dir \
--with-zlib-dir \
--with-xpm-dir \
--with-freetype-dir
RUN docker-php-ext-install gd
RUN docker-php-ext-install calendar && docker-php-ext-configure calendar
RUN a2enmod rewrite
RUN ln -s /etc/apache2/mods-available/expires.load /etc/apache2/mods-enabled/
COPY core /var/www/core/
COPY chainway/src /var/www/html/
COPY chainway/docker/app/ /usr/local/bin/
RUN service apache2 restart
And here is how I run the containers:
#!/bin/bash
DIR=$(dirname $0)
cd $DIR
wget -V
wget -O "$DIR/docker/db/dump.sql" "http://website/senddatabasetolocalhost.php?auth=authkey"
docker-compose stop
docker-compose build
docker-compose up -d
You will have to use volumes in docker-compose.yml, like this:
volumes:
- $PWD/my_sql:/var/lib/mysql
You can store your DB data using volumes.
Add this to the mysql section of your docker-compose.yml file:
mysql:
volumes:
- db_data:/var/lib/mysql
And at the end of the file:
volumes:
db_data:
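Putting the pieces together, a minimal docker-compose.yml using a named volume could look roughly like this sketch (the image tag, credentials, and database name are placeholders; keep your own values):
version: "3.8"
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder, use your own secret
      MYSQL_DATABASE: app            # placeholder database name
    volumes:
      - db_data:/var/lib/mysql       # named volume keeps data across rebuilds/recreates

volumes:
  db_data:
The named volume lives outside the container filesystem, so docker-compose stop/build/up (as in your script) no longer wipes the data.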
I have a React app with 3 versions: for development, testing, and production.
They only differ in the URL that is used for the login (a different WordPress site).
How do I make the React app agnostic/configurable at runtime
and avoid the need to generate 3 builds?
Just use
window.location.host // you need to prepend http/https yourself
to get the URL.
Many other parameters can be read with URLSearchParams; see URLSearchParams.
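For example, a small sketch (the /wp-login.php path is just an assumed WordPress login endpoint):
// derive the base URL at runtime from wherever the app is served
const baseUrl = `${window.location.protocol}//${window.location.host}`;
const loginUrl = `${baseUrl}/wp-login.php`; // assumed WordPress login path

// optional: allow an override via the query string, e.g. ?api=https://test.example.com
const params = new URLSearchParams(window.location.search);
const apiBase = params.get('api') || baseUrl;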
For those that use a Docker container, it can be done with environment variables.
My situation:
I made my react app in Visual Studio with template 'ASP.NET Core with React.js and Redux'. It is placed in a docker container which is deployed in Kubernetes.
It took me almost half a day but I managed to do it :)
First I found this post and especially the comment from Patrick Lee Scott is interesting:
https://levelup.gitconnected.com/handling-multiple-environments-in-react-with-docker-543762989783
Comment from Patrick Lee Scott:
https://patrickleet.medium.com/another-option-build-with-dummy-values-like-replace-api-url-and-then-use-an-entrypoint-sh-db053a799167
The comment is a good start but doesn't show the complete solution.
First I tested the script (and tried to figure out what it was doing).
During testing I found out that 'cat /proc/self/environ' was not working, so I replaced it with xargs -0 -L1 -a /proc/self/environ.
Second, I had trouble getting the script to run via ENTRYPOINT; I figured out that the script needs to begin with: #!/bin/bash
Third, I added the original ENTRYPOINT command at the bottom of the script.
Here is the modified script of Patrick Lee Scott:
appEntryPoint.sh:
#!/bin/bash
echo "Inserting env variables"
for file in ./ClientApp/build/static/js/*.js
do
echo "env sub for $file"
list="$(xargs -0 -L1 -a /proc/self/environ | awk -F= '{print $1}')"
echo "$list" | while read -r line; do
export REPLACE="REPLACE_$line"
export VALUE=$(eval "echo \"\$$line\"")
#echo "replacing ${REPLACE} with ${VALUE} in $file"
sed -i "s~${REPLACE}~${VALUE}~g" $file
unset REPLACE
unset VALUE
done
done
dotnet My.DotNet.ReactApp.dll
To make the answer complete, I will list here my Dockerfile:
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app/ClientApp
EXPOSE 80
EXPOSE 443
RUN echo "Acquire::Check-Valid-Until \"false\";\nAcquire::Check-Date \"false\";" | cat > /etc/apt/apt.conf.d/10no--check-valid-until && apt-get update -yq \
&& apt-get install -y curl \
&& apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx \
&& curl -sL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -y nodejs
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN echo "Acquire::Check-Valid-Until \"false\";\nAcquire::Check-Date \"false\";" | cat > /etc/apt/apt.conf.d/10no--check-valid-until && apt-get update -yq \
&& apt-get install -y curl \
&& apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx \
&& curl -sL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -y nodejs
WORKDIR /app/ClientApp
COPY /My.DotNet.ReactApp/ClientApp/package*.json ./
RUN npm install --silent
COPY /My.DotNet.ReactApp/ClientApp ./
RUN npm run build
WORKDIR /app/publish/ClientApp
RUN cp -r /app/ClientApp/build .
WORKDIR /app
COPY /My.DotNet.ReactApp ./
RUN dotnet restore "My.DotNet.ReactApp.csproj"
RUN dotnet build "My.DotNet.ReactApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "My.DotNet.ReactApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
COPY ./appEntryPoint.sh ./
RUN chmod +x appEntryPoint.sh
ENTRYPOINT ["/app/appEntryPoint.sh"]
What you now have to do is put placeholders in your .env file:
.env.production
REACT_APP_API_ENDPOINT=REPLACE_REACT_APP_API_ENDPOINT
REACT_APP_API_SOME_OTHER_URL=REPLACE_REACT_APP_API_SOME_OTHER_URL
Now you can set the real values for the React variables as environment variables on the container; the script reads the container's environment variables and replaces every value that begins with "REPLACE_".
So in this case we need to set these environment variables on the container used for production:
REACT_APP_API_ENDPOINT=https://prod.endpoint.com
REACT_APP_API_SOME_OTHER_URL=https://prod.url.com
And for the test environment:
REACT_APP_API_ENDPOINT=https://test.endpoint.com
REACT_APP_API_SOME_OTHER_URL=https://test.url.com
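How you set them depends on where the container runs; with plain Docker it is just -e flags (a sketch, the image name is a placeholder):
docker run -d \
  -e REACT_APP_API_ENDPOINT=https://test.endpoint.com \
  -e REACT_APP_API_SOME_OTHER_URL=https://test.url.com \
  my-registry/my-dotnet-react-app:latest
In Kubernetes the same values go into the container's env: section of the Deployment.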
Use a .env file. Check out this link for installation. At the end you will have this kind of structure in your app folder.
I'm able to expose 2 services locally on my computer with the following command: docker run -d --name #containerName -e "PORT=8080" -p 8080:8080 -p 5000:5000 #imageName.
On port 5000, I expose my backend using a flask-restful api and on port 8080, I expose my frontend using nginx to serve my react.js application.
When I deploy that on Heroku platform I have 2 problems :
It seems Heroku tries to bind Nginx to port 80, but the PORT env var is different; log output:
Starting process with command /bin/sh -c gunicorn\ --bind\ 0.0.0.0:5000\ app:app\ --daemon\ \&\&\ sed\ -i\ -e\ \'s/\34352/\'\"\34352\"\'/g\'\ /etc/nginx/conf.d/default.conf\ \&\&\ nginx\ -g\ \'daemon\ off\;\'
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
[emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
How can I write the -p 8080:8080 -p 5000:5000 part inside the Dockerfile, or work around it, since I can't specify the docker run [...] command on Heroku?
I'm new to Docker and Nginx, so I would be very grateful if you know a better way to achieve my goal. Thanks in advance.
# ------------------------------------------------------------------------------
# Temporary image for react.js app using a multi-stage build
# ref : https://docs.docker.com/develop/develop-images/multistage-build/
# ------------------------------------------------------------------------------
FROM node:latest as build-react
# create a shared folder and define it as current dir
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# copy the files required for node packages installation
COPY ./react-client/package.json ./
COPY ./react-client/yarn.lock ./
# install dependencies, copy code and build bundles
RUN yarn install
COPY ./react-client .
RUN yarn build
# ------------------------------------------------------------------------------
# Production image based on ubuntu:latest with nginx & python3
# ------------------------------------------------------------------------------
FROM ubuntu:latest as prod-react
WORKDIR /usr/src/app
# update, upgrade and install packages
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN apt-get upgrade -y
RUN apt-get install -y nginx curl python3 python3-distutils python3-apt
# install pip
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3 get-pip.py
# copy flask-api requirements file and install modules
COPY ./flask-api/requirements.txt ./
RUN pip install -r requirements.txt
RUN pip install gunicorn
# copy flask code
COPY ./flask-api/app.py .
# copy built image and config onto nginx directory
COPY --from=build-react /usr/src/app/build /usr/share/nginx/html
COPY ./conf.nginx /etc/nginx/conf.d/default.conf
# ------------------------------------------------------------------------------
# Serve flask-api with gunicorn and react-client with nginx
# Ports :
# - 5000 is used for flask-api
# - 8080 is used by nginx to serve react-client
# You can change them but you'll have to change :
# - for flask-api : conf.nginx, axios calls (5000 -> #newApiPort)
# - for react-client : CORS origins (8080 -> #newClientPort)
#
# To build and run this :
# docker build -t #imageName .
# docker run -d --name #containerName -e "PORT=8080" -p 8080:8080 -p 5000:5000 #imageName
# ------------------------------------------------------------------------------
CMD gunicorn --bind 0.0.0.0:5000 app:app --daemon && \
sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && \
nginx -g 'daemon off;'
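For reference, the conf.nginx that the sed in the CMD patches contains a literal $PORT placeholder; it looks roughly like this sketch (the root and SPA fallback are assumptions based on the COPY destinations above):
server {
    listen $PORT;                   # literal $PORT, replaced by the sed in CMD
    root /usr/share/nginx/html;     # where the react build was copied
    index index.html;

    location / {
        try_files $uri /index.html; # SPA fallback for client-side routing
    }
}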
I am building my Spring Boot application using Maven and Google Cloud Build, but somehow I get different deployment results depending on whether I run locally using mvn appengine:run or deploy using Cloud Build.
If I run locally using mvn appengine:run, I can access my controller as expected. Using Cloud Build, I get a 404 error.
My cloudbuild.yaml is the following:
steps:
- name: 'gcr.io/cloud-builders/mvn'
args: ['package']
- name: 'gcr.io/cloud-builders/gcloud'
args: ['app', 'deploy', 'target/myapp/WEB-INF/appengine-web.xml']
How would you recommend configuring a cloud build in order to build and deploy a spring boot application on google app engine?
After additional digging, the issue seems to be related to some kind of error being returned:
javax.servlet.ServletContext log: 2 Spring WebApplicationInitializers detected on classpath
I do not get this message in the stack trace when deploying from my local machine using mvn appengine:deploy.
My question still remains, how do I go about creating a cloudbuild.yaml that can invoke mvn appengine:deploy ?
In order to build a Spring Boot project and deploy it to Google App Engine using Google Cloud Build, I ended up having to first build a "builder" image using the Dockerfile below and reference this image when performing my actual application builds.
Dockerfile
FROM debian:stretch
#
# Google Cloud SDK installation
# https://cloud.google.com/sdk/docs/quickstart-debian-ubuntu
RUN apt-get update -y && \
apt-get install \
apt-utils \
dialog \
gnupg \
lsb-release \
curl -y && \
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && \
apt-get install google-cloud-sdk -y
# Install all available components
RUN apt-get install google-cloud-sdk \
google-cloud-sdk \
google-cloud-sdk-app-engine-go \
google-cloud-sdk-app-engine-java \
google-cloud-sdk-app-engine-python \
google-cloud-sdk-app-engine-python-extras \
google-cloud-sdk-bigtable-emulator \
google-cloud-sdk-cbt \
google-cloud-sdk-datastore-emulator \
google-cloud-sdk-cloud-build-local \
google-cloud-sdk-datalab \
kubectl \
google-cloud-sdk-pubsub-emulator -y
#
# OpenJDK installation
# https://linuxhint.com/install-openjdk-8-on-debian-9-stretch/
RUN apt-get install openjdk-8-jdk -y
#
# MAVEN installation
# https://github.com/carlossg/docker-maven/blob/f581ea002e5d067deb6213c00a4d217297cad469/jdk-8/Dockerfile
ARG MAVEN_VERSION=3.5.4
ARG USER_HOME_DIR="/root"
ARG SHA=ce50b1c91364cb77efe3776f756a6d92b76d9038b0a0782f7d53acf1e997a14d
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
&& echo "${SHA} /tmp/apache-maven.tar.gz" | sha256sum -c - \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
WORKDIR /workspace
cloudbuild.yaml
# In this directory, run the following command to build this builder.
# $ gcloud builds submit . --config=cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '--tag=gcr.io/$PROJECT_ID/gcloud-maven', '.']
# Simple sanity check: invoke java to confirm that it was installed correctly.
- name: 'gcr.io/$PROJECT_ID/gcloud-maven'
args: ['java', '-version']
# Simple sanity check: invoke gcloud to confirm that it was installed correctly.
- name: 'gcr.io/$PROJECT_ID/gcloud-maven'
args: ['gcloud', 'projects', 'list']
# Simple sanity check: invoke maven to confirm that it was installed correctly.
- name: 'gcr.io/$PROJECT_ID/gcloud-maven'
args: ['mvn', '--version']
images: ['gcr.io/$PROJECT_ID/gcloud-maven']
timeout: 1200s
My Spring Boot project's cloudbuild.yaml now references this image:
steps:
- name: 'gcr.io/$PROJECT_ID/gcloud-maven'
args: ['mvn', 'appengine:deploy']
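As with the builder image above, this build can then be submitted from the project directory (or wired to a build trigger):
gcloud builds submit . --config=cloudbuild.yaml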
I will try to put this Docker image on Docker Hub and GitHub for others to find. I would also appreciate help from people more familiar with Docker and Linux in improving this image to reduce its size (for example, using Alpine or Debian Stretch Slim instead of Debian). In the meantime, I hope this helps others like me.