WeasyPrint Dockerfile for GAE - google-app-engine

I am trying to install WeasyPrint on GAE. I know we can install external libraries via a Dockerfile by changing the runtime from python to custom in app.yaml, but I am having trouble creating the Dockerfile for the WeasyPrint libraries.

Here is a simple example that I wrote following these instructions. I have tested it, and the deployment to GAE was successful for me:
Dockerfile
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN virtualenv /env -p python3.7
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Install platform's packages required for WeasyPrint
RUN apt-get update && apt-get -y install build-essential python3-dev python3-pip \
python3-setuptools python3-wheel python3-cffi \
libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
# Copy the application's requirements.txt and run pip to install all
# dependencies into the virtualenv.
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
# Add the application source code.
ADD . /app
# Run a WSGI server to serve the application. gunicorn must be declared as
# a dependency in requirements.txt.
CMD gunicorn -b :$PORT main:app
app.yaml
runtime: custom
env: flex
requirements.txt
gunicorn==19.1.1
Flask==1.0.2
WeasyPrint>=0.34
main.py
from flask import Flask
from weasyprint import *

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Success!'

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080)
Note that, as the WeasyPrint documentation mentions, platform packages (such as cairo, Pango and GDK-PixBuf) must be installed separately. They are installed with the following command, which I added in the Dockerfile:
RUN apt-get update && apt-get -y install build-essential python3-dev python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
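Once those native libraries are in the image, a quick way to confirm WeasyPrint works end-to-end is a route that renders a PDF. The following is just a sketch using WeasyPrint's HTML entry point; the /pdf path is made up for illustration:
from flask import Flask, Response
from weasyprint import HTML

app = Flask(__name__)

@app.route('/pdf')
def pdf():
    # Render an in-memory HTML string to PDF bytes; this exercises the
    # cairo and Pango libraries, so it fails fast if one is missing.
    pdf_bytes = HTML(string='<h1>Hello from WeasyPrint</h1>').write_pdf()
    return Response(pdf_bytes, mimetype='application/pdf')
With runtime: custom and env: flex, the service then deploys with gcloud app deploy from the directory containing app.yaml.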

Related

docker compose not starting 2 applications in 2 different locations on same server

I have UAT and production setups on the same server with identical code; only the backend service is exposed on different ports.
I use docker-compose up --build to start the application,
but I am unable to run the UAT and production applications at the same time.
First I go to the uat folder and run docker-compose up --build; the application starts and is visible in docker ps.
But when I then go to the prod folder and issue docker-compose up --build, it starts that service but the UAT service goes down automatically.
Ideally, since the code is in two different places, it should behave as two different, independent applications, but that is not happening.
docker-compose.yml
version: "2.2"
services:
  webbackend:
    build: .
    network_mode: "host"
    container_name: atms-webapp-backend-test
    restart: always
    volumes:
      - .:/code
    command:
      "python3.7 app.py --port=5002"
      # "gunicorn --workers=2 --bind=0.0.0.0:9000 utootuweb.wsgi:application"
Dockerfile
FROM ubuntu:18.04
WORKDIR /code
ADD . /code
RUN apt-get update && apt-get upgrade -y && apt-get clean
ENV NODE_VERSION=16.16.0
RUN apt-get install -y curl
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
RUN . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION}
RUN . "$NVM_DIR/nvm.sh" && nvm use v${NODE_VERSION}
RUN . "$NVM_DIR/nvm.sh" && nvm alias default v${NODE_VERSION}
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin/:${PATH}"
RUN node --version
RUN npm --version
WORKDIR /code/frontend
RUN npm install
RUN npm run build
# RUN npm install -g serve
# RUN serve -s build
# RUN yarn
# RUN yarn build
# # Upgrade installed packages
# RUN apt-get update && apt-get upgrade -y && apt-get clean
# # Python package management and basic dependencies
#
# RUN apt-get install -y curl python3.7 python3.7-dev python3.7-distutils curl
# # node install
# RUN curl -sL https://deb.nodesource.com/setup_18.x | bash -
# RUN apt-get install -y nodejs
# # npm install
# RUN apt install -y npm
# RUN npm install -g npm@latest
#
# Python package management and basic dependencies
RUN apt-get install -y curl python3.7 python3.7-dev python3.7-distutils
# Register the version in alternatives
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.7 1
# Set python 3 as the default python
RUN update-alternatives --set python /usr/bin/python3.7
RUN apt-get install -y build-essential python3.7 python3.7-dev python3-pip python3.7-venv libgl1
RUN apt-get install -y git
# update pip
RUN python3.7 -m pip install pip --upgrade
RUN python3.7 -m pip install wheel
# build frontend
# WORKDIR /code/frontend
#
# RUN npm install
# RUN npm run build
# install backend code
WORKDIR /code
RUN python3.7 -m pip install -r requirements.txt
WORKDIR /code/backend
CMD ["python3" ,"app.py"]
I have the same code base in the two folders, uat and prod; only the port is different.
docker --version
Docker version 20.10.10, build b485636
docker-compose --version
docker-compose version 1.29.2, build 5becea4c
I also changed the container name in docker-compose.yml to a unique name for uat and prod, just to make sure the container name is not the issue.
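For what it's worth, docker-compose namespaces containers, networks and volumes by project name, which defaults to the name of the folder the compose file lives in; passing an explicit project name makes the separation between the two stacks unambiguous. A sketch for illustration, using the folder names from the question:
cd uat && docker-compose -p uat up -d --build
cd ../prod && docker-compose -p prod up -d --build
Note that a fixed container_name opts a service out of that namespacing, and with network_mode: "host" both containers share the host's network stack, so the two backends also have to bind different ports there.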

Docker container with .net and react app not loading client

To begin, I have created a .NET Core 6 project with React.js from Visual Studio 2022.
I have added Docker to my project as well.
I have been following this tutorial:
"Quickstart: Use Docker with a React Single-page App in Visual Studio"
https://learn.microsoft.com/en-us/visualstudio/containers/container-tools-react?view=vs-2022
I got to the point where I am able to build my image; however, when I start my Docker container it runs only my backend API, and the React app/client is not loading at all.
Any ideas?
Here is what my Dockerfile looks like:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx
RUN curl -sL https://deb.nodesource.com/setup_lts.x | bash -
RUN apt-get install -y nodejs
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx
RUN curl -sL https://deb.nodesource.com/setup_lts.x | bash -
RUN apt-get install -y nodejs
WORKDIR /src
COPY ["admin_tool_api_ui/admin_tool_api_ui.csproj", "admin_tool_api_ui/"]
RUN dotnet restore "admin_tool_api_ui/admin_tool_api_ui.csproj"
COPY . .
WORKDIR "/src/admin_tool_api_ui"
RUN dotnet build "admin_tool_api_ui.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "admin_tool_api_ui.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "admin_tool_api_ui.dll"]
First of all, I feel your pain. Thanks to Microsoft documentation I am now a person with a high level of patience; I have learned to accept things as they are in life.
Second, I think you are missing the front-end build, which is this piece:
FROM node:16 AS build-web
COPY ./admin_tool_api_ui/ClientApp/package.json /admin_tool_api_ui/ClientApp/package.json
COPY ./admin_tool_api_ui/ClientApp/package-lock.json /admin_tool_api_ui/ClientApp/package-lock.json
WORKDIR /admin_tool_api_ui/ClientApp
RUN npm ci
COPY ./admin_tool_api_ui/ClientApp/ /admin_tool_api_ui/ClientApp
RUN npm run build
So your final Dockerfile can be something like this:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx
RUN curl -sL https://deb.nodesource.com/setup_lts.x | bash -
RUN apt-get install -y nodejs
WORKDIR /src
COPY ["admin_tool_api_ui/admin_tool_api_ui.csproj", "admin_tool_api_ui/"]
RUN dotnet restore "admin_tool_api_ui/admin_tool_api_ui.csproj"
COPY . .
WORKDIR "/src/admin_tool_api_ui"
RUN dotnet build "admin_tool_api_ui.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "admin_tool_api_ui.csproj" -c Release -o /app/publish
FROM node:16 AS build-web
COPY ./admin_tool_api_ui/ClientApp/package.json /admin_tool_api_ui/ClientApp/package.json
COPY ./admin_tool_api_ui/ClientApp/package-lock.json /admin_tool_api_ui/ClientApp/package-lock.json
WORKDIR /admin_tool_api_ui/ClientApp
RUN npm ci
COPY ./admin_tool_api_ui/ClientApp/ /admin_tool_api_ui/ClientApp
RUN npm run build
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
COPY --from=build-web /admin_tool_api_ui/ClientApp/build ./ClientApp/build
ENTRYPOINT ["dotnet", "admin_tool_api_ui.dll"]
I just set the property "Copy to Output directory" = "Copy always" for all React-related files in the ClientApp directory hierarchy via the "Properties" dialog.
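As a quick sanity check outside Visual Studio (the image tag and host port here are made up for illustration), building and running the final stage directly shows whether the React assets made it into the image:
docker build -t admin_tool_api_ui .
docker run --rm -p 8080:80 admin_tool_api_ui
# Then browse to http://localhost:8080 and check that the client loads.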

Dockerize a React-Django project where the frontend is served from Django

I am serving a React app from within Django, and I am trying to deploy it using docker-compose up -d --build.
My project directory is as follows:
root
├──project (django)
| ├──frontend/ # react project is here
| ├──project/
| ├──static/
| ├──Dockerfile //Dockerfile for backend image
| ├──entrypoint.sh
| ├──manage.py
| ├──requirements.txt
└──docker-compose.yaml
Here is my current Dockerfile:
# pull the official base image
FROM python:3.8.12-bullseye
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN apt-get update
COPY /requirements.txt /usr/src/app
RUN pip install -r requirements.txt
# set work directory
WORKDIR ~/usr/src/app
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
RUN npm run dev
# set work directory
WORKDIR /usr/src/app
# copy project
COPY . /usr/src/app/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
The error I get:
=> ERROR [12/18] COPY package.json ./                                  0.0s
=> ERROR [13/18] COPY package-lock.json ./                             0.0s
------
 > [12/18] COPY package.json ./:
------
------
 > [13/18] COPY package-lock.json ./:
------
failed to compute cache key: "/package-lock.json" not found: not found
I edited your Dockerfile; see if this works:
# pull the official base image
FROM python:3.8.12-bullseye
RUN apt-get update
COPY . ./usr/src/app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install python dependencies
RUN pip install -r requirements.txt
WORKDIR /usr/src/app/frontend
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
RUN npm run dev
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
The issue is that package.json and package-lock.json are not present in the directory where you run docker build, but (probably) in your frontend subdirectory.
Changing those two lines to:
COPY frontend/package.json ./
COPY frontend/package-lock.json ./
should work. But better yet, since you're copying everything anyway, you can move that to the top:
# pull the official base image
FROM python:3.8.12-bullseye
# set work directory
WORKDIR /usr/src/app
# copy project
COPY . .
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN apt-get update
RUN apt-get update \
&& apt-get install -y curl \
&& curl --silent --location https://deb.nodesource.com/setup_12.x | bash - \
&& apt-get install -y nodejs \
&& npm install --silent \
&& npm install react-scripts#3.4.1 -g --silent
RUN pip install -r requirements.txt
# set work directory
WORKDIR /usr/src/app/frontend
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
RUN npm run dev
# set work directory
WORKDIR /usr/src/app
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
I'm not sure what your needs are, but for a production environment I would suggest separating the frontend and the Django application into different containers. Backend applications have very different scaling and hardware needs than frontend applications. You can still package them into one application using docker-compose, for example.
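A minimal sketch of that split, assuming the directory layout from the question (the service names, host ports, and the separate frontend Dockerfile are hypothetical):
version: "3.8"
services:
  backend:
    build: ./project              # the Django Dockerfile shown above
    ports:
      - "8000:8000"
  frontend:
    build: ./project/frontend     # a separate Dockerfile that serves the React build
    ports:
      - "3000:80"
Each service can then be rebuilt and scaled independently.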

Docker : Unable to find local grunt

I am trying to dockerize an old AngularJS app, but I am stuck on a problem. I have the impression that when Docker mounts my volume it overwrites everything that was done previously.
My image builds successfully, but when I run it I get this error: Fatal error: Unable to find local grunt.
My goal is to be able to build my app while keeping hot reload.
Dockerfile :
FROM mhart/alpine-node:6 as builder
# Confirm versions
RUN node -v
RUN npm -v
# Add
COPY package.json /usr/src/app/package.json
COPY bower.json /usr/src/app/bower.json
COPY Gruntfile.js /usr/src/app/Gruntfile.js
COPY .bowerrc /usr/src/app/.bowerrc
# Define app as root dir
WORKDIR /usr/src/app
# add app
#COPY . /usr/src/app
# Install sass & compass
RUN apk update && \
apk upgrade
RUN apk add --update \
ruby \
ruby-irb \
ruby-dev \
ruby-rdoc \
libffi-dev \
build-base
RUN gem install \
sass \
compass
# Install Perl
RUN apk add perl
# Install Git (required for angular dep)
RUN apk add git
# Install Yarn
RUN npm install -g yarn
RUN yarn -v
# Install dependencies
RUN npm install bower -g \
&& npm install -g grunt-cli \
&& yarn add grunt-contrib-imagemin \
&& yarn
# Build
RUN bower install --allow-root
EXPOSE 9000 35729
CMD [ "grunt", "--force" ,"server" ]
docker-compose.yml:
version: '3.7'
services:
  app-dev:
    container_name: app-dev
    build:
      context: .
      dockerfile: Dockerfile-dev
    volumes:
      - .:/usr/src/app/
    ports:
      - '9000:9000'
      - '35729:35729'
    restart: always
It doesn't look like you're running npm install in the Docker image, which means that none of the packages from your package.json are going to be present.
You should, in general, exclude node_modules from the Docker build context with a .dockerignore, as you're going to want to rebuild your dependencies inside the container. A dependency may, for instance, need to compile a native module; node-sass, for example, has a compiled component.
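A minimal .dockerignore for this layout might look like this (the entries are assumptions based on the files shown in the question):
node_modules
bower_components
.git
Separately, since the compose file bind-mounts .:/usr/src/app/ over the image's working directory at runtime, anything installed there during the build is hidden; a common pattern is to add an anonymous volume entry such as /usr/src/app/node_modules under volumes so the image's installed dependencies stay visible to grunt.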

Google Managed VM deployment error

Hello,
I have been trying to deploy an app on a Google Managed VM based on the Node.js runtime. However, this is a little confusing to me, as I still get this error while deploying:
ERROR: (gcloud.preview.app.deploy) Not enough VMs ready (0/1 ready, 1 still deploying). Deployed Version: 280815s.386747973874670759
We were able to deploy it one week ago, so this error does not occur every time (it has been recurring for 2 days now). I guess there is something wrong with our configuration, maybe in our app.yaml or our Dockerfile, but I still can't figure out what is going on. Furthermore, the VM is created but is inaccessible, and the SSH connection gets lost. I was wondering whether this was coming from Google. Do you have any idea?
Here is the app.yaml:
module: default
runtime: custom
api_version: 1
vm: true
# manual_scaling:
#   instances: 1
# [START scaling]
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 5
  cool_down_period_sec: 60
  cpu_utilization:
    target_utilization: 0.5
# [END scaling]
health_check:
  enable_health_check: False
  check_interval_sec: 20
  timeout_sec: 4
  unhealthy_threshold: 2
  healthy_threshold: 2
  restart_threshold: 60
handlers:
- url: /.*
  script: server.js
Here is the Dockerfile:
FROM google/debian:wheezy
MAINTAINER mchouan@gpartner.eu
# Fetch and install Node.js
RUN apt-get update -y && apt-get install --no-install-recommends -y -q curl python build-essential git ca-certificates
RUN mkdir /nodejs && curl http://nodejs.org/dist/v0.12.0/node-v0.12.0-linux-x64.tar.gz | tar xvzf - -C /nodejs --strip-components=1
# Add Node.js installation to PATH
ENV PATH $PATH:/nodejs/bin
# Install redis
RUN apt-get install -y redis-server
# Install supervisor
RUN apt-get install -y supervisor
# Add Node.js installation to PATH, and set
# the current working directory to /app
# so future commands in this Dockerfile are easier to write
WORKDIR /app
ENV NODE_ENV development
ADD package.json /app/
# RUN npm install
# Adds app source
ADD . /app
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
Here is the command we run in order to deploy the app:
gcloud preview app deploy $DIR/app.yaml --version="$version" --force
Thank you for helping.
You don't seem to be exposing any ports in the container. For Managed VMs, you should expose port 8080; try adding:
EXPOSE 8080
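For context, the EXPOSE instruction usually sits near the end of the Dockerfile, just before the CMD; a sketch against the Dockerfile above:
# Managed VMs route requests and health checks to port 8080
EXPOSE 8080
CMD ["/usr/bin/supervisord"]
The Node process that supervisord starts must also actually listen on 8080, since that is the port the Managed VM frontend forwards traffic to.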
