GitHub Actions CI/CD build of a SQL Server container fails but works locally - sql-server

I started working on GitHub Actions to build some sample containers instead of doing my builds locally, to rely less on my machines. It seems, though, that building and pushing from my Mac works, while doing it through an Action fails.
When testing locally, I made sure I had an updated Dockerfile so that everything is built correctly. Part of me thinks the failure is related to the OS the GitHub Action builds on, but I am trying to understand it better.
The error I get is:
Error: failed to solve: executor failed running [/bin/sh -c ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" && /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:${DBNAME} /tu:sa /tp:$SA_PASSWORD /sf:/tmp/db.bacpac && rm /tmp/db.bacpac && pkill sqlservr]: exit code: 1
My workflow action is:
name: Docker Image CI MSSQL
on:
  schedule:
    - cron: '0 6 * * *'
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Prepare Variables
        id: prepare
        run: |
          DOCKER_IMAGE=fallenreaper/eve-mssql
          VERSION=$(date -u +'%Y%m%d')
          if [ "${{ github.event_name }}" = "schedule" ]; then
            VERSION=nightly
          fi
          TAGS="${DOCKER_IMAGE}:${VERSION}"
          if [[ $VERSION =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            TAGS="$TAGS --tag ${DOCKER_IMAGE}:latest"
          fi
          echo ::set-output name=docker_image::${DOCKER_IMAGE}
          echo ::set-output name=version::${VERSION}
          echo ::set-output name=tags::${TAGS}
      - name: Login to DockerHub
        if: success() && github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Docker Build & Push
        if: success() && github.event_name != 'pull_request'
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ steps.prepare.outputs.tags }}
          context: mssql/.
So I was thinking this would work. The Dockerfile I am building uses mcr.microsoft.com/mssql/server:2017-latest, which I figured would be fine.
Dockerfile:
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password123!
ENV MSSQL_PID=Developer
ARG DBNAME=evesde
EXPOSE 1433
RUN apt-get update \
    && apt-get install unzip -y
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
    && unzip -qq sqlpackage.zip -d /opt/sqlpackage \
    && chmod +x /opt/sqlpackage/sqlpackage
RUN wget -o /tmp/db.bacpac https://www.fuzzwork.co.uk/dump/mssql-latest.bacpac
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
    && /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:${DBNAME} /tu:sa /tp:$SA_PASSWORD /sf:/tmp/db.bacpac \
    && rm /tmp/db.bacpac \
    && pkill sqlservr
EDIT: As I keep reading various documents, I am trying to understand and test various methods to see if I can get a build working. I was thinking that simulating a Mac might be useful, so I also attempted runs-on: macos-latest to see if that would solve it, but I haven't seen gains, as the docker/login-action@v1 step fails there.

Looking through each line item, I ended up with the following Dockerfile.
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password123!
ENV MSSQL_PID=Developer
ENV DBNAME=evesde
EXPOSE 1433
RUN apt-get update \
    && apt-get install unzip -y
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
    && unzip -qq sqlpackage.zip -d /opt/sqlpackage \
    && chmod +x /opt/sqlpackage/sqlpackage
RUN wget --progress=bar:force -q -O /mssql-latest.bacpac https://www.fuzzwork.co.uk/dump/mssql-latest.bacpac
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
    && /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:$DBNAME /tu:sa /tp:$SA_PASSWORD /sf:/mssql-latest.bacpac \
    && pkill sqlservr
# Cleanup of created bulk files no longer needed.
RUN rm mssql-latest.bacpac sqlpackage.zip
The main difference is where the bacpac file is stored; there seemed to be hiccups when creating that file. After adjusting the location and breaking apart the import chain, it worked.
Notes: When the file was created in /tmp, it seemed to be only partially created, so the build found an existing file that was corrupt. I was not sure whether there were size limits, but it was an observation. (In hindsight, the original download also used wget -o, which writes wget's log output to the named file, rather than -O, which writes the downloaded document itself; that alone would explain the corrupt bacpac.) Putting it in the / directory of the build gave me a complete, accessible file, so I needed to adjust the /sf reference.
Lastly, because there were leftover files that were no longer needed, I found it best to do a little cleanup by deleting both the sqlpackage zip and the bacpac file.
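A note on that cleanup step: each RUN creates a new image layer, so deleting files in a later RUN hides them from the final filesystem but does not actually shrink the image. To reclaim the space, the download, use, and removal have to happen within the same RUN. A sketch of that variant for the sqlpackage layer (the same commands as above, just chained; untested):
# remove the zip in the same layer that downloads it, so its bytes never persist in the image
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
    && unzip -qq sqlpackage.zip -d /opt/sqlpackage \
    && chmod +x /opt/sqlpackage/sqlpackage \
    && rm sqlpackage.zip
The same reasoning applies to the bacpac: its wget, the sqlpackage import, and the rm would all need to share one RUN to keep it out of the final image.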

Suggesting to investigate grep -q "Service Broker manager has started" &&
Maybe this fails because you start /opt/mssql/bin/sqlservr and then immediately check that it started; in most cases it takes a few seconds to start.
To test whether this thesis is correct, I suggest inserting a few sleep 10 or timeout commands in strategic places.
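For example, a minimal way to run that experiment is to replace the grep check with a fixed delay (the 30 seconds here is an arbitrary guess, not a tested value):
# give SQL Server time to finish starting before running the import
RUN ( /opt/mssql/bin/sqlservr & ) \
    && sleep 30 \
    && /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:$DBNAME /tu:sa /tp:$SA_PASSWORD /sf:/mssql-latest.bacpac \
    && pkill sqlservr
If the build succeeds with the delay in place, the timing thesis is confirmed and the delay can be tuned down or replaced with a proper readiness check.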

Related

Docker still deletes databases on rebuild

I have a project in Docker. When I recreate the Docker app, Docker keeps deleting the old databases on localhost. I have not found any solution on the internet. Does anyone know how to solve this problem?
Thanks for responding.
Here is my Dockerfile:
FROM php:7.2-apache
ENV DOCKER=1
ENV MASTER_URL_DOCKERFILE='http://website/'
RUN docker-php-ext-install mysqli pdo_mysql
RUN apt-get update -y && apt-get install -y \
libpng-dev \
libwebp-dev \
libjpeg62-turbo-dev \
libpng-dev libxpm-dev \
libfreetype6-dev
RUN docker-php-ext-configure gd \
--with-gd \
--with-webp-dir \
--with-jpeg-dir \
--with-png-dir \
--with-zlib-dir \
--with-xpm-dir \
--with-freetype-dir
RUN docker-php-ext-install gd
RUN docker-php-ext-install calendar && docker-php-ext-configure calendar
RUN a2enmod rewrite
RUN ln -s /etc/apache2/mods-available/expires.load /etc/apache2/mods-enabled/
COPY core /var/www/core/
COPY chainway/src /var/www/html/
COPY chainway/docker/app/ /usr/local/bin/
RUN service apache2 restart
And here is how I run the containers:
#!/bin/bash
DIR=$(dirname $0)
cd $DIR
wget -V
wget -O "$DIR/docker/db/dump.sql" "http://website/senddatabasetolocalhost.php?auth=authkey"
docker-compose stop
docker-compose build
docker-compose up -d
You will have to use volumes in docker-compose.yml, like this:
volumes:
  - $PWD/my_sql:/var/lib/mysql
You can persist your DB data using volumes. Add this to the mysql service in your docker-compose.yml file:
mysql:
  volumes:
    - db_data:/var/lib/mysql
And add this to the end of the file:
volumes:
  db_data:
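Putting the two fragments together, a minimal docker-compose.yml could look like this (the image tag and password are placeholders, not taken from the question):
version: '3'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      # named volume: the data survives docker-compose build / up cycles
      - db_data:/var/lib/mysql
volumes:
  db_data:
Because db_data is a named volume managed by Docker, rebuilding the app image no longer touches the database files.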

How to make site URL an external parameter for React application?

I have a React app with 3 versions: for development, testing, and production.
They differ only in the URL that is used for the login (a different WordPress site).
How do I make the React app agnostic/configurable at runtime
and avoid the need to generate 3 versions?
Just use
window.location.host // you still need to prepend http/s
to get the URL.
Many other parameters can be found using URLSearchParams.
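For example, the login URL could be derived at runtime instead of being baked into the build (the /wp-login.php path is only an illustration, not taken from the question):
// build the login URL from wherever the app is currently served
const loginUrl = `${window.location.protocol}//${window.location.host}/wp-login.php`;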
For those that use a Docker container, it can be done with environment variables.
My situation:
I made my react app in Visual Studio with template 'ASP.NET Core with React.js and Redux'. It is placed in a docker container which is deployed in Kubernetes.
It took me almost half a day but I managed to do it :)
First I found this post; the comment from Patrick Lee Scott is especially interesting:
https://levelup.gitconnected.com/handling-multiple-environments-in-react-with-docker-543762989783
Comment from Patrick Lee Scott:
https://patrickleet.medium.com/another-option-build-with-dummy-values-like-replace-api-url-and-then-use-an-entrypoint-sh-db053a799167
The comment is a good start but doesn't show the complete solution.
First I tested the script (and tried to figure out what it was doing).
During the testing I found out that cat /proc/self/environ was not working, so I replaced it with xargs -0 -L1 -a /proc/self/environ.
Second, I had trouble getting the script to run via ENTRYPOINT; I figured out that the script needed to begin with #!/bin/bash
Third, I added the original ENTRYPOINT at the bottom of the script.
Here is the modified script of Patrick Lee Scott:
appEntryPoint.sh:
#!/bin/bash
echo "Inserting env variables"
for file in ./ClientApp/build/static/js/*.js
do
  echo "env sub for $file"
  list="$(xargs -0 -L1 -a /proc/self/environ | awk -F= '{print $1}')"
  echo "$list" | while read -r line; do
    export REPLACE="REPLACE_$line"
    export VALUE=$(eval "echo \"\$$line\"")
    #echo "replacing ${REPLACE} with ${VALUE} in $file"
    sed -i "s~${REPLACE}~${VALUE}~g" $file
    unset REPLACE
    unset VALUE
  done
done
dotnet My.DotNet.ReactApp.dll
To make the answer complete, I will also list my Dockerfile here:
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app/ClientApp
EXPOSE 80
EXPOSE 443
RUN echo "Acquire::Check-Valid-Until \"false\";\nAcquire::Check-Date \"false\";" | cat > /etc/apt/apt.conf.d/10no--check-valid-until && apt-get update -yq \
&& apt-get install -y curl \
&& apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx \
&& curl -sL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -y nodejs
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN echo "Acquire::Check-Valid-Until \"false\";\nAcquire::Check-Date \"false\";" | cat > /etc/apt/apt.conf.d/10no--check-valid-until && apt-get update -yq \
&& apt-get install -y curl \
&& apt-get install -y libpng-dev libjpeg-dev curl libxi6 build-essential libgl1-mesa-glx \
&& curl -sL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -y nodejs
WORKDIR /app/ClientApp
COPY /My.DotNet.ReactApp/ClientApp/package*.json ./
RUN npm install --silent
COPY /My.DotNet.ReactApp/ClientApp ./
RUN npm run build
WORKDIR /app/publish/ClientApp
RUN cp -r /app/ClientApp/build .
WORKDIR /app
COPY /My.DotNet.ReactApp ./
RUN dotnet restore "My.DotNet.ReactApp.csproj"
RUN dotnet build "My.DotNet.ReactApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "My.DotNet.ReactApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
COPY ./appEntryPoint.sh ./
RUN chmod +x appEntryPoint.sh
ENTRYPOINT ["/app/appEntryPoint.sh"]
What you now have to do is put placeholders in your .env file:
.env.production
REACT_APP_API_ENDPOINT=REPLACE_REACT_APP_API_ENDPOINT
REACT_APP_API_SOME_OTHER_URL=REPLACE_REACT_APP_API_SOME_OTHER_URL
Now you can set the real values for the React variables as environment variables on the container. The script reads the environment variables from the container and replaces all values that begin with "REPLACE_".
So in this case we need to set these environment variables on the container used for production:
REACT_APP_API_ENDPOINT=https://prod.endpoint.com
REACT_APP_API_SOME_OTHER_URL=https://prod.url.com
And for the test environment:
REACT_APP_API_ENDPOINT=https://test.endpoint.com
REACT_APP_API_SOME_OTHER_URL=https://test.url.com
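For example, with plain Docker the test values could be injected at run time like this (the image name is a placeholder):
docker run -d \
  -e REACT_APP_API_ENDPOINT=https://test.endpoint.com \
  -e REACT_APP_API_SOME_OTHER_URL=https://test.url.com \
  my-react-app-image
In Kubernetes, the same values would go under env: in the container spec of the deployment.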
Use a .env file. Check out this link for installation. At the end you will have this kind of structure in your app folder.

How to deploy react app in ubuntu server with bitbucket pipeline

I want to build and deploy my React app from my master branch. I have managed to automate the build, but I am unable to transfer it to my server. Find my pipeline code below; the error I receive follows it.
pipelines:
  default:
    - step:
        name: Build Title
        script:
          - npm install
          - npm run build
          - mkdir packaged
          - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
        artifacts:
          - packaged/**
    - step:
        name: Deploy to Web
        image: alpine
        trigger: manual
        deployment: production
        script:
          - mkdir upload
          - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
          - apk update && apk add openssh rsync
          - rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "rm -r html/www"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "mv 'html/temp/react-${BITBUCKET_BUILD_NUMBER}' 'var/www/html/deploy'"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w html/www"
Error Log
+ rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
load pubkey "/opt/atlassian/pipelines/agent/ssh/id_rsa": invalid format
rsync: mkdir "/$USERNAME/html/temp/react-15" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(675) [Receiver=3.1.2]
I noticed this happens only on Alpine-based images; Debian images, for example, work fine. It also happens on Buddy, not just on Bitbucket. I expect this is an upstream Alpine bug/issue.
I was using that same script as well. Below is what ended up working for me after a lot of banging my head against the screen; updating the image and adding upload artifacts seemed to be the kicker.
default:
  - step:
      name: Build React Project
      script:
        - npm install
        - npm run-script build
        - mkdir packaged
        - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
      artifacts:
        - packaged/**
  - step:
      name: Deploy to Web
      image: atlassian/default-image:latest
      trigger: manual
      deployment: production
      script:
        - mkdir upload
        - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
        - rsync -a --delete upload/ $USERNAME@$SERVER:/home/temp/react-${BITBUCKET_BUILD_NUMBER}
        - ssh $USERNAME@$SERVER "rm -r /home/www"
        - ssh $USERNAME@$SERVER "mv '/home/temp/react-${BITBUCKET_BUILD_NUMBER}' '/home/www'"
        - ssh $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w /home/www"
      artifacts:
        - upload/**

How to deploy a spring boot project to Google App Engine using Google Cloud Build?

I am building my Spring Boot application using Maven and Google Cloud Build, but somehow I get different deployment results depending on whether I run locally using mvn appengine:run or deploy using Cloud Build.
If I run locally using mvn appengine:run, I can access my controller as expected. Using Cloud Build, I get a 404 error.
My cloudbuild.yaml is the following:
steps:
- name: 'gcr.io/cloud-builders/mvn'
  args: ['package']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', 'target/myapp/WEB-INF/appengine-web.xml']
How would you recommend configuring a cloud build in order to build and deploy a spring boot application on google app engine?
After additional digging, the issue seems to be related to some kind of error being returned:
javax.servlet.ServletContext log: 2 Spring WebApplicationInitializers detected on classpath
I do not get this message in the stack trace when deploying from my local machine using mvn appengine:deploy.
My question still remains: how do I go about creating a cloudbuild.yaml that can invoke mvn appengine:deploy?
In order to build a Spring Boot project and deploy it to Google App Engine using Google Cloud Build, I ended up having to first build a "builder" image using the Dockerfile below and reference that image when performing my actual application builds.
Dockerfile
FROM debian:stretch
#
# Google Cloud SDK installation
# https://cloud.google.com/sdk/docs/quickstart-debian-ubuntu
RUN apt-get update -y && \
apt-get install \
apt-utils \
dialog \
gnupg \
lsb-release \
curl -y && \
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && \
apt-get install google-cloud-sdk -y
# Install all available components
RUN apt-get install google-cloud-sdk \
google-cloud-sdk \
google-cloud-sdk-app-engine-go \
google-cloud-sdk-app-engine-java \
google-cloud-sdk-app-engine-python \
google-cloud-sdk-app-engine-python-extras \
google-cloud-sdk-bigtable-emulator \
google-cloud-sdk-cbt \
google-cloud-sdk-datastore-emulator \
google-cloud-sdk-cloud-build-local \
google-cloud-sdk-datalab \
kubectl \
google-cloud-sdk-pubsub-emulator -y
#
# OpenJDK installation
# https://linuxhint.com/install-openjdk-8-on-debian-9-stretch/
RUN apt-get install openjdk-8-jdk -y
#
# MAVEN installation
# https://github.com/carlossg/docker-maven/blob/f581ea002e5d067deb6213c00a4d217297cad469/jdk-8/Dockerfile
ARG MAVEN_VERSION=3.5.4
ARG USER_HOME_DIR="/root"
ARG SHA=ce50b1c91364cb77efe3776f756a6d92b76d9038b0a0782f7d53acf1e997a14d
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
&& echo "${SHA} /tmp/apache-maven.tar.gz" | sha256sum -c - \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
WORKDIR /workspace
cloudbuild.yaml
# In this directory, run the following command to build this builder.
# $ gcloud builds submit . --config=cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/$PROJECT_ID/gcloud-maven', '.']
# Simple sanity check: invoke java to confirm that it was installed correctly.
- name: 'gcr.io/$PROJECT_ID/gcloud-maven'
  args: ['java', '-version']
# Simple sanity check: invoke gcloud to confirm that it was installed correctly.
- name: 'gcr.io/$PROJECT_ID/gcloud-maven'
  args: ['gcloud', 'projects', 'list']
# Simple sanity check: invoke maven to confirm that it was installed correctly.
- name: 'gcr.io/$PROJECT_ID/gcloud-maven'
  args: ['mvn', '--version']
images: ['gcr.io/$PROJECT_ID/gcloud-maven']
timeout: 1200s
My spring boot project's cloudbuild.yaml now references this image:
steps:
- name: 'gcr.io/$PROJECT_ID/gcloud-maven'
  args: ['mvn', 'appengine:deploy']
I will try to put this Docker image on Docker Hub and GitHub for others to find. I would also appreciate help from people more familiar with Docker and Linux in improving this image to reduce its size (for example, using Alpine instead of Debian or Debian Stretch Slim). In the meantime, I hope this helps others like me.

Starting and populating a Postgres container in Docker

I have a Docker container that contains my Postgres database. It's using the official Postgres image which has a CMD entry that starts the server on the main thread.
I want to populate the database by running RUN psql -U postgres postgres < /dump/dump.sql before it starts listening to queries.
I don't understand how this is possible with Docker. If I place the RUN command after CMD, it will of course never be reached because Docker has finished reading the Dockerfile. But if I place it before the CMD, it will run before psql even exists as a process.
How can I prepopulate a Postgres database in Docker?
After a lot of fighting, I have found a solution ;-)
A comment posted here by "justfalter" was very useful to me: https://registry.hub.docker.com/_/postgres/
Anyway, I did it this way:
# Dockerfile
FROM postgres:9.4
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
db/structure.sql is a sql dump, useful to initialize the first tablespace.
Then, the init_docker_postgres.sh
#!/bin/bash
# this script is run when the docker container is built
# it imports the base database structure and create the database for the tests
DATABASE_NAME="db_name"
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
# create default database
gosu postgres postgres --single <<EOSQL
CREATE DATABASE "$DATABASE_NAME";
GRANT ALL PRIVILEGES ON DATABASE "$DATABASE_NAME" TO postgres;
EOSQL
# clean sql_dump - because I want to have a one-line command
# remove indentation
sed "s/^[ \t]*//" -i "$DB_DUMP_LOCATION"
# remove comments
sed '/^--/ d' -i "$DB_DUMP_LOCATION"
# remove new lines
sed ':a;N;$!ba;s/\n/ /g' -i "$DB_DUMP_LOCATION"
# remove other spaces
sed 's/ */ /g' -i "$DB_DUMP_LOCATION"
# remove leading spaces on the first line
sed 's/^ *//' -i "$DB_DUMP_LOCATION"
# append a new line at the end (suggested by @Nicola Ferraro)
sed -e '$a\' -i "$DB_DUMP_LOCATION"
# import sql_dump
gosu postgres postgres --single "$DATABASE_NAME" < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
So finally:
# no postgres is running
[myserver]# psql -h 127.0.0.1 -U postgres
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
[myserver]# docker build -t custom_psql .
[myserver]# docker run -d --name custom_psql_running -p 5432:5432 custom_psql
[myserver]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4212697372 custom_psql:latest "/docker-entrypoint. 9 minutes ago Up 9 minutes 0.0.0.0:5432->5432/tcp custom_psql_running
[myserver]# psql -h 127.0.0.1 -U postgres
psql (9.2.10, server 9.4.1)
WARNING: psql version 9.2, server version 9.4.
Some psql features might not work.
Type "help" for help.
postgres=#
# postgres is now initialized with the dump
Hope it helps!
For those who want to initialize a PostgreSQL DB with millions of records during the first run:
Import using a *.sql dump
You can do a simple SQL dump and copy the dump.sql file into /docker-entrypoint-initdb.d/. The problem is speed. My dump.sql script is about 17MB (a small DB - 10 tables, with 100k rows in only one of them) and the initialization takes over a minute (!). That is unacceptable for local development, unit tests, etc.
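For reference, that simple variant needs nothing more than a short Dockerfile (the dump file name is assumed):
FROM postgres:11
# any *.sql file in this directory is executed automatically on first startup
COPY dump.sql /docker-entrypoint-initdb.d/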
Import using binary dump
The solution is to make a binary PostgreSQL dump and use the shell-script initialization support.
Then the same DB is initialized in about 500ms instead of 1 minute.
1. Create the dump.pgdata binary dump of a DB named "my-db"
directly from within a container or your local DB
pg_dump -U postgres --format custom my-db > "dump.pgdata"
Or from the host, against a running container (postgres-container):
docker exec postgres-container pg_dump -U postgres --format custom my-db > "dump.pgdata"
2. Create a Docker image with a given dump and initialization script
$ tree
.
├── Dockerfile
└── docker-entrypoint-initdb.d
    ├── 01-restore.sh
    ├── 02-small-updates.sql
    └── dump.pgdata
$ cat Dockerfile
FROM postgres:11
COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
$ cat docker-entrypoint-initdb.d/01-restore.sh
#!/bin/bash
file="/docker-entrypoint-initdb.d/dump.pgdata"
dbname=my-db
echo "Restoring DB using $file"
pg_restore -U postgres --dbname=$dbname --verbose --single-transaction < "$file" || exit 1
$ cat docker-entrypoint-initdb.d/02-small-updates.sql
-- some updates on your DB, for example for next application version
-- this file will be executed on DB during next release
UPDATE ... ;
3. Build an image and run it
$ docker build -t db-test-img .
$ docker run -it --rm --name db-test db-test-img
Alternatively, you can just mount a volume to /docker-entrypoint-initdb.d/ that contains all your DDL scripts. You can put in *.sh, *.sql, or *.sql.gz files and it will take care of executing those on start-up.
e.g. (assuming you have your scripts in /tmp/my_scripts)
docker run -v /tmp/my_scripts:/docker-entrypoint-initdb.d postgres
There is yet another option available that utilises Flocker:
Flocker is a container data volume manager that is designed to allow databases like PostgreSQL to easily run in containers in production. When running a database in production, you have to think about things like recovering from host failure. Flocker provides tools for managing data volumes across a cluster of machines like you have in a production environment. For example, as a Postgres container is scheduled between hosts in response to server failure, Flocker can automatically move its associated data volume between hosts at the same time. This means that when your Postgres container starts up on a new host, it has its data. This operation can be accomplished manually using the Flocker API or CLI, or automatically by a container orchestration tool that Flocker integrates with, for example Docker Swarm, Kubernetes or Mesos.
I followed the same solution as @damoiser. The only difference was that I wanted to import all of the dump data.
Please follow the solution below (I have not done any kind of checks):
Dockerfile
FROM postgres:9.5
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
then the init_docker_postgres.sh script
#!/bin/bash
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
psql -U postgres < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
and then you can build your image as
docker build -t abhije***/postgres-data .
docker run -d abhije***/postgres-data
My solution is inspired by Alex Dguez's answer, which unfortunately doesn't work for me because:
I used a pg-9.6 base image, and RUN /docker-entrypoint.sh --help never ran through for me; it always complained with The command '/bin/sh -c /docker-entrypoint.sh -' returned a non-zero code: 1
I don't want to pollute the /docker-entrypoint-initdb.d dir
The following answer is originally from my reply in another post: https://stackoverflow.com/a/59303962/4440427. It should be noted that the solution restores from a binary dump rather than from the plain SQL dump the OP asked about, but it can be modified slightly to adapt to the plain SQL case.
Dockerfile:
FROM postgres:9.6.16-alpine
LABEL maintainer="lu@cobrainer.com"
LABEL org="Cobrainer GmbH"
ARG PG_POSTGRES_PWD=postgres
ARG DBUSER=someuser
ARG DBUSER_PWD=P@ssw0rd
ARG DBNAME=sampledb
ARG DB_DUMP_FILE=example.pg
ENV POSTGRES_DB launchpad
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD ${PG_POSTGRES_PWD}
ENV PGDATA /pgdata
COPY wait-for-pg-isready.sh /tmp/wait-for-pg-isready.sh
COPY ${DB_DUMP_FILE} /tmp/pgdump.pg
RUN set -e && \
nohup bash -c "docker-entrypoint.sh postgres &" && \
/tmp/wait-for-pg-isready.sh && \
psql -U postgres -c "CREATE USER ${DBUSER} WITH SUPERUSER CREATEDB CREATEROLE ENCRYPTED PASSWORD '${DBUSER_PWD}';" && \
psql -U ${DBUSER} -d ${POSTGRES_DB} -c "CREATE DATABASE ${DBNAME} TEMPLATE template0;" && \
pg_restore -v --no-owner --role=${DBUSER} --exit-on-error -U ${DBUSER} -d ${DBNAME} /tmp/pgdump.pg && \
psql -U postgres -c "ALTER USER ${DBUSER} WITH NOSUPERUSER;" && \
rm -rf /tmp/pgdump.pg
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD pg_isready -U postgres -d launchpad
where the wait-for-pg-isready.sh is:
#!/bin/bash
set -e
get_non_lo_ip() {
  local _ip _non_lo_ip _line _nl=$'\n'
  while IFS=$': \t' read -a _line ;do
    [ -z "${_line%inet}" ] &&
      _ip=${_line[${#_line[1]}>4?1:2]} &&
      [ "${_ip#127.0.0.1}" ] && _non_lo_ip=$_ip
  done< <(LANG=C /sbin/ifconfig)
  printf ${1+-v} $1 "%s${_nl:0:$[${#1}>0?0:1]}" $_non_lo_ip
}
get_non_lo_ip NON_LO_IP
until pg_isready -h $NON_LO_IP -U "postgres" -d "launchpad"; do
  >&2 echo "Postgres is not ready - sleeping..."
  sleep 4
done
>&2 echo "Postgres is up - you can execute commands now"
The above scripts together with a more detailed README are available at https://github.com/cobrainer/pg-docker-with-restored-db
I was able to load the data in by prefixing the RUN command in the Dockerfile with /etc/init.d/postgresql start. My Dockerfile has the following line, which is working for me:
RUN /etc/init.d/postgresql start && /usr/bin/psql -a < /tmp/dump.sql
For E2E tests, in which we need a database with structure and data already saved in the Docker image, we have done the following:
Dockerfile:
FROM postgres:9.4.24-alpine
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
ENV PGDATA /pgdata
COPY database.backup /tmp/
COPY database_restore.sh /docker-entrypoint-initdb.d/
RUN /docker-entrypoint.sh --help
RUN rm -rf /docker-entrypoint-initdb.d/database_restore.sh
RUN rm -rf /tmp/database.backup
database_restore.sh:
#!/bin/sh
set -e
pg_restore -C -d postgres /tmp/database.backup
To create the image:
docker build .
To start the container:
docker run --name docker-postgres -d -p 5432:5432 <Id-docker-image>
This does not restore the database every time the container is booted; the structure and data of the database are already contained in the created Docker image.
We based this on the following article, but eliminated the multistage build:
Creating Fast, Lightweight Testing Databases in Docker
Edit: Version 9.4-alpine does not work now because it does not
run the database_restore.sh script. Use version 9.4.24-alpine instead.
My goal was to have an image that contains the database - i.e. saving the time to rebuild it every time I do docker run or docker-compose up.
We would just have to manage to get the line exec "$@" out of docker-entrypoint.sh. So I added this to my Dockerfile:
# Copy my sql scripts into the image to /docker-entrypoint-initdb.d:
COPY ./init_db /docker-entrypoint-initdb.d
# init db
RUN grep -v 'exec "$@"' /usr/local/bin/docker-entrypoint.sh > /tmp/docker-entrypoint-without-serverstart.sh && \
    chmod a+x /tmp/docker-entrypoint-without-serverstart.sh && \
    /tmp/docker-entrypoint-without-serverstart.sh postgres && \
    rm -rf /docker-entrypoint-initdb.d/* /tmp/docker-entrypoint-without-serverstart.sh
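The image is then built and run the usual way (the tag name here is a placeholder):
docker build -t my-prefilled-postgres .
docker run -d -p 5432:5432 my-prefilled-postgres
One caveat worth noting: the official postgres image declares its default data directory as a VOLUME, and build-time changes under a declared volume path are discarded, so for the data to really end up in the image, PGDATA has to point at a non-volume path, as the earlier answers do with ENV PGDATA /pgdata.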
