How can I restore a database to an InfluxDB container?

Sorry for my ignorance. I have InfluxDB running in Docker with docker-compose, as below.
influxdb:
  image: influxdb:alpine
  ports:
    - 8086:8086
  volumes:
    - ./influxdb/config/influxdb.conf:/etc/influxdb/influxdb.conf:ro
    - ./influxdb/data:/var/lib/influxdb
I need to restore a backup of a database from a remote server to this InfluxDB container. I took the backup on the remote server as below.
influxd backup -database tech_db /tmp/tech_db
I read the documentation and couldn't find a way to restore the DB to a docker container. Can anyone give me a pointer on how to do this?

I have also had the same issue. It looks impossible at first, because you cannot kill the influxd process inside a container without killing the container itself. The workaround is to run the restore from an ephemeral container:
# Restoring a backup requires that influxd is stopped (note that stopping the process kills the container).
docker stop "$CONTAINER_ID"
# Run the restore command in an ephemeral container.
# This affects the previously mounted volume mapped to /var/lib/influxdb.
docker run --rm \
  --entrypoint /bin/bash \
  -v "$INFLUXDIR":/var/lib/influxdb \
  -v "$BACKUPDIR":/backups \
  influxdb:1.3 \
  -c "influxd restore -metadir /var/lib/influxdb/meta -datadir /var/lib/influxdb/data -database foo /backups/foo.backup"
# Start the container just like before, and get the new container ID.
CONTAINER_ID=$(docker run --rm \
  --detach \
  -v "$INFLUXDIR":/var/lib/influxdb \
  -v "$BACKUPDIR":/backups \
  -p 8086 \
  influxdb:1.3
)
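To confirm the restore worked, a quick check (a sketch, assuming the restored database name from the question) is to query the databases through the influx CLI inside the new container:
docker exec -it "$CONTAINER_ID" influx -execute 'SHOW DATABASES'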

Related

Data loss on Azure Container Instance with mounted volume

I just created a container instance on Azure with a SQL Server docker image and a mounted file share storage as a volume. The container got stuck, so I restarted it.
After the restart, all data was gone. When I restart a docker container locally, the data persists because of volumes, so I cannot understand this behaviour on Azure.
Any clue about this?
Here is the CLI command I ran to create the container:
az container create --resource-group myresource-rg \
  --name project-test-db \
  --image mcr.microsoft.com/mssql/server:2019-latest \
  --location westus2 \
  --ports 1433 \
  --memory 5 \
  --environment-variables SA_PASSWORD=Password ACCEPT_EULA=Y \
  --ip-address public \
  --azure-file-volume-account-name projectteststorageacc \
  --azure-file-volume-account-key \MyKey \
  --azure-file-volume-share-name project-test-file-share \
  --azure-file-volume-mount-path /databases
Try editing your command as below.
Use quotes (" ") around each key-value pair, and replace SA_PASSWORD with MSSQL_SA_PASSWORD:
--environment-variables "MSSQL_SA_PASSWORD=Password" "ACCEPT_EULA=Y" \
SA_PASSWORD is a required setting for the SQL Server image: a strong password that is at least 8 characters long and meets the SQL Server password requirements. Given that you have already set an appropriately strong password and storage key, the commands below work just fine for me. If the password doesn't meet SQL Server's requirements, the container fails (restart loop).
PS /home/karthik> $Password = "MyStrongPassword"
PS /home/karthik> $key = "FO/R6WkZELhMzX02wi9KahtLtKppoSIJg/EcJLEnZajRm2uxXs0sb/APaCk1eRsNW31yijSjS1hFm5Rd4rdTew=="
az container create --resource-group Myrg \
  --name project-test-db \
  --image mcr.microsoft.com/mssql/server:2019-latest \
  --location westus2 \
  --ports 1433 \
  --memory 5 \
  --environment-variables "SA_PASSWORD=$Password" "ACCEPT_EULA=Y" \
  --ip-address public \
  --azure-file-volume-account-name kteststoragee \
  --azure-file-volume-account-key $key \
  --azure-file-volume-share-name ktestfs2 \
  --azure-file-volume-mount-path /databases
When you have a misbehaving container in Azure Container Instances, start by viewing its logs with az container logs, and stream its standard out and standard error with az container attach.
The az container attach command provides diagnostic information during container startup. Once the container has started, it streams STDOUT and STDERR to your local console.
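For example, using the resource group and container names from the question:
az container logs --resource-group myresource-rg --name project-test-db
az container attach --resource-group myresource-rg --name project-test-db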
Refer: Quickstart: Run SQL Server container images with Docker and Docker run command fails with Accept-Eula Agreement error #199

SQLServer Docker: How do I backup & restore the data *volume*? [duplicate]

This question already has answers here:
How should I backup & restore docker named volumes
I have a MS SQL Server 2017 Linux Docker container running with docker-compose (working on a Windows host).
The server is running, I added data, and this data is persistent across multiple docker-compose up / down cycles, since the server uses a docker volume. The data disappears when I use docker-compose down -v. So this works as intended:
services:
  sql:
    image: mcr.microsoft.com/mssql/server:2017-GA-ubuntu
    volumes:
      - sqldata:/var/opt/mssql
    ...
volumes:
  sqldata:
    driver: local
    name: sqldata
Now I am trying to backup & restore the database. I know the "normal" way, using SQL Server directly. This works:
# Restore a backup inside the container volume
docker exec -it sql mkdir /var/opt/mssql/backup
docker cp .\Test.bak sql:/var/opt/mssql/backup
sqlcmd -S 127.0.0.1,1433 -U sa -P Secr3tSA_Passw0rd -H 127.0.0.1,1433 -Q "RESTORE DATABASE [Test] FROM DISK='/var/opt/mssql/backup/Test.bak' WITH REPLACE"
# Backup a database inside the container volume, then copy to local file
docker exec sql rm -rf /var/opt/mssql/backup/Test.bak
sqlcmd -S 127.0.0.1,1433 -U sa -P Secr3tSA_Passw0rd -H 127.0.0.1,1433 -Q "BACKUP DATABASE [Test] TO DISK='/var/opt/mssql/backup/Test.bak'"
docker cp sql:/var/opt/mssql/backup/Test.bak .\Test.bak
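As a side note, the same commands can be run with the copy of sqlcmd that ships inside the container, so nothing needs to be installed on the host (a sketch, using the container name and password from above):
docker exec -it sql /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Secr3tSA_Passw0rd -Q "RESTORE DATABASE [Test] FROM DISK='/var/opt/mssql/backup/Test.bak' WITH REPLACE"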
Now I was thinking: maybe there is a better way than to put the SA password into a BAT file and hand that out to my customers and service technicians. Simply grabbing a copy of the volume should do the trick!
I found this:
# Make sure the SQLServer is not writing/blocking any files.
docker-compose stop sql
# Backup & Restore the sqldata volume.
docker run --rm -v sqldata -v $pwd\backup:/backup ubuntu bash -c "cd /whsqldata && tar xvf /backup/backup.tar --strip 1"
docker run --rm -v sqldata -v $pwd\backup:/backup ubuntu bash -c "cd /whsqldata && tar cvf /backup/backup.tar ."
# Restart the SQLServer.
docker-compose start sql
This creates the expected backup.tar in my user directory... But it is suspiciously small! And after the restore, the SQL Server cannot connect to the database. It looks like the backup.tar has no content. But on closer inspection, so does my sqldata volume! It is empty!? When I start a bash that mounts that same volume, I can see the directory but there is nothing in it:
docker run --rm -v sqldata -it ubuntu
/ # ls sqldata/ -a
. ..
/ #
The SQLServer's data persists. So it's got to be saved somewhere, right? What am I missing?!
OK, after reading the answers to How should I backup & restore docker named volumes, I found out that my mistake was in how I mounted the volume. Instead of -v sqldata I have to write -v sqldata:/sqldata. I also changed some paths in my commands.
The completed commands are:
# Backup the data volume
docker run --rm \
  -v sqldata:/sqldata \
  -v $pwd\:/backup \
  ubuntu tar cvf /backup/backup.tar /sqldata
# Remove the existing data volume (clears out old data, if any exists)
docker volume rm sqldata
# Restore the data volume
docker run --rm \
  -v sqldata:/sqldata \
  -v $pwd\:/backup \
  ubuntu tar xvf /backup/backup.tar -C sqldata --strip 1
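To sanity-check the restored volume before starting SQL Server again, you can list its contents from a throwaway container (a quick sketch):
docker run --rm -v sqldata:/sqldata ubuntu ls -la /sqldata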

Docker and SQL Server Linux - Error 9002. The transaction log for database master is full due to NOTHING

I use Docker without Hyper-V, with VirtualBox and a Docker VM, on Windows 10 Home edition.
I have the following Docker build file:
FROM repositoryname/mssql-server-linux:test-db
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
# start sql, setup db
RUN /opt/mssql/bin/sqlservr & sleep 15s && \
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P pass -d master -i /usr/src/app/setup_db_1.sql && \
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P pass -d master -i /usr/src/app/setup__db_2.sql
Right now MS SQL Server fails during startup with the following error:
Error 9002. The transaction log for database master is full due to NOTHING
Is there anything I can do (for example add some instructions to my Docker build file) in order to prevent this error?
Also, I found a similar topic here: https://social.msdn.microsoft.com/Forums/en-US/ca65a3e2-2f30-4641-a7ea-d3998c8dd8a7/the-transaction-log-for-database-master-is-full-due-to-nothing-during-updade?forum=sqlsetupandupgrade but unfortunately without a proper answer right now.

Starting and populating a Postgres container in Docker

I have a Docker container that contains my Postgres database. It's using the official Postgres image which has a CMD entry that starts the server on the main thread.
I want to populate the database by running RUN psql -U postgres postgres < /dump/dump.sql before it starts listening to queries.
I don't understand how this is possible with Docker. If I place the RUN command after CMD, it will of course never be reached because Docker has finished reading the Dockerfile. But if I place it before the CMD, it will run before psql even exists as a process.
How can I prepopulate a Postgres database in Docker?
After a lot of fighting, I have found a solution ;-)
A comment posted at https://registry.hub.docker.com/_/postgres/ by "justfalter" was very useful to me.
Anyway, I have done it this way:
# Dockerfile
FROM postgres:9.4
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
db/structure.sql is an SQL dump, useful to initialize the first tablespace. Then, the init_docker_postgres.sh:
#!/bin/bash
# this script runs when the docker container is first started
# it imports the base database structure and creates the database for the tests
DATABASE_NAME="db_name"
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
# create default database
gosu postgres postgres --single <<EOSQL
CREATE DATABASE "$DATABASE_NAME";
GRANT ALL PRIVILEGES ON DATABASE "$DATABASE_NAME" TO postgres;
EOSQL
# clean sql_dump - because I want to have a one-line command
# remove indentation
sed "s/^[ \t]*//" -i "$DB_DUMP_LOCATION"
# remove comments
sed '/^--/ d' -i "$DB_DUMP_LOCATION"
# remove new lines
sed ':a;N;$!ba;s/\n/ /g' -i "$DB_DUMP_LOCATION"
# remove other spaces
sed 's/ */ /g' -i "$DB_DUMP_LOCATION"
# remove firsts line spaces
sed 's/^ *//' -i "$DB_DUMP_LOCATION"
# append new line at the end (suggested by @Nicola Ferraro)
sed -e '$a\' -i "$DB_DUMP_LOCATION"
# import sql_dump
gosu postgres postgres --single "$DATABASE_NAME" < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
So finally:
# no postgres is running
[myserver]# psql -h 127.0.0.1 -U postgres
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
[myserver]# docker build -t custom_psql .
[myserver]# docker run -d --name custom_psql_running -p 5432:5432 custom_psql
[myserver]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4212697372 custom_psql:latest "/docker-entrypoint. 9 minutes ago Up 9 minutes 0.0.0.0:5432->5432/tcp custom_psql_running
[myserver]# psql -h 127.0.0.1 -U postgres
psql (9.2.10, server 9.4.1)
WARNING: psql version 9.2, server version 9.4.
Some psql features might not work.
Type "help" for help.
postgres=#
# postgres is now initialized with the dump
Hope it helps!
For those who want to initialize a PostgreSQL DB with millions of records during the first run.
Import using *.sql dump
You can do a simple SQL dump and copy the dump.sql file into /docker-entrypoint-initdb.d/. The problem is speed. My dump.sql script is about 17MB (small DB - 10 tables with 100k rows in only one of them) and the initialization takes over a minute (!). That is unacceptable for local development / unit tests, etc.
Import using binary dump
The solution is to make a binary PostgreSQL dump and use shell scripts initialization support.
Then the same DB is initialized in about 500ms instead of 1 minute.
1. Create the dump.pgdata binary dump of a DB named "my-db"
Directly from within a container or from your local DB:
pg_dump -U postgres --format custom my-db > "dump.pgdata"
Or from the host, against a running container (postgres-container):
docker exec postgres-container pg_dump -U postgres --format custom my-db > "dump.pgdata"
2. Create a Docker image with a given dump and initialization script
$ tree
.
├── Dockerfile
└── docker-entrypoint-initdb.d
├── 01-restore.sh
├── 02-small-updates.sql
└── dump.pgdata
$ cat Dockerfile
FROM postgres:11
COPY ./docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
$ cat docker-entrypoint-initdb.d/01-restore.sh
#!/bin/bash
file="/docker-entrypoint-initdb.d/dump.pgdata"
dbname=my-db
echo "Restoring DB using $file"
pg_restore -U postgres --dbname=$dbname --verbose --single-transaction < "$file" || exit 1
$ cat docker-entrypoint-initdb.d/02-small-updates.sql
-- some updates on your DB, for example for next application version
-- this file will be executed on DB during next release
UPDATE ... ;
3. Build an image and run it
$ docker build -t db-test-img .
$ docker run -it --rm --name db-test db-test-img
Alternatively, you can just mount a volume to /docker-entrypoint-initdb.d/ that contains all your DDL scripts. You can put in *.sh, *.sql, or *.sql.gz files and it will take care of executing those on start-up.
e.g. (assuming you have your scripts in /tmp/my_scripts)
docker run -v /tmp/my_scripts:/docker-entrypoint-initdb.d postgres
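A docker-compose equivalent of that bind mount might look like this (a sketch; the ./my_scripts directory is illustrative):
services:
  postgres:
    image: postgres
    volumes:
      - ./my_scripts:/docker-entrypoint-initdb.d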
There is yet another option available that utilises Flocker:
Flocker is a container data volume manager that is designed to allow databases like PostgreSQL to easily run in containers in production. When running a database in production, you have to think about things like recovering from host failure. Flocker provides tools for managing data volumes across a cluster of machines like you have in a production environment. For example, as a Postgres container is scheduled between hosts in response to server failure, Flocker can automatically move its associated data volume between hosts at the same time. This means that when your Postgres container starts up on a new host, it has its data. This operation can be accomplished manually using the Flocker API or CLI, or automatically by a container orchestration tool that Flocker integrates with, for example Docker Swarm, Kubernetes or Mesos.
I followed the same solution as @damoiser. The only difference was that I wanted to import all the dump data.
Please follow the solution below (I have not done any kind of checks):
Dockerfile
FROM postgres:9.5
RUN mkdir -p /tmp/psql_data/
COPY db/structure.sql /tmp/psql_data/
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
then the init_docker_postgres.sh script
#!/bin/bash
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
echo "*** CREATING DATABASE ***"
psql -U postgres < "$DB_DUMP_LOCATION";
echo "*** DATABASE CREATED! ***"
and then you can build your image as
docker build -t abhije***/postgres-data .
docker run -d abhije***/postgres-data
My solution is inspired by Alex Dguez's answer, which unfortunately doesn't work for me because:
I used a pg-9.6 base image, and the RUN /docker-entrypoint.sh --help never ran through for me - it always complained with The command '/bin/sh -c /docker-entrypoint.sh -' returned a non-zero code: 1
I don't want to pollute the /docker-entrypoint-initdb.d dir
The following answer is originally from my reply in another post: https://stackoverflow.com/a/59303962/4440427. It should be noted that the solution is for restoring from a binary dump instead of from plain SQL, as asked by the OP. But it can be modified slightly to adapt to the plain SQL case.
Dockerfile:
FROM postgres:9.6.16-alpine
LABEL maintainer="lu#cobrainer.com"
LABEL org="Cobrainer GmbH"
ARG PG_POSTGRES_PWD=postgres
ARG DBUSER=someuser
ARG DBUSER_PWD=P#ssw0rd
ARG DBNAME=sampledb
ARG DB_DUMP_FILE=example.pg
ENV POSTGRES_DB launchpad
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD ${PG_POSTGRES_PWD}
ENV PGDATA /pgdata
COPY wait-for-pg-isready.sh /tmp/wait-for-pg-isready.sh
COPY ${DB_DUMP_FILE} /tmp/pgdump.pg
RUN set -e && \
nohup bash -c "docker-entrypoint.sh postgres &" && \
/tmp/wait-for-pg-isready.sh && \
psql -U postgres -c "CREATE USER ${DBUSER} WITH SUPERUSER CREATEDB CREATEROLE ENCRYPTED PASSWORD '${DBUSER_PWD}';" && \
psql -U ${DBUSER} -d ${POSTGRES_DB} -c "CREATE DATABASE ${DBNAME} TEMPLATE template0;" && \
pg_restore -v --no-owner --role=${DBUSER} --exit-on-error -U ${DBUSER} -d ${DBNAME} /tmp/pgdump.pg && \
psql -U postgres -c "ALTER USER ${DBUSER} WITH NOSUPERUSER;" && \
rm -rf /tmp/pgdump.pg
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD pg_isready -U postgres -d launchpad
where the wait-for-pg-isready.sh is:
#!/bin/bash
set -e
get_non_lo_ip() {
  # parse `ifconfig` output and capture the first non-loopback (non-127.0.0.1) inet address
  local _ip _non_lo_ip _line _nl=$'\n'
  while IFS=$': \t' read -a _line ;do
    [ -z "${_line%inet}" ] &&
      _ip=${_line[${#_line[1]}>4?1:2]} &&
      [ "${_ip#127.0.0.1}" ] && _non_lo_ip=$_ip
  done< <(LANG=C /sbin/ifconfig)
  # print the address, or assign it to the variable named in $1 if given
  printf ${1+-v} $1 "%s${_nl:0:$[${#1}>0?0:1]}" $_non_lo_ip
}
get_non_lo_ip NON_LO_IP
until pg_isready -h $NON_LO_IP -U "postgres" -d "launchpad"; do
>&2 echo "Postgres is not ready - sleeping..."
sleep 4
done
>&2 echo "Postgres is up - you can execute commands now"
The above scripts together with a more detailed README are available at https://github.com/cobrainer/pg-docker-with-restored-db
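A build-and-run sketch for the Dockerfile above (the image tag is illustrative; the --build-arg names are the ARGs declared in the Dockerfile):
docker build -t pg-restored --build-arg DBNAME=sampledb --build-arg DB_DUMP_FILE=example.pg .
docker run -d -p 5432:5432 pg-restored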
I was able to load the data in by prepending the RUN command in the Dockerfile with /etc/init.d/postgresql. My Dockerfile has the following line, which is working for me:
RUN /etc/init.d/postgresql start && /usr/bin/psql -a < /tmp/dump.sql
For E2E tests, in which we need a database with structure and data already saved in the Docker image, we have done the following:
Dockerfile:
FROM postgres:9.4.24-alpine
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
ENV PGDATA /pgdata
COPY database.backup /tmp/
COPY database_restore.sh /docker-entrypoint-initdb.d/
RUN /docker-entrypoint.sh --help
RUN rm -rf /docker-entrypoint-initdb.d/database_restore.sh
RUN rm -rf /tmp/database.backup
database_restore.sh:
#!/bin/sh
set -e
pg_restore -C -d postgres /tmp/database.backup
To create the image:
docker build .
To start the container:
docker run --name docker-postgres -d -p 5432:5432 <Id-docker-image>
This does not restore the database every time the container is booted. The structure and data of the database are already contained in the created Docker image.
We have based this on this article, but eliminating the multistage build:
Creating Fast, Lightweight Testing Databases in Docker
Edit: version 9.4-alpine does not work now, because it does not run the database_restore.sh script. Use version 9.4.24-alpine instead.
My goal was to have an image that contains the database - i.e. saving the time to rebuild it every time I do docker run or docker-compose up.
We would just have to manage to get the line exec "$@" out of docker-entrypoint.sh. So I added into my Dockerfile:
# Copy my SQL scripts into the image to /docker-entrypoint-initdb.d:
COPY ./init_db /docker-entrypoint-initdb.d
# init db
RUN grep -v 'exec "$@"' /usr/local/bin/docker-entrypoint.sh > /tmp/docker-entrypoint-without-serverstart.sh && \
    chmod a+x /tmp/docker-entrypoint-without-serverstart.sh && \
    /tmp/docker-entrypoint-without-serverstart.sh postgres && \
    rm -rf /docker-entrypoint-initdb.d/* /tmp/docker-entrypoint-without-serverstart.sh

Backup/Restore a dockerized PostgreSQL database

I'm trying to backup/restore a PostgreSQL database as is explained on the Docker website, but the data is not restored.
The volumes used by the database image are:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and the CMD is:
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
I create the DB container with this command:
docker run -it --name "$DB_CONTAINER_NAME" -d "$DB_IMAGE_NAME"
Then I connect another container to insert some data manually:
docker run -it --rm --link "$DB_CONTAINER_NAME":db "$DB_IMAGE_NAME" sh -c 'exec bash'
psql -d test -h $DB_PORT_5432_TCP_ADDR
# insert some data in the db
<CTRL-D>
<CTRL-D>
The tar archive is then created:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /etc/postgresql /var/log/postgresql /var/lib/postgresql
Now I remove the container used for the db and create another one, with the same name, and try to restore the data inserted before:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
But the tables are empty. Why is the data not properly restored?
Backup your databases
docker exec -t your-db-container pg_dumpall -c -U postgres > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
Restore your databases
cat your_dump.sql | docker exec -i your-db-container psql -U postgres
Backup Database
Generate the SQL dump:
docker exec -t your-db-container pg_dumpall -c -U your-db-user > dump_$(date +%Y-%m-%d_%H_%M_%S).sql
To reduce the size of the SQL dump, you can generate a compressed one:
docker exec -t your-db-container pg_dumpall -c -U your-db-user | gzip > ./dump_$(date +"%Y-%m-%d_%H_%M_%S").gz
Restore Database
cat your_dump.sql | docker exec -i your-db-container psql -U your-db-user -d your-db-name
To restore a compressed SQL dump:
gunzip < your_dump.sql.gz | docker exec -i your-db-container psql -U your-db-user -d your-db-name
PS: this is a compilation of what worked for me, and what I got from here and elsewhere. I am beginning to make contributions, so any feedback will be appreciated.
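If you want to run these dumps on a schedule straight from the host, a minimal cron entry might look like this (a sketch; the backup path is illustrative, and note that % must be escaped in crontab):
0 3 * * * docker exec -t your-db-container pg_dumpall -c -U postgres > /backups/dump_$(date +\%d-\%m-\%Y).sql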
I think you can also use a postgres backup container which would back up your databases on a given schedule.
pgbackups:
  container_name: Backup
  image: prodrigestivill/postgres-backup-local
  restart: always
  volumes:
    - ./backup:/backups
  links:
    - db:db
  depends_on:
    - db
  environment:
    - POSTGRES_HOST=db
    - POSTGRES_DB=${DB_NAME}
    - POSTGRES_USER=${DB_USER}
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_EXTRA_OPTS=-Z9 --schema=public --blobs
    - SCHEDULE=@every 0h30m00s
    - BACKUP_KEEP_DAYS=7
    - BACKUP_KEEP_WEEKS=4
    - BACKUP_KEEP_MONTHS=6
    - HEALTHCHECK_PORT=81
The cat db.dump | docker exec ... approach didn't work for my dump (~2Gb). It took a few hours and ended up with an out-of-memory error.
Instead, I cp'ed the dump into the container and pg_restore'ed it from within.
Assuming that container id is CONTAINER_ID and db name is DB_NAME:
# copy dump into container
docker cp local/path/to/db.dump CONTAINER_ID:/db.dump
# shell into container
docker exec -it CONTAINER_ID bash
# restore it from within
pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump
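The last two steps can also be collapsed into a single non-interactive command (same assumptions as above):
docker exec CONTAINER_ID pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump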
Okay, I've figured this out. Postgresql does not detect changes to the folder /var/lib/postgresql once it's launched, at least not the kind of changes I want it to detect.
The first solution is to start a container with bash instead of starting the postgres server directly, restore the data, and then start the server manually.
The second solution is to use a data container. I didn't get the point of it before; now I do.
This data container allows you to restore the data before starting the postgres container. Thus, when the postgres server starts, the data is already there.
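For illustration, a minimal sketch of that data-container approach (the container name pg_data is illustrative; the tar file is the one created earlier):
# create a data container that owns the postgres volume
docker create -v /var/lib/postgresql --name pg_data busybox
# restore the backup into the data container's volume first
docker run --rm --volumes-from pg_data -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
# then start postgres on top of the already-populated volume
docker run -d --volumes-from pg_data "$DB_IMAGE_NAME"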
The below command can be used to take a dump from a dockerized postgres container:
docker exec -t <postgres-container-name> pg_dump --no-owner -U <db-username> <db-name> > file-name-to-backup-to.sql
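The matching restore for that dump follows the same pattern as the other answers here:
cat file-name-to-backup-to.sql | docker exec -i <postgres-container-name> psql -U <db-username> -d <db-name>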
The top answer didn't work for me. I kept getting this error:
psql: error: FATAL: Peer authentication failed for user "postgres"
To get it to work I had to specify a user for the docker container:
Backup
docker exec -t --user postgres your-db-container pg_dumpall -c -U postgres > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
Restore
cat your_dump.sql | docker exec -i --user postgres your-db-container psql -U postgres
Another approach (based on docker-postgresql-workflow):
Local running database (not in docker, but the same approach would work) to export:
pg_dump -F c -h localhost -U postgres mydb > export.dmp
Container database to import:
docker run -d -v /local/path/to/postgres:/var/lib/postgresql/data postgres # runs the container; find its name (CONTAINERNAME) via `docker ps`
docker run -it --link CONTAINERNAME:postgres --volume $PWD/:/tmp/ postgres bash -c 'exec pg_restore -h postgres -U postgres -d mydb -F c /tmp/export.dmp'
I had this issue while trying to use a db_dump to restore a db. I normally use dbeaver to restore - however, I received a psql dump, so I had to figure out a method to restore it using the docker container.
The methodology recommended by Forth and edited by Soviut worked for me:
cat your_dump.sql | docker exec -i your-db-container psql -U postgres -d dbname
(since this was a single db dump and not multiple db's, I included the name)
However, in order to get this to work, I had to also go into the virtualenv that the docker container and project were in. This eluded me for a bit before I figured it out, as I was receiving the following docker error:
read unix @->/var/run/docker.sock: read: connection reset by peer
This can be caused by the file /var/lib/docker/network/files/local-kv.db. I don't know the accuracy of this statement, but I believe I was seeing this because I do not use docker locally, and therefore did not have this file, which it was looking for when using Forth's answer.
I then navigated to the correct directory (with the project), activated the virtualenv, and then ran the accepted answer. Boom, worked like a top. Hope this helps someone else out there!
dksnap (https://github.com/kelda/dksnap) automates the process of running pg_dumpall and loading the dump via /docker-entrypoint-initdb.d.
It shows you a list of running containers, and you pick which one you want to back up. The resulting artifact is a regular Docker image, so you can then docker run it, or share it by pushing it to a Docker registry.
(disclaimer: I'm a maintainer on the project)
This is the command that worked for me:
cat your_dump.sql | sudo docker exec -i {docker-postgres-container} psql -U {user} -d {database_name}
for example
cat table_backup.sql | docker exec -i 03b366004090 psql -U postgres -d postgres
Reference: Solution given by GMartinez-Sisti in this discussion.
https://gist.github.com/gilyes/525cc0f471aafae18c3857c27519fc4b
Solution for docker-compose users:
First, run the docker-compose file with either of the following commands: $ docker-compose -f local.yml up OR docker-compose -f local.yml up -d
For taking backup: $ docker-compose -f local.yml exec postgres backup
To see list of backups inside container: $ docker-compose -f local.yml exec postgres backups
Open another terminal and run following command: $ docker ps
Look for the CONTAINER ID of postgres image and copy the ID. Let's assume the CONTAINER ID is: ba78c0f9bcee
Now to bring that backup into your local file system, run the following command: $ docker cp ba78c0f9bcee:/backups ./local_backupfolder
Hope this will help someone who was lost just like me.
N.B: The full details of this solution can be found here.
Another way to do it is to run pg_restore from the host machine (provided, of course, that you have postgres set up on your host).
Assume you have the port mapping "5436:5432" for the postgres service in your docker-compose file; this mapping lets you access the container's postgres (running on port 5432) via your host machine's port 5436:
pg_restore -h localhost -p 5436 -U <POSTGRES_USER> -d <POSTGRES_DB> /Path/to/the/.psql/file/in/your/host_machine
This way you do not have to dive into the container's terminal or copy the dump file to the container.
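For reference, the assumed port mapping in the docker-compose file would look like this:
services:
  postgres:
    ports:
      - "5436:5432"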
I would like to add the official docker documentation for backups and restores. This applies to all kinds of data within a volume, not just postegres.
Backup a container
Create a new container named dbstore:
$ docker run -v /dbdata --name dbstore ubuntu /bin/bash
Then in the next command, we:
Launch a new container and mount the volume from the dbstore container
Mount a local host directory as /backup
Pass a command that tars the contents of the dbdata volume to a backup.tar file inside our /backup directory.
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
When the command completes and the container stops, we are left with a backup of our dbdata volume.
Restore container from backup
With the backup just created, you can restore it to the same container, or another that you made elsewhere.
For example, create a new container named dbstore2:
$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash
Then un-tar the backup file in the new container's data volume:
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
You can use the techniques above to automate backup, migration and restore testing using your preferred tools.
Using a File System Level Backup on Docker Volumes
Example Docker Compose
version: "3.9"
services:
  db:
    container_name: pg_container
    image: platerecognizer/parkpow-postgres
    # restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: admin
volumes:
  postgres_data:
Backup Postgresql Volume
docker run --rm \
--user root \
--volumes-from pg_container \
-v /tmp/db-bkp:/backup \
ubuntu tar cvf /backup/db.tar /var/lib/postgresql/data
Then copy /tmp/db-bkp to the second host.
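For example, with scp (the host name is illustrative):
scp -r /tmp/db-bkp user@second-host:/tmp/db-bkp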
Restore Postgresql Volume
docker run --rm \
--user root \
--volumes-from pg_container \
-v /tmp/db-bkp:/backup \
ubuntu bash -c "cd /var && tar xvf /backup/db.tar --strip 1"
