Docker postgres starting automation cannot create database due to error - database

I want to create a shell script that automates creating and running a Postgres container with Docker and then creating a database inside it.
I want to use the official postgres Docker image.
The script that I use is as follows:
docker network create --subnet=172.18.0.0/16 shared_network;
docker kill postgres_linker;
docker rm postgres_linker;
docker run --name postgres_linker -e POSTGRES_PASSWORD=blahblahblah -d --net shared_network --ip 172.18.0.2 postgres:10-alpine;
docker exec -it postgres_linker psql -U postgres -c "create database linker;";
But when I run this I get the following output without any database being created:
Error response from daemon: network with name shared_network already exists
postgres_linker
postgres_linker
b2a9fd4d6e25b62d60adb05c8b6b653a1b55ec7a869c4728677d6289f5cddd63
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The first line of this log is OK, and so are the second and third. The problem is that psql does not run the command on the postgres container, although the command I am trying to run is correct. If I run the last command separately from my shell script:
docker exec -it postgres_linker psql -U postgres -c "create database linker;";
It does not give me an error, and it works!
Why is this behavior happening?

I found the solution to my problem.
I had assumed that when the postgres container is started, the server is up and running immediately.
That is not true; it takes some time to come up, so I needed to add a delay.
The resulting script file should look like this:
docker network create --subnet=172.18.0.0/16 shared_network;
docker kill postgres_linker;
docker rm postgres_linker;
docker run --name postgres_linker -e POSTGRES_PASSWORD=blahblahblah -d --net shared_network --ip 172.18.0.2 postgres:10-alpine;
sleep 5;
docker exec -it postgres_linker psql -U postgres -c "create database linker;";
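A fixed sleep can be fragile on slower machines. As an alternative, a minimal sketch (same container name as above, purely illustrative) that polls pg_isready until the server accepts connections before creating the database:
until docker exec postgres_linker pg_isready -U postgres > /dev/null 2>&1; do
  # server not ready yet; try again in a second
  sleep 1;
done;
docker exec -it postgres_linker psql -U postgres -c "create database linker;";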

Related

Docker container exec errors out

I have a Dockerfile that builds a SQL Server image. I copy a SQL create-table script into the image along with an execs.sh that invokes this SQL script; I call execs.sh using docker exec -it "execs.sh".
What I am seeing is that it errors out if I invoke execs.sh using "docker exec -it", but if I shell into the container and run execs.sh it works. What am I doing wrong?
Dockerfile:
FROM microsoft/mssql-server-linux
COPY create_tables.sql .
COPY execs.sh .
# set environment variables
ENV MSSQL_SA_PASSWORD=P#ssw0rd
ENV ACCEPT_EULA=Y
execs.sh contents (this gets invoked from docker exec -it):
echo "Executing SQL scripts"
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P#ssw0rd' -i ./create_tables.sql
Script to build the container, run it, and execute commands:
docker build -t mssql_test .
docker run -p 1433:1433 --name mssql2 -d mssql_test
docker exec -it mssql2 "/execs.sh"
docker exec -it mssql2 "ls"
Output:
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process
caused "exec format error": unknown
bin create_tables.sql execs.sh lib mnt root srv usr
boot dev home lib64 opt run sys var
etc install.sh media proc sbin tmp
You're missing an interpreter at the top of the script. When you run it from a shell like bash or sh, the shell assumes you want to run it as a shell script with the current shell. But when you run it with an OS exec, there is no shell to make that assumption. Therefore you want the following as the first line of your script:
#!/bin/sh
And be sure that's saved with Linux line feeds, not Windows line feeds, or it will give you a file-not-found error.
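With the interpreter line added (and Unix line endings), the execs.sh from the question would look like this:
#!/bin/sh
echo "Executing SQL scripts"
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P#ssw0rd' -i ./create_tables.sql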
The problem is that the command you are trying to force when the Docker container starts (in your case, the one that runs ./create_tables.sql) executes the moment the container starts.
SQL Server hasn't started yet at that point, so the SQL file cannot run against the service.
You have to either add a generous wait to your script or build a retry loop (e.g. check every second, up to 100 times, whether SQL Server is reachable before giving up), as sketched below.
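A minimal retry-loop sketch along those lines, reusing the sqlcmd path and password from the question:
#!/bin/sh
# wait until SQL Server answers a trivial query, checking once per second, at most 100 times
for i in $(seq 1 100); do
  /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P#ssw0rd' -Q "SELECT 1" > /dev/null 2>&1 && break
  sleep 1
done
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'P#ssw0rd' -i ./create_tables.sql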

Problem Optimizing Docker Container Start With SqlServer on MacBook OSX

I'm a Docker newbie and have managed to put together simple steps to create, start, and load an image running SQL Server with a database backup. For me, it's three steps now.
docker pull mcr.microsoft.com/mssql/server:2019-latest
docker run --name SQL19c -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=xxxx" -v /Users/useraccount/sql:/sql -d mcr.microsoft.com/mssql/server:2019-latest
docker exec -it SQL19c /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'xxxx' -Q 'RESTORE DATABASE svcodecamp FROM DISK = "/sql/sv-small-2019.bak" WITH MOVE "361684_codecamp08_dat" TO "/var/opt/mssql/ata/codecamp08_dat.mdf", MOVE "361684_codecamp08_log" TO "/var/opt/mssql/data/codecamp08_log.mdf"'
This is on my MacBook running OSX, which I reboot frequently, so I need to do this every time I need to use SQL Server.
Questions with this:
1) Each time I do this, I have to increment SQL19c to SQL19d (or the next letter of the alphabet) because I get an error saying the name is in use. How do I re-use the same name?
2) If I rm the container, it has to re-pull the full image (1 GB). I need to just start it and reload the data, not pull the full image.
3) Is there a more optimal way to start SQL Server and load the data without using too much of my battery every time I reboot my computer or restart Docker?
(notice my backup file is on a docker share so I don't need to recopy that in)
This is because you already have a container with this name. Try to execute the command:
docker container list -a
If you re-pull the image, I think you only replace the image, NOT the container. To remove the container you must run:
docker container rm SQL19x
The way to go is to restart the container, drop the database, and then restore it.
Run ONCE (this creates the SQL19x container):
docker pull mcr.microsoft.com/mssql/server:2019-latest
docker run --name SQL19x -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=xxxx" -v /Users/useraccount/sql:/sql -d mcr.microsoft.com/mssql/server:2019-latest
Now, each time you restart the machine, run the commands below to start the container and restore the database.
docker container start SQL19x
docker exec -it SQL19x /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'xxxx' -Q 'DROP DATABASE svcodecamp'
docker exec -it SQL19x /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'xxxx' -Q 'RESTORE DATABASE svcodecamp FROM DISK = "/sql/sv-small-2019.bak" WITH MOVE "361684_codecamp08_dat" TO "/var/opt/mssql/ata/codecamp08_dat.mdf", MOVE "361684_codecamp08_log" TO "/var/opt/mssql/data/codecamp08_log.mdf"'
If you want a clean shutdown before you power off your machine, execute:
docker container stop SQL19x
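Note that SQL Server also takes a few seconds to accept connections after docker container start, so the drop/restore step may need a short wait first; a sketch using the same container name and password placeholder as above:
docker container start SQL19x
# poll until SQL Server answers a trivial query before running the DROP/RESTORE commands above
until docker exec SQL19x /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'xxxx' -Q 'SELECT 1' > /dev/null 2>&1; do
  sleep 1
done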

Failed to connect to docker hosted MSSQL

I have a problem following [this tutorial](https://hub.docker.com/r/microsoft/mssql-server-linux/) where I try to connect to my Docker-hosted MSSQL via sqlcmd.
I executed the following in PowerShell from windows:
docker run -e 'ACCEPT_EULA=Y' --name mssql -e \
'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -it \
-d microsoft/mssql-server-linux:latest /bin/bash
Note: "-it" and "/bin/bash" is added due to docker will be stopped automatically if there is no any activity detected.
I ran docker container ls -a to verify it is running:
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
92cfc504ab70 microsoft/mssql-server-linux:latest "/bin/bash" 27 minutes ago Up 27 minutes 0.0.0.0:1433->1433/tcp mssql
I ran telnet local-ip:1433 on my host, and it works fine.
The problem arises when I do the following:
docker exec -it mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U sa \
-P yourStrong(!)Password
Error:
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Login timeout
expired. Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : TCP
Provider: Error code 0x2749. Sqlcmd: Error: Microsoft ODBC Driver 17
for SQL Server : A network-related or instance-specific error has
occurred while establishing a connection to SQL Server. Server is not
found or not accessible. Check if instance name is correct and if SQL
Server is configured to allow remote connections. For more information
see SQL Server Books Online..
I also tried to connect using PowerShell from my host.
Link:https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker
Command:
sqlcmd -S 192.168.0.110,1433 -U SA -P yourStrong(!)Password
Note: 192.168.0.110 (got this from running ipconfig on the host machine).
Any help?
I found out the problem after some trial and error and re-reading the documentation: I should have used double quotes for the arguments when executing my command in PowerShell.
I was looking into the wrong direction. Initially I executed the command:
docker run -e 'ACCEPT_EULA=Y' --name mssql -e \
'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d \
microsoft/mssql-server-linux:latest
The container stopped automatically by itself every time it started.
Then, I did some googling and found:
docker run -e 'ACCEPT_EULA=Y' --name mssql -e \
'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -it -d \
microsoft/mssql-server-linux:latest /bin/bash
It seemed fine on the surface: the command executed successfully in PowerShell, and the container didn't stop automatically anymore. But if I dig deeper using
docker container logs mssql
to see the log for mssql, no error is given; I just don't see much output, which led me to think there were no problems in my command.
But the right way to run these commands is with double quotes.
Link: https://hub.docker.com/r/microsoft/mssql-server-linux/
IMPORTANT NOTE: If you are using PowerShell on Windows to run these commands use double quotes instead of single quotes.
E.g.
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=YourStrong!Passw0rd" -p 1401:1433 --name sql1 -d microsoft/mssql-server-linux:2017-latest
I am also able to login using SSMS with:
Server name: Hostip,1401
Username: sa
Password:yourpassword
Try 127.0.0.1 or 0.0.0.0 instead of localhost
For example:
docker exec -it mssql /opt/mssql-tools/bin/sqlcmd -S 127.0.0.1 -U sa -P 'yourStrong(!)Password'
The docker run command syntax is the following:
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
When you execute the command:
docker run -e 'ACCEPT_EULA=Y' --name mssql -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -it -d microsoft/mssql-server-linux:latest /bin/bash
/bin/bash at the end overrides the CMD layer defined in the Dockerfile of the microsoft/mssql-server-linux image.
So just start a container without any additional command at the end:
$ docker run -e 'ACCEPT_EULA=Y' --name mssql -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -it -d microsoft/mssql-server-linux:latest
And now you are able to access MSSQL:
$ docker exec -it mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'yourStrong(!)Password'
1>
I'm new to Docker, and I also had the same issue when trying to connect to the SQL Server container from my application (or Microsoft's sqlcmd container), which was also running in another Docker container. Each container gets its own IP address on the bridge network, so 'localhost' will never work if you're trying to connect to SQL Server from another container.
The command below will give you the full list of IP addresses on the bridge network. You can specify the IP directly in the connection string.
docker network inspect bridge
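If you only need the address of a single container, a sketch using docker inspect with a Go-template format string (the container name mssql is taken from the question):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mssql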
From your message, it looks like the server is not configured to allow remote connections. Can you enable it as follows?
Using SSMS (SQL Server Management studio):
In Object Explorer, right-click a server and select Properties.
Click the Connections node.
Under Remote server connections, select the Allow remote connections to this server check box.
Thanks,
Ananda Kumar J.

Restore .sql.xz file to postgresql database in docker

First, I followed the PostgreSQL instructions on Docker:
docker run --name test-postgres
Then I tried to restore my DB with the following command:
xz -dc matrix-ww.sql.xz | docker exec -i test-postgres psql -U postgres matrix --set ON_ERROR_STOP=on --single-transaction
The command seems to execute; however, when I check the database and its tables, the database is still empty.

Backup/Restore a dockerized PostgreSQL database

I'm trying to backup/restore a PostgreSQL database as explained on the Docker website, but the data is not restored.
The volumes used by the database image are:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and the CMD is:
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
I create the DB container with this command:
docker run -it --name "$DB_CONTAINER_NAME" -d "$DB_IMAGE_NAME"
Then I connect another container to insert some data manually:
docker run -it --rm --link "$DB_CONTAINER_NAME":db "$DB_IMAGE_NAME" sh -c 'exec bash'
psql -d test -h $DB_PORT_5432_TCP_ADDR
# insert some data in the db
<CTRL-D>
<CTRL-D>
The tar archive is then created:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /etc/postgresql /var/log/postgresql /var/lib/postgresql
Now I remove the container used for the DB, create another one with the same name, and try to restore the data inserted before:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
But the tables are empty. Why is the data not properly restored?
Backup your databases
docker exec -t your-db-container pg_dumpall -c -U postgres > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
Restore your databases
cat your_dump.sql | docker exec -i your-db-container psql -U postgres
Backup Database
generate sql:
docker exec -t your-db-container pg_dumpall -c -U your-db-user > dump_$(date +%Y-%m-%d_%H_%M_%S).sql
To reduce the size of the dump you can generate a compressed file:
docker exec -t your-db-container pg_dumpall -c -U your-db-user | gzip > ./dump_$(date +"%Y-%m-%d_%H_%M_%S").gz
Restore Database
cat your_dump.sql | docker exec -i your-db-container psql -U your-db-user -d your-db-name
To restore a compressed dump:
gunzip < your_dump.sql.gz | docker exec -i your-db-container psql -U your-db-user -d your-db-name
PS: this is a compilation of what worked for me and what I gathered from here and elsewhere. I am just beginning to make contributions; any feedback will be appreciated.
I think you can also use a Postgres backup container, which backs up your databases on a given schedule.
pgbackups:
  container_name: Backup
  image: prodrigestivill/postgres-backup-local
  restart: always
  volumes:
    - ./backup:/backups
  links:
    - db:db
  depends_on:
    - db
  environment:
    - POSTGRES_HOST=db
    - POSTGRES_DB=${DB_NAME}
    - POSTGRES_USER=${DB_USER}
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_EXTRA_OPTS=-Z9 --schema=public --blobs
    - SCHEDULE=@every 0h30m00s
    - BACKUP_KEEP_DAYS=7
    - BACKUP_KEEP_WEEKS=4
    - BACKUP_KEEP_MONTHS=6
    - HEALTHCHECK_PORT=81
The cat db.dump | docker exec ... approach didn't work for my dump (~2 GB). It took a few hours and ended with an out-of-memory error.
Instead, I copied the dump into the container and ran pg_restore from within it.
Assuming that container id is CONTAINER_ID and db name is DB_NAME:
# copy dump into container
docker cp local/path/to/db.dump CONTAINER_ID:/db.dump
# shell into container
docker exec -it CONTAINER_ID bash
# restore it from within
pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump
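The same copy-and-restore can also be run without an interactive shell, as a one-liner sketch with the same placeholders:
docker cp local/path/to/db.dump CONTAINER_ID:/db.dump
docker exec -i CONTAINER_ID pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump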
Okay, I've figured this out. PostgreSQL does not detect changes to the folder /var/lib/postgresql once it's launched, at least not the kind of changes I want it to detect.
The first solution is to start a container with bash instead of starting the postgres server directly, restore the data, and then start the server manually.
The second solution is to use a data container. I didn't get the point of it before, now I do.
This data container allows you to restore the data before starting the postgres container. Thus, when the postgres server starts, the data is already there.
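A minimal sketch of that data-container approach, reusing the variable names from the question (the pg_data name and busybox image are illustrative assumptions):
# data-only container that owns the postgres volumes
docker create -v /etc/postgresql -v /var/log/postgresql -v /var/lib/postgresql --name pg_data busybox
# restore the backup into those volumes before postgres ever starts
docker run --rm --volumes-from pg_data -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
# start the database container on top of the already-restored volumes
docker run -d --volumes-from pg_data --name "$DB_CONTAINER_NAME" "$DB_IMAGE_NAME"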
The command below can be used to take a dump from a Docker Postgres container:
docker exec -t <postgres-container-name> pg_dump --no-owner -U <db-username> <db-name> > file-name-to-backup-to.sql
The top answer didn't work for me. I kept getting this error:
psql: error: FATAL: Peer authentication failed for user "postgres"
To get it to work I had to specify a user for the docker container:
Backup
docker exec -t --user postgres your-db-container pg_dumpall -c -U postgres > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
Restore
cat your_dump.sql | docker exec -i --user postgres your-db-container psql -U postgres
Another approach (based on docker-postgresql-workflow)
Locally running database (not in Docker, but the same approach would work) to export:
pg_dump -F c -h localhost -U postgres mydb > export.dmp
Container database to import:
docker run -d -v /local/path/to/postgres:/var/lib/postgresql/data postgres   # runs the container; find its name (`CONTAINERNAME` below) via `docker ps`
docker run -it --link CONTAINERNAME:postgres --volume $PWD/:/tmp/ postgres bash -c 'exec pg_restore -h postgres -U postgres -d mydb -F c /tmp/export.dmp'
I had this issue while trying to use a DB dump to restore a DB. I normally use DBeaver to restore; however, I received a psql dump, so I had to figure out a method to restore using the Docker container.
The methodology recommended by Forth and edited by Soviut worked for me:
cat your_dump.sql | docker exec -i your-db-container psql -U postgres -d dbname
(since this was a single-DB dump and not multiple DBs, I included the name)
However, in order to get this to work, I also had to go into the virtualenv that the Docker container and project were in. This eluded me for a bit before I figured it out, as I was receiving the following Docker error.
read unix #->/var/run/docker.sock: read: connection reset by peer
This can be caused by the file /var/lib/docker/network/files/local-kv.db. I don't know how accurate this statement is, but I believe I was seeing it because I do not use Docker locally, and therefore did not have this file, which it was looking for when using Forth's answer.
I then navigated to the correct directory (with the project), activated the virtualenv, and then ran the accepted answer. Boom, worked like a top. Hope this helps someone else out there!
dksnap (https://github.com/kelda/dksnap) automates the process of running pg_dumpall and loading the dump via /docker-entrypoint-initdb.d.
It shows you a list of running containers, and you pick which one you want to backup. The resulting artifact is a regular Docker image, so you can then docker run it, or share it by pushing it to a Docker registry.
(disclaimer: I'm a maintainer on the project)
This is the command that worked for me:
cat your_dump.sql | sudo docker exec -i {docker-postgres-container} psql -U {user} -d {database_name}
for example
cat table_backup.sql | docker exec -i 03b366004090 psql -U postgres -d postgres
Reference: Solution given by GMartinez-Sisti in this discussion.
https://gist.github.com/gilyes/525cc0f471aafae18c3857c27519fc4b
Solution for docker-compose users:
First, run the docker-compose file with any one of the following commands: $ docker-compose -f local.yml up OR docker-compose -f local.yml up -d
For taking backup: $ docker-compose -f local.yml exec postgres backup
To see list of backups inside container: $ docker-compose -f local.yml exec postgres backups
Open another terminal and run following command: $ docker ps
Look for the CONTAINER ID of the postgres image and copy it. Let's assume the CONTAINER ID is: ba78c0f9bcee
Now to bring that backup into your local file system, run the following command: $ docker cp ba78c0f9bcee:/backups ./local_backupfolder
Hope this will help someone who was lost just like me..
N.B: The full details of this solution can be found here.
Another way to do it is to run pg_restore from the host machine (of course, if you have the postgres tools set up on your host).
Assume you have the port mapping "5436:5432" for the postgres service in your docker-compose file. This port mapping lets you access the container's postgres (running on port 5432) via your host machine's port 5436:
pg_restore -h localhost -p 5436 -U <POSTGRES_USER> -d <POSTGRES_DB> /Path/to/the/.psql/file/in/your/host_machine
This way you do not have to dive into the container's terminal or copy the dump file to the container.
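If you start the container with plain docker run instead of docker-compose, the equivalent mapping would look roughly like this (the container name and password are placeholders):
docker run -d -p 5436:5432 -e POSTGRES_PASSWORD=<password> --name my_postgres postgres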
I would like to add the official Docker documentation for backups and restores. This applies to all kinds of data within a volume, not just Postgres.
Backup a container
Create a new container named dbstore:
$ docker run -v /dbdata --name dbstore ubuntu /bin/bash
Then in the next command, we:
Launch a new container and mount the volume from the dbstore container
Mount a local host directory as /backup
Pass a command that tars the contents of the dbdata volume to a backup.tar file inside our /backup directory.
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
When the command completes and the container stops, we are left with a backup of our dbdata volume.
Restore container from backup
With the backup just created, you can restore it to the same container, or another that you made elsewhere.
For example, create a new container named dbstore2:
$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash
Then un-tar the backup file in the new container's data volume:
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
You can use the techniques above to automate backup, migration and restore testing using your preferred tools.
Using a File System Level Backup on Docker Volumes
Example Docker Compose
version: "3.9"
services:
  db:
    container_name: pg_container
    image: platerecognizer/parkpow-postgres
    # restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: admin

volumes:
  postgres_data:
Backup Postgresql Volume
docker run --rm \
--user root \
--volumes-from pg_container \
-v /tmp/db-bkp:/backup \
ubuntu tar cvf /backup/db.tar /var/lib/postgresql/data
Then copy /tmp/db-bkp to the second host.
Restore Postgresql Volume
docker run --rm \
--user root \
--volumes-from pg_container \
-v /tmp/db-bkp:/backup \
ubuntu bash -c "cd /var && tar xvf /backup/db.tar --strip 1"
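Since this restores raw data files, it is safest to stop Postgres first and start it again afterwards; a sketch with the container name from the compose file above:
docker stop pg_container
# restore while the database is stopped
docker run --rm --user root --volumes-from pg_container -v /tmp/db-bkp:/backup \
ubuntu bash -c "cd /var && tar xvf /backup/db.tar --strip 1"
docker start pg_container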
