I am working on a MacBook Pro with an M1 CPU, so I can't use the "normal" mssql Docker image. I am using azure-sql-edge, which doesn't have sqlcmd to initialize the database (create the schema, database, and login).
I have created a SQL script that I would like to run once the container starts, but I can't find any alternative to sqlcmd.
Is there any other way to do it?
I had the same issue; I used the mssql-tools Docker image from the Microsoft registry.
Sample docker-compose:
---
version: '3.8'
services:
  mssql:
    image: mcr.microsoft.com/azure-sql-edge:latest
    command: /opt/mssql/bin/sqlservr
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "SA_Passw0rd"
    stdin_open: true
    ports:
      - 1433:1433
    networks:
      - db_net
  sqlcmd:
    image: mcr.microsoft.com/mssql-tools:latest
    command: /opt/mssql_scripts/run-initialization.sh
    stdin_open: true
    volumes:
      - ./mssql_scripts:/opt/mssql_scripts
    networks:
      - db_net
networks:
  db_net:
    name: db_net
To use this docker-compose file you need a shell script named run-initialization.sh, with execute permissions, inside the mssql_scripts folder.
The run-initialization.sh script waits for the database to start up and then executes SQL commands:
/opt/mssql-tools/bin/sqlcmd -S mssql -U SA -P SA_Passw0rd -d master -Q "SELECT @@VERSION"
or, if you want to execute from a test.sql file:
/opt/mssql-tools/bin/sqlcmd -S mssql -U SA -P SA_Passw0rd -d master -i /opt/mssql_scripts/test.sql
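For completeness, a minimal sketch of what run-initialization.sh could look like, assuming the service name mssql and the password from the compose file above (the retry loop and script path are illustrative):
#!/bin/bash
# Retry until SQL Server accepts connections, then run the init script.
until /opt/mssql-tools/bin/sqlcmd -S mssql -U SA -P SA_Passw0rd -d master -Q "SELECT 1" > /dev/null 2>&1
do
  echo "SQL Server is not ready yet, retrying..."
  sleep 2
done
/opt/mssql-tools/bin/sqlcmd -S mssql -U SA -P SA_Passw0rd -d master -i /opt/mssql_scripts/test.sql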
The solution above worked for me on a Mac with the M1 chip; you don't need to create a shell script, since you can run the commands directly:
sqlcmd:
  image: mcr.microsoft.com/mssql-tools:latest
  stdin_open: true
  environment:
    - MSSQL_SA_PASSWORD=Xxx
    - MSSQL_DATABASE=test
    - MSSQL_BACKUP="/opt/mssql/test.bak"
  volumes:
    - ./test_data.bak:/opt/mssql/test.bak
  command: /bin/bash -c '/opt/mssql-tools/bin/sqlcmd -S mssql -U sa -P $$MSSQL_SA_PASSWORD -d tempdb -q "EXIT(RESTORE DATABASE $$MSSQL_DATABASE FROM DISK = $$MSSQL_BACKUP)"; wait;'
mssql:
  image: mcr.microsoft.com/azure-sql-edge:latest
  environment:
    - ACCEPT_EULA=Y
    - MSSQL_SA_PASSWORD=Xxxx
    - MSSQL_DATABASE=test
    - MSSQL_SLEEP=7
  ports:
    - 1433:1433
Since I am starting a new project, I looked into this issue again and found a good solution for me.
I found go-sqlcmd, a new implementation of sqlcmd written in Go, which is compatible with M1 chips.
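On macOS, one way to install it is via Homebrew (the formula name sqlcmd is my assumption here; check the go-sqlcmd README if it has changed):
brew install sqlcmd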
So I am running azure-sql-edge as before using docker compose:
version: "3.9"
services:
  mssql:
    image: mcr.microsoft.com/azure-sql-edge:latest
    command: /opt/mssql/bin/sqlservr
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: ${DATABASE_SA_PASSWORD}
    stdin_open: true
    ports:
      - 1433:1433
When the database container is up and idle, I run this bash script (in my case I am reading the environment variables from a .NET appsettings.json file):
cat <appsetting.json> | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > temp
# Show env vars
grep -v '^#' temp
# Export env vars
export $(grep -v '^#' temp | xargs)
export SQLCMDPASSWORD=$DATABASE_SA_PASSWORD
sqlcmd -U sa \
-v DATABASE_SCHEMA=$DATABASE_SCHEMA \
-v DATABASE_DB_NAME=$DATABASE_DB_NAME \
-v DATABASE_LOGIN_NAME=$DATABASE_LOGIN_NAME \
-v DATABASE_LOGIN_PASSWORD=$DATABASE_LOGIN_PASSWORD \
-i sql/init-db.sql,sql/init-user.sql
I had to split the database and schema creation into one script; in a second script I create the login and assign the user to the database.
The SQL scripts, starting with init-db.sql:
USE master
IF NOT EXISTS (SELECT name FROM sys.schemas WHERE name = N'$(DATABASE_SCHEMA)')
BEGIN
EXEC sys.sp_executesql N'CREATE SCHEMA [$(DATABASE_SCHEMA)] AUTHORIZATION [dbo]'
END
IF NOT EXISTS (SELECT name FROM sys.databases WHERE name = N'$(DATABASE_DB_NAME)')
BEGIN
CREATE DATABASE $(DATABASE_DB_NAME)
END
init-user.sql:
USE $(DATABASE_DB_NAME)
IF NOT EXISTS(SELECT principal_id FROM sys.server_principals WHERE name = '$(DATABASE_LOGIN_NAME)') BEGIN
CREATE LOGIN $(DATABASE_LOGIN_NAME)
WITH PASSWORD = '$(DATABASE_LOGIN_PASSWORD)'
END
IF NOT EXISTS(SELECT principal_id FROM sys.database_principals WHERE name = '$(DATABASE_LOGIN_NAME)') BEGIN
CREATE USER $(DATABASE_LOGIN_NAME) FOR LOGIN $(DATABASE_LOGIN_NAME)
END
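If the new user also needs permissions inside the database, a role membership can be appended to init-user.sql. The db_owner role below is only an example (my assumption, not part of the original scripts); pick the narrowest role that works for you:
ALTER ROLE [db_owner] ADD MEMBER [$(DATABASE_LOGIN_NAME)]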
I create my MSSQL database Docker container with only a docker-compose.yml file and a setup.sql file. My .yml file looks like this:
version: "3.7"
services:
  sql-server-db:
    container_name: sql-server-db
    image: mcr.microsoft.com/mssql/server:2019-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "secret123new!"
      ACCEPT_EULA: "Y"
    volumes:
      - ./data/mssql:/scripts/
    command:
      - /bin/bash
      - -c
      - |
        # Launch MSSQL and send to background
        /opt/mssql/bin/sqlservr &
        pid=$$!
        # Wait for it to be available
        echo "Waiting for MS SQL to be available ⏳"
        /opt/mssql-tools/bin/sqlcmd -l 30 -S localhost -h-1 -V1 -U sa -P secret123new! -Q "SET NOCOUNT ON SELECT \"YAY WE ARE UP\" , @@servername"
        is_up=$$?
        while [ $$is_up -ne 0 ] ; do
          echo -e $$(date)
          /opt/mssql-tools/bin/sqlcmd -l 30 -S localhost -h-1 -V1 -U sa -P secret123new! -Q "SET NOCOUNT ON SELECT \"YAY WE ARE UP\" , @@servername"
          is_up=$$?
          sleep 5
        done
        # Run every script in /scripts
        # TODO set a flag so that this is only done once on creation,
        # and not every time the container runs
        cd /scripts
        for foo in /scripts/"*.sql"
          do echo "Processing $foo";
        done
        echo "All scripts have been executed. Waiting for MS SQL(pid $$pid) to terminate."
        # Wait on the sqlserver process
        wait $$pi
When I try to connect with DBeaver using the user credentials set in the file, there is an error that looks like this:
Login failed for user 'system'. ClientConnectionId:17e53706-8242-4bca-974b-3648b5ba7f13
There is also an extra warning:
The foo variable is not set. Defaulting to a blank string
The setup.sql file is in the /scripts folder, mounted from the local mssql directory on my desktop:
CREATE DATABASE probna
GO
USE probna
GO
CREATE LOGIN system WITH PASSWORD='system'
GO
CREATE USER system FOR LOGIN system
GO
ALTER ROLE [db_owner] ADD MEMBER system
GO
CREATE TABLE Products (ID int, ProductName nvarchar(max))
GO
sql-server-db | Service Broker endpoint is in disabled or stopped state.
sql-server-db | All scripts have been executed. Waiting for MS SQL(pid 8) to terminate.
What could be wrong with this? My setup.sql file is in the /scripts folder, and I don't have any other files there. Should I replace some of the code or some of the $$ lines? Please give me some tips on how to start a Docker MSSQL database with a startup file on Windows without Node.js :) Have a nice day!
I'm trying to run a setup script on a Docker SQL Server image
For this I have created a Dockerfile based on the mssql image:
FROM microsoft/mssql-server-linux:2017-CU8
# Create directory to place app specific files
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy setup scripts
COPY entrypoint.sh ./
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
In entrypoint.sh I'm starting SQL Server and I want to run some setup commands.
#!/bin/bash
#start SQL Server
/opt/mssql/bin/sqlservr &
echo 'Sleeping 20 seconds before running setup script'
sleep 20s
echo 'Starting setup script'
#run the setup script to create the DB and the schema in the DB
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P <MyPassWd> -d master -i setup.sql
echo 'Finished setup script'
When I run this script, the database starts, the setup runs, and after the setup is finished, the container shuts down.
So I thought something in the script was making the container shut down, so I stripped the script down to a bare minimum:
#!/bin/bash
#start SQL Server
/opt/mssql/bin/sqlservr &
echo 'Sleeping 20 seconds before running setup script'
sleep 20s
That also stops the container after sleep 20s finishes.
Moving on...
#!/bin/bash
#start SQL Server
/opt/mssql/bin/sqlservr &
Which stops the container right away
And then...
#!/bin/bash
#start SQL Server
/opt/mssql/bin/sqlservr
Now the container runs, but I can't do any initialization
Does someone know how to get this working?
Change the password of the SQL Server to be complex enough:
docker run -d -p 1433:1433 -e "sa_password=ComplexPW2019!" -e "ACCEPT_EULA=Y" <sqlserverimageid>
The root cause of this issue is PID 1 allocation in the Docker container.
PID 1 is allocated to the command given in CMD in the Dockerfile (in our case ./entrypoint.sh).
A container lives only as long as PID 1: as soon as PID 1 stops or is killed, the container stops.
1) In the case of /opt/mssql/bin/sqlservr &, a child PID is allocated to the sqlservr command, which executes in the background, and the container stops as soon as the rest of the script has run.
2) In the case of /opt/mssql/bin/sqlservr, the script does not proceed past this line until the process exits.
So the solution is to assign PID 1 to /opt/mssql/bin/sqlservr and execute the rest of the script as a child process.
I have made the changes below and it is working for me.
In the Dockerfile, replace CMD /bin/bash ./entrypoint.sh with CMD exec /bin/bash entrypoint.sh.
In entrypoint.sh:
#!/bin/bash
#start SQL Server
sh -c "
  echo 'Sleeping 20 seconds before running setup script'
  sleep 20s
  echo 'Starting setup script'
  #run the setup script to create the DB and the schema in the DB
  /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P \"YourStrong!Passw0rd\" -Q \"ALTER LOGIN SA WITH PASSWORD='NewStrong!Passw0rd'\"
  echo 'Finished setup script'
  exit
" &
exec /opt/mssql/bin/sqlservr
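To try it out, a build and run along these lines should work (the image tag and password are placeholders, not from the original post):
docker build -t mssql-custom .
docker run -d -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" mssql-custom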
To create the database on startup, try the approach below.
Dockerfile
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV ACCEPT_EULA Y
ENV DB_NAME test
COPY startup.sh /var/opt/mssql/startup.sh
CMD ["bash", "/var/opt/mssql/startup.sh"]
startup.sh
#!/usr/bin/env bash
if ! [ -f /var/opt/mssql/.initialized ] && [ -n "$DB_NAME" ]; then
  while ! </dev/tcp/localhost/1433 2>/dev/null; do
    sleep 2
  done
  echo "Creating $DB_NAME database..."
  /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$SA_PASSWORD" -d master \
    -Q "CREATE DATABASE $DB_NAME"
  touch /var/opt/mssql/.initialized
fi &
/opt/mssql/bin/sqlservr
SQL Server has to be the rightmost command.
I know it does not make sense, as you want SQL Server to run first and then run your scripts to create/restore databases. I guess this is because of the way SQL Server runs on Linux (the SQL Server process spawns another SQL Server process as part of startup).
MSDN documentation makes the order of execution clear at: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-docker?view=sql-server-ver15#customcontainer
So for your example, you would have to write something like:
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P <MyPassWd> -d master -i setup.sql & /opt/mssql/bin/sqlservr
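Note that sqlcmd then starts in parallel with the server, so in practice it needs its own delay or retry before running the script. A minimal sketch (the 30s sleep is my arbitrary guess; replace it with a retry loop if startup time varies):
(sleep 30s && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P <MyPassWd> -d master -i setup.sql) & /opt/mssql/bin/sqlservr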
There is a pull request that adds the ability to run init SQL scripts on the first run.
The purpose of this PR is to add to start.ps1 the ability to check the docker-entrypoint-initdb folder and run all SQL scripts inside it. Once that is done, the script creates a flag file to avoid re-running the setup phase on the next startup after a stop.
By mounting a volume from a local scripts folder to c:/docker-entrypoint-initdb, the container will execute all .sql script files. The volume should be a directory.
E.g.
version: "3.8"
services:
  sqlserver:
    platform: windows/amd64
    environment:
      - sa_password=<YourPassword>
      - ACCEPT_EULA=Y
    image: microsoft/mssql-server-windows-developer
    volumes:
      - ./dockerfiles/sqlserver/initdb:c:/docker-entrypoint-initdb:ro
    ports:
      - "1433:1433"
I need a SQL Server container with some databases. I've prepared the following Dockerfile:
FROM microsoft/mssql-server-linux:latest
ENV ACCEPT_EULA Y
ENV SA_PASSWORD yourStrong(!)Password
WORKDIR sqlserver
COPY load_db.sh load_db.sh
COPY /resources/sql/ sql
ENTRYPOINT ./load_db.sh
So it runs load_db.sh:
#!/usr/bin/env bash
SERVER_OUT=/var/log/sqlserver.out
TIMEOUT=90
/opt/mssql/bin/sqlservr &>${SERVER_OUT} &
function server_ready() {
  grep -q -F 'Recovery is complete.' ${SERVER_OUT}
}
echo 'Wait until Microsoft SQL Server is up'
for (( i=0; i<${TIMEOUT}; i++ )); do
  sleep 1
  if server_ready; then
    break
  fi
done
echo 'Microsoft SQL Server is up'
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${SA_PASSWORD} -Q "CREATE DATABASE MarketDataService;"
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${SA_PASSWORD} -d MarketDataService -i sql/2018.0100.000000001.Add_Layout_table.sql
To run it I've prepared the following docker-compose file:
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile-db
    ports:
      - 1433:1433
When I try to run docker-compose up --build it looks okay.
I have output:
db_1 | Wait until Microsoft SQL Server is up
db_1 | Microsoft SQL Server is up
db_1 |
db_1 | (3 rows affected)
But after that it exits...
amptest_db_1 exited with code 0
To solve it I've tried adding tty: true, but it makes no difference: same output. How can I keep my container alive with docker-compose? Here I've found tail -F anything. It works but looks terrible. Is there a better way?
Update: I've stayed with tail -F '/var/log/sqlserver.out'
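An alternative to tail that avoids the extra process is to capture the server's PID when launching it in the background and block on it at the end of load_db.sh; a minimal sketch under the same assumptions as the script above:
/opt/mssql/bin/sqlservr &>${SERVER_OUT} &
SQLSERVR_PID=$!
# ... readiness loop and sqlcmd initialization as above ...
# Block on the server process so the container stays alive.
wait ${SQLSERVR_PID}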
I'm trying to back up and restore a PostgreSQL database as explained on the Docker website, but the data is not restored.
The volumes used by the database image are:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and the CMD is:
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
I create the DB container with this command:
docker run -it --name "$DB_CONTAINER_NAME" -d "$DB_IMAGE_NAME"
Then I connect another container to insert some data manually:
docker run -it --rm --link "$DB_CONTAINER_NAME":db "$DB_IMAGE_NAME" sh -c 'exec bash'
psql -d test -h $DB_PORT_5432_TCP_ADDR
# insert some data in the db
<CTRL-D>
<CTRL-D>
The tar archive is then created:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /etc/postgresql /var/log/postgresql /var/lib/postgresql
Now I remove the container used for the db and create another one, with the same name, and try to restore the data inserted before:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
But the tables are empty. Why is the data not properly restored?
Backup your databases
docker exec -t your-db-container pg_dumpall -c -U postgres > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
Restore your databases
cat your_dump.sql | docker exec -i your-db-container psql -U postgres
Backup Database
Generate the SQL dump:
docker exec -t your-db-container pg_dumpall -c -U your-db-user > dump_$(date +%Y-%m-%d_%H_%M_%S).sql
To reduce the size, you can generate a compressed dump:
docker exec -t your-db-container pg_dumpall -c -U your-db-user | gzip > ./dump_$(date +"%Y-%m-%d_%H_%M_%S").gz
Restore Database
cat your_dump.sql | docker exec -i your-db-container psql -U your-db-user -d your-db-name
To restore a compressed dump:
gunzip < your_dump.sql.gz | docker exec -i your-db-container psql -U your-db-user -d your-db-name
P.S.: this is a compilation of what worked for me, and what I got from here and elsewhere. I am beginning to make contributions; any feedback will be appreciated.
I think you can also use a postgres backup container, which backs up your databases on a given schedule:
pgbackups:
  container_name: Backup
  image: prodrigestivill/postgres-backup-local
  restart: always
  volumes:
    - ./backup:/backups
  links:
    - db:db
  depends_on:
    - db
  environment:
    - POSTGRES_HOST=db
    - POSTGRES_DB=${DB_NAME}
    - POSTGRES_USER=${DB_USER}
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_EXTRA_OPTS=-Z9 --schema=public --blobs
    - SCHEDULE=@every 0h30m00s
    - BACKUP_KEEP_DAYS=7
    - BACKUP_KEEP_WEEKS=4
    - BACKUP_KEEP_MONTHS=6
    - HEALTHCHECK_PORT=81
The cat db.dump | docker exec ... approach didn't work for my dump (~2 GB). It took a few hours and ended up with an out-of-memory error.
Instead, I cp'ed the dump into the container and pg_restore'd it from within.
Assuming that the container id is CONTAINER_ID and the db name is DB_NAME:
# copy dump into container
docker cp local/path/to/db.dump CONTAINER_ID:/db.dump
# shell into container
docker exec -it CONTAINER_ID bash
# restore it from within
pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump
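The same restore works without an interactive shell, if you prefer a one-liner (same CONTAINER_ID and DB_NAME assumptions):
docker exec CONTAINER_ID pg_restore -U postgres -d DB_NAME --no-owner -1 /db.dump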
Okay, I've figured this out. PostgreSQL does not detect changes to the folder /var/lib/postgresql once it has launched, at least not the kind of changes I want it to detect.
The first solution is to start a container with bash instead of starting the postgres server directly, restore the data, and then start the server manually.
The second solution is to use a data container. I didn't get the point of it before; now I do.
This data container allows restoring the data before starting the postgres container. Thus, when the postgres server starts, the data is already there.
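A minimal sketch of that data-container flow, reusing the volumes and variables from the question (the pg_data name is illustrative):
# Data-only container exposing the postgres directories as volumes
docker create -v /etc/postgresql -v /var/log/postgresql -v /var/lib/postgresql --name pg_data busybox
# Restore the backup archive into the data container's volumes
docker run --rm --volumes-from pg_data -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
# Start the server on top of the pre-populated volumes
docker run -d --volumes-from pg_data --name "$DB_CONTAINER_NAME" "$DB_IMAGE_NAME"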
The command below can be used to take a dump from a Docker Postgres container:
docker exec -t <postgres-container-name> pg_dump --no-owner -U <db-username> <db-name> > file-name-to-backup-to.sql
The top answer didn't work for me. I kept getting this error:
psql: error: FATAL: Peer authentication failed for user "postgres"
To get it to work I had to specify a user for the docker container:
Backup
docker exec -t --user postgres your-db-container pg_dumpall -c -U postgres > dump_`date +%d-%m-%Y"_"%H_%M_%S`.sql
Restore
cat your_dump.sql | docker exec -i --user postgres your-db-container psql -U postgres
Another approach (based on docker-postgresql-workflow)
Local running database (not in Docker, but the same approach would work) to export:
pg_dump -F c -h localhost -U postgres mydb -f export.dmp
Container database to import:
docker run -d -v /local/path/to/postgres:/var/lib/postgresql/data postgres  # runs the container; find its name (CONTAINERNAME below) via `docker ps`
docker run -it --link CONTAINERNAME:postgres --volume $PWD/:/tmp/ postgres bash -c 'exec pg_restore -h postgres -U postgres -d mydb -F c /tmp/export.dmp'
I had this issue while trying to use a db dump to restore a database. I normally use DBeaver to restore; however, I received a psql dump, so I had to figure out a method to restore it using the Docker container.
The methodology recommended by Forth and edited by Soviut worked for me:
cat your_dump.sql | docker exec -i your-db-container psql -U postgres -d dbname
(since this was a single-database dump and not multiple databases, I included the name)
However, in order to get this to work, I also had to go into the virtualenv that the Docker container and project were in. This eluded me for a bit before I figured it out, as I was receiving the following Docker error:
read unix @->/var/run/docker.sock: read: connection reset by peer
This can be caused by the file /var/lib/docker/network/files/local-kv.db. I don't know the accuracy of this statement, but I believe I was seeing it because I do not use Docker locally, and therefore did not have this file, which it was looking for when using Forth's answer.
I then navigated to the correct directory (with the project), activated the virtualenv, and then ran the accepted answer. Boom, worked like a top. Hope this helps someone else out there!
dksnap (https://github.com/kelda/dksnap) automates the process of running pg_dumpall and loading the dump via /docker-entrypoint-initdb.d.
It shows you a list of running containers, and you pick which one you want to backup. The resulting artifact is a regular Docker image, so you can then docker run it, or share it by pushing it to a Docker registry.
(disclaimer: I'm a maintainer on the project)
This is the command that worked for me:
cat your_dump.sql | sudo docker exec -i {docker-postgres-container} psql -U {user} -d {database_name}
for example
cat table_backup.sql | docker exec -i 03b366004090 psql -U postgres -d postgres
Reference: Solution given by GMartinez-Sisti in this discussion.
https://gist.github.com/gilyes/525cc0f471aafae18c3857c27519fc4b
Solution for docker-compose users:
First, run the docker-compose file with either of the following commands: $ docker-compose -f local.yml up OR docker-compose -f local.yml up -d
To take a backup: $ docker-compose -f local.yml exec postgres backup
To see the list of backups inside the container: $ docker-compose -f local.yml exec postgres backups
Open another terminal and run the following command: $ docker ps
Look for the CONTAINER ID of the postgres image and copy the ID. Let's assume the CONTAINER ID is: ba78c0f9bcee
Now, to bring that backup into your local file system, run the following command: $ docker cp ba78c0f9bcee:/backups ./local_backupfolder
Hope this will help someone who was lost just like me..
N.B.: The full details of this solution can be found here.
Another way to do it is to run pg_restore from the host machine (of course, provided you have Postgres set up on your host).
Assuming you have the port mapping "5436:5432" for the postgres service in your docker-compose file, this mapping lets you access the container's postgres (running on port 5432) via your host machine's port 5436:
pg_restore -h localhost -p 5436 -U <POSTGRES_USER> -d <POSTGRES_DB> /Path/to/the/.psql/file/in/your/host_machine
This way you do not have to dive into the container's terminal or copy the dump file to the container.
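The same port mapping works in the other direction for taking a dump from the host (a sketch with the same placeholders):
pg_dump -h localhost -p 5436 -U <POSTGRES_USER> -F c <POSTGRES_DB> -f /path/on/host/backup.dump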
I would like to add the official Docker documentation for backups and restores. This applies to all kinds of data within a volume, not just Postgres.
Backup a container
Create a new container named dbstore:
$ docker run -v /dbdata --name dbstore ubuntu /bin/bash
Then in the next command, we:
- launch a new container and mount the volume from the dbstore container
- mount a local host directory as /backup
- pass a command that tars the contents of the dbdata volume to a backup.tar file inside our /backup directory
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
When the command completes and the container stops, we are left with a backup of our dbdata volume.
Restore container from backup
With the backup just created, you can restore it to the same container, or another that you made elsewhere.
For example, create a new container named dbstore2:
$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash
Then un-tar the backup file in the new container's data volume:
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
You can use the techniques above to automate backup, migration and restore testing using your preferred tools.
Using a File System Level Backup on Docker Volumes
Example Docker Compose
version: "3.9"
services:
  db:
    container_name: pg_container
    image: platerecognizer/parkpow-postgres
    # restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: admin
volumes:
  postgres_data:
Backup Postgresql Volume
docker run --rm \
--user root \
--volumes-from pg_container \
-v /tmp/db-bkp:/backup \
ubuntu tar cvf /backup/db.tar /var/lib/postgresql/data
Then copy /tmp/db-bkp to the second host.
Restore Postgresql Volume
docker run --rm \
--user root \
--volumes-from pg_container \
-v /tmp/db-bkp:/backup \
ubuntu bash -c "cd /var && tar xvf /backup/db.tar --strip 1"