I'm trying to create a database in Docker and have the .mdf and .ldf files created on a mounted volume so they're not lost when the container is removed. I followed Microsoft's instructions and added the volume to their example command like so:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<MY_PASSWORD>' \
-p 1433:1433 --name sql1 \
-v ~/mssql/data:/var/data \
-d mcr.microsoft.com/mssql/server:2017-latest
Here's the SQL used to create the database and the files:
CREATE DATABASE [MY_DB]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'MY_DB', FILENAME = N'/var/data/MY_DB.mdf' , SIZE = 204800KB , MAXSIZE = UNLIMITED, FILEGROWTH = 65536KB )
LOG ON
( NAME = N'MY_DB_log', FILENAME = N'/var/data/MY_DB_log.ldf' , SIZE = 860160KB , MAXSIZE = 2048GB , FILEGROWTH = 65536KB )
I know this works if you're on a Windows machine running everything without Docker, because that's how we currently do it. However, when I try to execute the above SQL against the container (connecting with a database client), I get the error:
CREATE FILE encountered operating system error 2(The system cannot find the file specified.) while attempting to open or create the physical file '/var/data/MY_DB.mdf'
I tried exec-ing into the container and running the SQL as root, but I get the same error. For funsies I even tried to chmod the mapped host directory to 0777, but no dice.
Another note: everything works perfectly fine if there is no volume mounted. At this point I'm questioning whether the volume part is worth it or even needed. I'm pretty new to Docker.
For those curious: bind-mounting host directories into the container on macOS has had issues since 2017. Instead you have to create a data volume (a named volume):
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<MY_PASSWORD>' \
-p 1433:1433 --name sql1 \
-v sqlvolume:/var/opt/mssql/data \
-d mcr.microsoft.com/mssql/server:2017-latest
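With the named volume mounted at /var/opt/mssql/data (the server's default data directory), the CREATE DATABASE from above can simply point its files at that path so they live on the volume. A minimal sketch, assuming the container is named sql1 as above and using the sqlcmd bundled in the image (sizes and growth settings omitted for brevity):
docker exec sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<MY_PASSWORD>' -Q "
CREATE DATABASE [MY_DB]
ON PRIMARY ( NAME = N'MY_DB', FILENAME = N'/var/opt/mssql/data/MY_DB.mdf' )
LOG ON ( NAME = N'MY_DB_log', FILENAME = N'/var/opt/mssql/data/MY_DB_log.ldf' )"
Because the data directory sits on the sqlvolume named volume, the database should survive removing and recreating the sql1 container.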
Thank you to Larnu who pointed me in the right direction.
Related
I would like to create a SQL Server schema dump from a local server that I can run from within a SQL Server Docker container to create an empty database ready for integration tests.
I generate the scripts via SQL Server Management Studio: I right-click the database > Tasks > Generate Scripts and set "Types of data to script" to "Schema only".
I copy the SQL files across as part of my docker-compose.yml, and when I run the SQL dump from within the Docker container I get:
Directory lookup for the file
"C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\mydatabase.mdf"
failed with the operating system error 2(The system cannot find the file specified.).
The part of the script that's causing the above is...
CREATE DATABASE [mydatabase]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'mydatabase', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\mydatabase.mdf' , SIZE = 8192KB , MAXSIZE = UNLIMITED, FILEGROWTH = 65536KB )
LOG ON
( NAME = N'mydatabase_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\mydatabase_log.ldf' , SIZE = 8192KB , MAXSIZE = 2048GB , FILEGROWTH = 65536KB )
WITH CATALOG_COLLATION = DATABASE_DEFAULT
GO
If I go through the same generation process on an Azure-hosted database, the generated create code is...
CREATE DATABASE [mydatabase]
GO
and if I manually run the above from within the container, it creates as expected.
So what do I need to do to generate a SQL schema dump that doesn't include the FILENAME property in the CREATE DATABASE command?
Following @Jeroen Mostert's suggestion, I generate the scripts without the CREATE DATABASE part. This is done by selecting "Specific database objects" instead of "Entire database and all database objects".
I then run a script I manually wrote that creates all the required databases, followed by running the scripts that create the tables and procedures against each database.
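The databases.sql file itself isn't shown here; a minimal sketch of what it might contain (the database names are placeholders, and each remaining <name>.sql file must be named after the database it targets, because init.sh below runs it with -d <name>):
# Hypothetical sql/databases.sql: create each required database if it doesn't exist yet.
cat > sql/databases.sql <<'EOF'
IF DB_ID(N'mydatabase') IS NULL
    CREATE DATABASE [mydatabase];
GO
IF DB_ID(N'otherdatabase') IS NULL
    CREATE DATABASE [otherdatabase];
GO
EOF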
docker-compose.yml
version: "3.8"
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=true
      - SA_PASSWORD=localdevpassword#123
    volumes:
      - ./sql:/scripts/
    ports:
      - "1434:1433"
    command: /bin/bash -c /scripts/init.sh
init.sh
#!/bin/bash
echo "Starting SQL server"
/opt/mssql/bin/sqlservr &
echo "Sleeping for 10 seconds"
sleep 10
echo "Done sleeping, run the SQL scripts"
cd /scripts/
echo "Creating databases using file databases.sql"
/opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD -l 30 -e -i databases.sql
for FILENAME in *.sql; do
  DATABASE="${FILENAME%.*}"
  if [ "$DATABASE" != "databases" ]; then
    echo "Running import for $DATABASE using file $FILENAME"
    /opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD -d $DATABASE -l 30 -e -i $FILENAME
  fi
done
echo "SQL scripts run, sleep forever to keep container running"
sleep infinity
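To use it, something along these lines should work; the service name db, host port 1434 and the SA password come from the compose file above:
docker-compose up -d db
# Follow the logs until "SQL scripts run, sleep forever to keep container running" appears.
docker-compose logs -f db
# Then list the created databases from inside the container.
docker-compose exec db /opt/mssql-tools/bin/sqlcmd -U sa -P 'localdevpassword#123' -Q "SELECT name FROM sys.databases"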
I'm trying to create a container with SQL Server running in it, using this command:
docker run -e "ACCEPT_EULA=Y" -e MSSQL_SA_PASSWORD="MyPassword1"
-e MSSQL_COLLATION="Polish_CI_AS" -p 1434:1433
-v C:/Users/User1/sql-server/data:/var/opt/mssql/data
-d mcr.microsoft.com/mssql/server:2019-latest
Everything is working fine and the environment variable is set, but the server collation is still the default, SQL_Latin1_General_CP1_CI_AS.
Any ideas?
Note that at the time of this writing, MSSQL_COLLATION only takes effect when the server is initialized for the first time (i.e. when the master database is created). To change the server collation afterwards (a command sketch follows the steps):
1. Create backups of all user databases.
2. Stop the old container.
3. Start a new container with -e MSSQL_COLLATION=Polish_CI_AS, making sure the data volume is empty except for the backups.
4. Restore the databases from the backups.
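A hedged sketch of those steps using docker and the sqlcmd bundled in the image; the container name sql1 and the database name MyDb are made up for illustration, and the run command is the one from the question:
# 1. Back up each user database onto the mounted data directory.
docker exec sql1 /opt/mssql-tools/bin/sqlcmd -U sa -P "MyPassword1" \
  -Q "BACKUP DATABASE [MyDb] TO DISK = N'/var/opt/mssql/data/MyDb.bak'"
# 2. Remove the old container, then clear everything except the .bak files
#    from C:/Users/User1/sql-server/data so the server re-initializes.
docker rm -f sql1
# 3. Start a fresh container with the desired server collation.
docker run -e "ACCEPT_EULA=Y" -e MSSQL_SA_PASSWORD="MyPassword1" \
  -e MSSQL_COLLATION="Polish_CI_AS" -p 1434:1433 \
  -v C:/Users/User1/sql-server/data:/var/opt/mssql/data \
  --name sql1 -d mcr.microsoft.com/mssql/server:2019-latest
# 4. Once the new server has finished initializing, restore each database.
sleep 30
docker exec sql1 /opt/mssql-tools/bin/sqlcmd -U sa -P "MyPassword1" \
  -Q "RESTORE DATABASE [MyDb] FROM DISK = N'/var/opt/mssql/data/MyDb.bak'"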
The following Dockerfile creates a custom SQL Server image with a database restored from a backup (rmsdev.bak).
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV MSSQL_PID=Developer
ENV SA_PASSWORD=Password1?
ENV ACCEPT_EULA=Y
USER mssql
COPY rmsdev.bak /var/opt/mssql/backup/
# Launch SQL Server, confirm startup is complete, restore the database, then terminate SQL Server.
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q 'RESTORE DATABASE rmsdev FROM DISK = "/var/opt/mssql/backup/rmsdev.bak" WITH MOVE "rmsdev" to "/var/opt/mssql/data/rmsdev.mdf", MOVE "rmsdev_Log" to "/var/opt/mssql/data/rmsdev_log.ldf", NOUNLOAD, STATS = 5' \
&& pkill sqlservr
CMD ["/opt/mssql/bin/sqlservr"]
The issue is that, once the restore is complete, the backup file is not required anymore and I would like to remove it from the image.
Unfortunately, due to how docker images are formed (layers) I cannot simply 'rm' the file as I would like to.
A multi-stage Dockerfile is not as easily applicable here as it would be in a build scenario.
Another way would be to run the container, restore the backup and then commit a new image, but what I am looking to do is to use only docker build with the proper Dockerfile.
Does anyone know a way?
If you know where the data directory is in the image, and the image does not declare that directory as a VOLUME, then you can use a multi-stage build for this. The first stage would set up the data directory as you show. The second stage would copy the populated data directory from the first stage but not the backup file. This trick might depend on the two stages running identical builds of the underlying software.
For SQL Server, the Docker Hub page and GitHub repo are both tricky to find, and surprisingly neither addresses the issue of data storage (as @HansKillian notes in a comment, you would almost always want to store the database data in some sort of volume). The GitHub repo does include a Helm chart built around a Kubernetes StatefulSet, and from that we can discover that a data directory would be mounted on /var/opt/mssql.
So I might write a multi-stage build like so:
# Put common setup steps in an initial stage
FROM mcr.microsoft.com/mssql/server:2019-latest AS setup
ENV MSSQL_PID=Developer
# (weak password, easily extracted with `docker inspect`)
ENV SA_PASSWORD=Password1?
# (legally, the end user probably needs to accept this, not the image builder)
ENV ACCEPT_EULA=Y
# Have a stage specifically to populate the data directory
FROM setup AS data
# (copy-and-pasted from the question)
USER mssql
# Stage the backup file at the image root, not under /var/opt/mssql
COPY rmsdev.bak /
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q 'RESTORE DATABASE rmsdev FROM DISK = "/rmsdev.bak" WITH MOVE "rmsdev" to "/var/opt/mssql/data/rmsdev.mdf", MOVE "rmsdev_Log" to "/var/opt/mssql/data/rmsdev_log.ldf", NOUNLOAD, STATS = 5' \
&& pkill sqlservr
# Final stage that will actually be run.
FROM setup
# Copy the prepopulated data tree, but not the backup file
COPY --from=data /var/opt/mssql /var/opt/mssql
# Use the default USER, CMD, etc. from the base SQL Server image
The standard Docker Hub open-source database images like mysql and postgres generally declare a VOLUME in their Dockerfile for the database data, which forces the data to be stored in a volume. The important consequence is that you can't set up data in the image like this; you have to populate the data externally and copy it in through some mechanism outside of the Docker image system.
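To try it out, a build-and-run sketch; the image tag rmsdev-mssql is made up, and the SA password is the one from the ENV in the setup stage:
docker build -t rmsdev-mssql .
docker run -d --name rmsdev -p 1433:1433 rmsdev-mssql
# Once the server has started, the restored database is already there,
# baked into the image rather than into a volume.
docker exec rmsdev /opt/mssql-tools/bin/sqlcmd -U SA -P 'Password1?' -Q "SELECT name FROM sys.databases"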
I'm using macOS Sierra with the latest version of the mssql Docker image for Linux.
I had built a database which grew to a size of ~69 GB. I started getting an error "Could not allocate a new page for database because of insufficient disk space in filegroup". I attempted to solve this problem by running this code:
USE [master]
GO
ALTER DATABASE [db]
MODIFY FILE ( NAME = N'db', FILEGROWTH = 512MB )
GO
ALTER DATABASE [db]
MODIFY FILE
(NAME = N'db_log', FILEGROWTH = 256MB )
GO
After doing this, I was no longer able to start the mssql container. I then manually replaced the container folder, which on macOS is called "com.docker.docker", with a backup copy that contained the prior working version of the database.
After doing this, I started getting the following error: "The extended event engine has been disabled by startup options. Features that depend on extended events may fail to start."
At this point I re-installed the Docker container using the procedure mentioned in this post. The commands I used were:
docker create -v /var/opt/mssql --name mssql microsoft/mssql-server-linux /bin/true
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Test#123' -p 1433:1433 --volumes-from mssql -d --name sql-server microsoft/mssql-server-linux
Although I'm now able to start the server with the new container, I would like to restore the original SQL Server database (~69 GB). I tried doing so by again manually copying the file named "Docker.qcow2" into the Docker container folder. This is obviously not working.
How can I restore my database?
Sorry for my ignorance. I have InfluxDB running in Docker with docker-compose, as shown below.
influxdb:
  image: influxdb:alpine
  ports:
    - 8086:8086
  volumes:
    - ./influxdb/config/influxdb.conf:/etc/influxdb/influxdb.conf:ro
    - ./influxdb/data:/var/lib/influxdb
I need to restore a backup of a database from a remote server into this InfluxDB container. I took the backup on the remote server as below:
influxd backup -database tech_db /tmp/tech_db
I read the documentation and couldn't find a way to restore the DB into the Docker container. Can anyone give me a pointer on how to do this?
I ran into the same issue. It looks like you can't do the restore inside the running container, because you can't stop the influxd process without killing the container. The workaround is to stop the container and run the restore in an ephemeral container against the same data volume:
# Restoring a backup requires that influxd is stopped (note that stopping the process kills the container).
docker stop "$CONTAINER_ID"
# Run the restore command in an ephemeral container.
# This affects the previously mounted volume mapped to /var/lib/influxdb.
docker run --rm \
--entrypoint /bin/bash \
-v "$INFLUXDIR":/var/lib/influxdb \
-v "$BACKUPDIR":/backups \
influxdb:1.3 \
-c "influxd restore -metadir /var/lib/influxdb/meta -datadir /var/lib/influxdb/data -database foo /backups/foo.backup"
# Start the container just like before, and get the new container ID.
CONTAINER_ID=$(docker run --rm \
--detach \
-v "$INFLUXDIR":/var/lib/influxdb \
-v "$BACKUPDIR":/backups \
-p 8086 \
influxdb:1.3
)
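Adapted to the compose file from the question, the restore might look roughly like this; the image tag influxdb:alpine and the bind-mount paths come from that compose file, /tmp/tech_db is the directory produced by the backup command above (copied from the remote server to the local machine first), and /bin/sh is used because the alpine image has no bash:
# Stop the running service so nothing writes to the data directory during the restore.
docker-compose stop influxdb
# Run the restore in an ephemeral container against the same bind-mounted data directory.
docker run --rm --entrypoint /bin/sh \
  -v "$PWD/influxdb/data":/var/lib/influxdb \
  -v /tmp/tech_db:/backups \
  influxdb:alpine \
  -c "influxd restore -metadir /var/lib/influxdb/meta -datadir /var/lib/influxdb/data -database tech_db /backups"
# Bring the service back up with the restored data.
docker-compose start influxdb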