I need to restore a large SQL Server database on a Linux Docker instance (https://hub.docker.com/r/microsoft/mssql-server-linux/)
I'm copying my .bak file to the container and executing this command in the mssql shell:
RESTORE DATABASE gIMM_Brag FROM DISK = '/var/opt/mssql/backup/BackupFull8H_gIMM.bak' WITH MOVE '[gIMM].Data' TO '/var/opt/mssql/data/gIMM.mdf', MOVE '[gIMM].Log' TO '/var/opt/mssql/data/gIMM.ldf', MOVE 'TraceabilityData' TO '/var/opt/mssql/data/gIMM.TraceData.mdf', MOVE 'TraceabilityIndexes' TO '/var/opt/mssql/data/gIMM.TraceIndex.mdf', MOVE 'KpiData' TO '/var/opt/mssql/data/gIMM.KpiData.mdf', MOVE 'KpiIndexes' TO '/var/opt/mssql/data/gIMM.KpiIndex.mdf'
I'm correctly mapping every file that needs to be moved and I definitely have enough space in the container, but I'm getting this error:
Error: The backup or restore was aborted.
The same error actually occurs with the Windows version of this image... And since it's not supposed to be an Express edition, the database size shouldn't be the issue here.
If anyone has more information about what is causing this error, I'd appreciate it!
Thanks,
#TOUDIdel
You have to use the actual file system paths on Linux rather than the virtual paths that are shown in the error.
RESTORE DATABASE Northwind FROM DISK='/var/opt/mssql/Northwind.bak' WITH MOVE 'Northwind' TO '/var/opt/mssql/data/NORTHWND.MDF', MOVE 'Northwind_log' TO '/var/opt/mssql/data/NORTHWND_log.ldf'
http://www.raditha.com/blog/archives/restoring-a-database-on-ms-sql-server-for-linux-docker/
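If you aren't sure of the logical file names inside the backup (the names you pass to MOVE), you can list them first. A minimal sketch, reusing the backup path from the example above:
RESTORE FILELISTONLY FROM DISK = '/var/opt/mssql/Northwind.bak'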
You didn't mention it, but the thing that tripped me up was that I wasn't copying the BAK file to my Docker instance. In Terminal with Docker and your mssql container running...
1) get container ID:
docker inspect -f '{{.Id}}' <container_name>
2) copy BAK file to docker instance:
docker exec -i <container_id> bash -c 'cat > /var/opt/mssql/backup.bak' < '/source/path/backup.bak'
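Alternatively, docker cp copies the file in one step (substitute your own container ID and paths):
docker cp '/source/path/backup.bak' <container_id>:/var/opt/mssql/backup.bak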
3) log into mssql:
mssql -u sa -p 'myPassword'
4) restore db: (you can replace this with your own restore script, though this was sufficient for me)
RESTORE DATABASE [MyDatabase] FROM DISK = N'/var/opt/mssql/backup.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 5
When I had this problem, it was because the restore command was taking long enough for mssql to time out (with a totally unhelpful error message). Specifying a long timeout when connecting allowed the restore to complete, e.g.:
mssql -s localhost -p "<sa_password>" -t 36000000 -T 36000000
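If you are using sqlcmd instead of the node-based mssql CLI, a rough equivalent (my assumption, not part of the original answer) is to raise the query timeout with -t, which is given in seconds:
sqlcmd -S localhost -U sa -P '<sa_password>' -t 36000 -Q "RESTORE DATABASE [MyDatabase] FROM DISK = N'/var/opt/mssql/backup.bak' WITH REPLACE, STATS = 5"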
It may not be worth mentioning, but neither of the answers alone worked for me when moving a .bak made on Windows Server to the Linux version running in Docker.
(Note that I am using the code from the two previous answers and thus any credit should go to the below-mentioned authors)
TabsNotSpaces' solution was good until step 3 where the restore crashed with path mismatch (C:/path_to_mssql_server could not be found).
Vinicius Krauspenhar's answer was then necessary to remap the MDF and LOG files to fully complete the backup.
Thus, the solution that worked for me when importing a Windows-Server-made .bak file into the Linux Docker instance was:
In Terminal with docker and your SQL Server container running...
1) get container ID:
docker inspect -f '{{.Id}}' <container_name>
2) copy BAK file to docker instance:
docker exec -i <container_id> bash -c 'cat > /var/opt/mssql/backup.bak' < '/source/path/backup.bak'
3) log into mssql (or any DB client) and run:
RESTORE DATABASE Northwind FROM DISK='/var/opt/mssql/Northwind.bak' WITH MOVE 'Northwind' TO '/var/opt/mssql/data/NORTHWND.MDF', MOVE 'Northwind_log' TO '/var/opt/mssql/data/NORTHWND_log.ldf'
The following Dockerfile creates a custom SQL Server image with a database restored from a backup (rmsdev.bak).
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV MSSQL_PID=Developer
ENV SA_PASSWORD=Password1?
ENV ACCEPT_EULA=Y
USER mssql
COPY rmsdev.bak /var/opt/mssql/backup/
# Launch SQL Server, confirm startup is complete, restore the database, then terminate SQL Server.
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q 'RESTORE DATABASE rmsdev FROM DISK = "/var/opt/mssql/backup/rmsdev.bak" WITH MOVE "rmsdev" to "/var/opt/mssql/data/rmsdev.mdf", MOVE "rmsdev_Log" to "/var/opt/mssql/data/rmsdev_log.ldf", NOUNLOAD, STATS = 5' \
&& pkill sqlservr
CMD ["/opt/mssql/bin/sqlservr"]
The issue is that, once the restore is complete, the backup file is not required anymore and I would like to remove it from the image.
Unfortunately, due to how docker images are formed (layers) I cannot simply 'rm' the file as I would like to.
A multi-stage Dockerfile is not as easily applicable here as it is in a typical build scenario.
Another way would be to run the container, restore the backup and then commit a new image, but what I am looking to do is to use only docker build with the proper Dockerfile.
Does anyone know a way?
If you know where the data directory is in the image, and the image does not declare that directory as a VOLUME, then you can use a multi-stage build for this. The first stage would set up the data directory as you show. The second stage would copy the populated data directory from the first stage but not the backup file. This trick might depend on the two stages running identical builds of the underlying software.
For SQL Server, the Docker Hub page and GitHub repo are both tricky to find, and surprisingly neither addresses the issue of data storage (as @HansKillian notes in a comment, you would almost always want to store the database data in some sort of volume). The GitHub repo does include a Helm chart built around a Kubernetes StatefulSet, and from that we can discover that a data directory would be mounted on /var/opt/mssql.
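(As an aside: if you only need the data to persist, rather than to be baked into the image, the usual approach is simply to mount a named volume at that path. A minimal sketch, with a placeholder password:
docker run -d -p 1433:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='<YourStrong!Passw0rd>' -v mssql-data:/var/opt/mssql mcr.microsoft.com/mssql/server:2019-latest
The multi-stage approach below is for the case where you specifically want the restored data inside the image itself.)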
So I might write a multi-stage build like so:
# Put common setup steps in an initial stage
FROM mcr.microsoft.com/mssql/server:2019-latest AS setup
ENV MSSQL_PID=Developer
# (weak password, easily extracted with `docker inspect`)
ENV SA_PASSWORD=Password1?
# (legally, probably the end user needs to accept this, not the image builder)
ENV ACCEPT_EULA=Y
# Have a stage specifically to populate the data directory
FROM setup AS data
# (copy-and-pasted from the question)
USER mssql
# not under /var/opt/mssql
COPY rmsdev.bak /
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q 'RESTORE DATABASE rmsdev FROM DISK = "/rmsdev.bak" WITH MOVE "rmsdev" to "/var/opt/mssql/data/rmsdev.mdf", MOVE "rmsdev_Log" to "/var/opt/mssql/data/rmsdev_log.ldf", NOUNLOAD, STATS = 5' \
&& pkill sqlservr
# Final stage that will actually be run.
FROM setup
# Copy the prepopulated data tree, but not the backup file
COPY --from=data /var/opt/mssql /var/opt/mssql
# Use the default USER, CMD, etc. from the base SQL Server image
The standard Docker Hub open-source database images like mysql and postgres generally declare a VOLUME in their Dockerfile for the database data, which forces the data to be stored in a volume. The important consequence is that you can't set up data in the image like this; you have to populate the data externally and then copy the data tree around outside of the Docker image system.
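(As a quick sanity check before trying this with some other image, you can ask Docker whether the image declares any VOLUMEs, for example:
docker image inspect mcr.microsoft.com/mssql/server:2019-latest --format '{{json .Config.Volumes}}'
If this prints null, no VOLUME is declared and the copy-into-image approach above can work.)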
I'm following this guide to restore a database backup
https://learn.microsoft.com/en-us/sql/linux/tutorial-restore-backup-in-sql-server-container?view=sql-server-ver15
I used the docker cp command to copy the DB backup files to the container:
docker exec -it SQLContainer mkdir /var/opt/mssql/backup
docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/
However, when trying to restore the DB by running the following query in SSMS, an error message is shown:
RESTORE DATABASE MyDB FROM DISK='/var/opt/mssql/backup/MyDB.bak'
Operating system error 5(Access is denied.).
I tried copying using docker cp -a, which sets file ownership to the same as the destination, but I got this error:
docker cp -a MyDb.bak SQLContainer:/var/opt/mssql/backup/
Error response from daemon: getent unable to find entry "mssql" in passwd database
I'm using Microsoft's image and I don't know the password for the root user; the container runs as the mssql user, so chown doesn't work either. How can I change the file permissions so the DB restore works?
It turns out that when I copied the database backup files from the Windows host to the Ubuntu machine, the files ended up owned by the root user, and all other users had no read permission.
Adding read permission to the file before copying it into the Docker container works, and the server was able to read the files.
sudo chmod a+r MyDb.bak
sudo docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/
I had a similar issue just today, around the same time as this incident, while just attaching MDF and LDF files.
By doing chmod go+w before copying the files (without -a) I was able to get SQL Server to treat them as writeable. Prior to that, I was getting error messages whenever any update was performed.
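Another option, not mentioned in the answers above but one that should work with the stock image: docker exec can run a command as root inside the container even if you don't know a root password, so you can chown the copied file to the in-container mssql user (assuming the user is called mssql, as it is in recent images):
docker exec -u root SQLContainer chown mssql /var/opt/mssql/backup/MyDb.bak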
On Docker for Windows, working with Windows containers, I cannot get my persistent volume to work on the main database directory of the Windows container. This would be C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA
How can I get the benefits of persistent volumes for databases without having to mess with backups and restores into the mssql-server-container?
This may be because the master and system databases are stored inside the folder where I try to mount the persistent volume.
In SQL Server for Linux containers this simply works: you can connect the persistent volume to /var/opt/mssql and your database persists.
I know I can restore a database from a backup into the container, but this has two major drawbacks. First, I need a big container because I am working with a big database, so I extended the container's 20 GB limit to 60 GB; even so, rebuilding the database from a backup each time is time consuming.
The second drawback is that if the mssql-dev container is killed, the database is lost too, and any work on it is gone. This would be different if the database could reside on the persistent volume.
docker run -d -e sa_password=<Password> -e ACCEPT_EULA=Y -v "C:\mylocalfolder:C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA" microsoft/mssql-server-windows-developer
The error message is: 'failure in a Windows system call: the virtual machine or container was shut down unexpectedly. (0xc0370106)'
Workaround 1
Connect the persistent volume to another location, like c:\mydata, to prevent the error message from above.
Then get the database connected to the server while not using the standard database folder.
Extract the database .bak file, so that there are MDF and log files:
--Get the name of your DB
RESTORE FILELISTONLY
FROM DISK = 'c:\mydata\mydatabase.bak'
GO
--do the extraction of the .bak file to a certain folder
RESTORE DATABASE mydatabase
FROM DISK = 'c:\mydata\mydatabase.bak'
WITH REPLACE,
MOVE 'mydatabase' TO 'c:\mydata\extractedDb.mdf',
MOVE 'mydatabase_log' TO 'c:\mydata\extractedLog.ldf'
GO
With this done, you should now have your database files ready on your persistent volume. Now attach the database to the server. This has to be done by creating a new DB, but the procedure only takes a moment to complete!
CREATE DATABASE StackoverflowIsGreat
ON (FILENAME = 'c:\mydata\extractedDb.mdf'),
(FILENAME = 'c:\mydata\extractedLog.ldf')
FOR ATTACH;
Now the database is safe in a persistent volume. If the db-server container goes down or is rebuilt, you simply run this last statement again (or, even better, implement it in your docker-compose file or Dockerfile):
CREATE DATABASE StackoverflowIsGreat
ON (FILENAME = 'c:\mydata\extractedDb.mdf'),
(FILENAME = 'c:\mydata\extractedLog.ldf')
FOR ATTACH;
Workaround 2
The attach_dbs environment variable seems to work the same way.
Docker run:
docker run -p 1433:1433 --name mssql-dev -e sa_password=<yourpassword> -e ACCEPT_EULA=Y -e attach_dbs="[{'dbName':'PowerSlide_SQLDB','dbFiles':['C:\\your\\path\\database.mdf','C:\\sqldata\\databaselog.ldf']}]" -v "d:\sqldata:C:\sqldata" microsoft/mssql-server-windows-developer
Or, if you prefer Docker Compose, it is a little bit tricky: I had to drop the quotes around the brackets and use double quotation marks (rather than single) inside the brackets to make it work.
version: '3.2'
services:
  mssql-dev:
    container_name: mssql-dev
    image: 'microsoft/mssql-server-windows-developer'
    volumes:
      - "d:\\sqldata:C:\\sqldata"
    ports:
      - "1433:1433"
    restart: always
    environment:
      - "ACCEPT_EULA=Y"
      - "sa_password=yourpassword"
      - attach_dbs=[{"dbName":"<yourDbName>","dbFiles":["C:\\<your>\\path\\database.mdf","C:\\your\\path\\databaselog.ldf"]}]
volumes:
  mssql-dev-data:
It seems this question can be answered with workarounds 1 and 2 from above.
Connect the persistent volume to another location, like c:\mydata, to prevent the error message from above. Then get the database connected to the server while not using the standard database folder.
Extract the database .bak file, so that there are MDF and log files:
--Get the name of your DB
RESTORE FILELISTONLY
FROM DISK = 'c:\mydata\mydatabase.bak'
GO
--do the extraction of the .bak file to a certain folder
RESTORE DATABASE mydatabase
FROM DISK = 'c:\mydata\mydatabase.bak'
WITH REPLACE,
MOVE 'mydatabase' TO 'c:\mydata\extractedDb.mdf',
MOVE 'mydatabase_log' TO 'c:\mydata\extractedLog.ldf'
GO
Attach the database to the server in one of the following three ways:
Docker run example:
docker run -p 1433:1433 --name mssql-dev -e sa_password=<yourpassword> -e ACCEPT_EULA=Y -e attach_dbs="[{'dbName':'PowerSlide_SQLDB','dbFiles':['C:\\your\\path\\database.mdf','C:\\sqldata\\databaselog.ldf']}]" -v "d:\sqldata:C:\sqldata" microsoft/mssql-server-windows-developer
If you prefer Docker Compose, it is a little bit tricky: I had to drop the quotes around the brackets and use double quotation marks (rather than single) inside the brackets to make it work. Example for docker-compose:
version: '3.2'
services:
  mssql-dev:
    container_name: mssql-dev
    image: 'microsoft/mssql-server-windows-developer'
    volumes:
      - "d:\\sqldata:C:\\sqldata"
    ports:
      - "1433:1433"
    restart: always
    environment:
      - "ACCEPT_EULA=Y"
      - "sa_password=yourpassword"
      - attach_dbs=[{"dbName":"<yourDbName>","dbFiles":["C:\\<your>\\path\\database.mdf","C:\\your\\path\\databaselog.ldf"]}]
volumes:
  mssql-dev-data:
Or attach the DB with a SQL command:
CREATE DATABASE StackoverflowIsGreat
ON (FILENAME = 'c:\mydata\extractedDb.mdf'),
(FILENAME = 'c:\mydata\extractedLog.ldf')
FOR ATTACH;
I'm using macOS Sierra with the latest version of the mssql Docker image for Linux.
I had built a database which grew to a size of ~69 GB. I started getting an error "Could not allocate a new page for database because of insufficient disk space in filegroup". I attempted to solve this problem by running this code:
USE [master]
GO
ALTER DATABASE [db]
MODIFY FILE ( NAME = N'db', FILEGROWTH = 512MB )
GO
ALTER DATABASE [db]
MODIFY FILE
(NAME = N'db_log', FILEGROWTH = 256MB )
GO
After doing this, I was no longer able to start up the mssql container. I then manually put back a backup copy of the container folder, which on macOS is called "com.docker.docker" and which contained the prior working version of the database.
After doing this, I started getting the following error: "The extended event engine has been disabled by startup options. Features that depend on extended events may fail to start."
At this point I re-installed the Docker container using the procedure mentioned in this post. The commands I used were:
docker create -v /var/opt/mssql --name mssql microsoft/mssql-server-linux /bin/true
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Test#123' -p 1433:1433 --volumes-from mssql -d --name sql-server microsoft/mssql-server-linux
Although I'm now able to start the server with the new container, I would like to restore the original SQL Server database (~69 GB). I tried doing so by again manually copying the file named "Docker.qcow2" into the Docker container folder. This is obviously not working.
How can I restore my database?
I have a database running on SQL Server (13.01) on Windows. I'd like to deploy it to a Docker container on Linux using SSDT.
I can connect to the server running in Docker just fine, create/drop databases manually, and play with the data.
The problem is that I cannot publish it. I'm executing the following command in PowerShell:
PS> SqlPackage.exe /Action:Publish /SourceFile:"d.dacpac" /TargetConnectionString:"server=containeraddress;database=thedatabase;user id=sa;password=thepassword;"
and getting the following error:
Unable to connect to master or target server 'the database'. You must have a user with the same password in master or target server 'the database'. (Microsoft.Data.Tools.Schema.Sql)
I have the same user and same password on target and source servers.
Has anybody had the same problem, and do you know how to solve it?
I'll post this here, as most of the answers assume an existing compiled dacpac file, which may not always be possible. I haven't seen ideas similar to the solution I'm suggesting here posted elsewhere.
Given your usage of Docker, if you wish to compile your Visual Studio project inside the container, certain combinations of container base OS and image may make it impossible to create a dacpac file with msbuild.
You can work around this and restore the database using a series of Unix-based commands, noting that a Visual Studio database project is usually just a series of SQL files. Below is an example of this, where I concatenate the SQL files into a single file and call sqlcmd to run the script:
FROM mcr.microsoft.com/mssql/server
WORKDIR /init
ENV ACCEPT_EULA=Y
ENV MSSQL_SA_PASSWORD=MyPassword
EXPOSE 1433
RUN apt-get update && apt-get install -y dos2unix
COPY /solution_folder/database/Tables/*.sql /init/
WORKDIR /database
RUN echo "CREATE DATABASE [database_name];\nGO\nUSE [database_name];\n” >> /database/create.sql
RUN for f in /init/*.sql; do dos2unix $f; cat $f >> /database/create.sql; echo "\nGO\n" >> /database/create.sql; done
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'MyPassword' -i /database/create.sql && pkill sqlservr
The reason for "dos2unix" is that the SQL files created within visual studio have unique hidden cr/lf (and other characters) which the linux version of sqlcmd won't interpret successfully and will cause errors (which is kind of bizarre and this is exactly the kind of thing you'd want a cross platform database to be able to cope with)
Also, within the final RUN command you have to start up the SQL Server service temporarily, otherwise you'll get errors. It's a bit of a workaround and a bit fiddly, and I'm not entirely sure the Microsoft SQL Server Linux container is well designed for the simple task of restoring a database like this; the nuance lies in the differences between building and running a container, and needing some happy middle ground of both concepts for it to work.
What's given here isn't a complete solution to the restore; it only deals with tables from the project file, although it should be trivial to expand it to scalar functions and stored procedures.
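For completeness, building and running an image from a Dockerfile like the one above would look something like this (the image and container names are just examples):
docker build -t my-database-image .
docker run -d -p 1433:1433 --name my-database my-database-image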
Which version of SqlPackage.exe are you using? Only the most recent release candidate versions of SqlPackage.exe support SQL Server vNext CTP. The SqlPackage release candidate can be downloaded here: https://www.microsoft.com/en-us/download/details.aspx?id=54273