I have a docker-compose file that creates and starts SQL Server. This is working fine. I can connect to the database and see the master database.
What I am trying to do is create a new database, then add a table and some data to that table. I have been unable to find an example of doing this with SQL Server. All the examples I have seen use either PostgreSQL or MySQL.
I have tried to adapt this example: Docker Compose MySQL Multiple Database.
I have created an init directory with a file called 01.sql, and the only thing in it is:
CREATE DATABASE `test`;
My docker-compose.yml looks like this
services:
  db:
    image: "mcr.microsoft.com/mssql/server"
    ports:
      - 1433:1433
    volumes:
      - ./init:/docker-entrypoint-initdb.d
    environment:
      SA_PASSWORD: "password123!"
      ACCEPT_EULA: "Y"
When I run docker-compose up, I'm not seeing anything in the logs that suggests it is even trying to load this file. When I check the server, I do not see any new database either.
I am at a loss to understand why this isn't working for SQL Server when the tutorial implies that it works for MySQL. Is there a different command for SQL Server?
After quite a bit of Googling and combining four or five very old tutorials, I got this working. Ensuring that you are using Linux line endings is critical with these scripts.
docker-compose.yml
version: '3'
services:
  db:
    build: ./Db
    ports:
      - 1433:1433
Db/Dockerfile
# Choose ubuntu version
FROM mcr.microsoft.com/mssql/server:2019-CU13-ubuntu-20.04
# Create app directory
WORKDIR /usr/src/app
# Copy initialization scripts
COPY . /usr/src/app
# Set environment variables so we don't have to pass them with the docker run command
# Note: make sure that your password matches what is in the run-initialization script
ENV SA_PASSWORD password123!
ENV ACCEPT_EULA Y
ENV MSSQL_PID Express
# Expose port 1433 in case accessing from other container
# Expose port externally from docker-compose.yml
EXPOSE 1433
# Run Microsoft SQL Server and initialization script (at the same time)
CMD /bin/bash ./entrypoint.sh
Db/entrypoint.sh
#!/bin/bash
# Run Microsoft SQL Server and the initialization script (at the same time)
/usr/src/app/run-initialization.sh & /opt/mssql/bin/sqlservr
Db/run-initialization.sh
#!/bin/bash
# Wait to be sure that SQL Server came up
sleep 90s
# Run the setup script to create the DB and the schema in the DB
# Note: make sure that your password matches what is in the Dockerfile
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P password123! -d master -i create-database.sql
Db/create-database.sql
CREATE DATABASE [product-db]
GO
USE [product-db];
GO
CREATE TABLE product (
Id INT NOT NULL IDENTITY,
Name TEXT NOT NULL,
Description TEXT NOT NULL,
PRIMARY KEY (Id)
);
GO
INSERT INTO [product] (Name, Description)
VALUES
('T-Shirt Blue', 'Its blue'),
('T-Shirt Black', 'Its black');
GO
Tip: If you change any of the scripts after running them the first time, you need to do a docker-compose up --build to ensure that the container is rebuilt; otherwise it will just keep using your old scripts.
Connect:
host: 127.0.0.1
Username: SA
Password: password123!
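If the fixed sleep 90s in run-initialization.sh is unreliable on your machine, one alternative sketch (same password and script as above, not part of the original answer) is to poll until sqlcmd can actually connect before running the setup script:
#!/bin/bash
# Wait until SQL Server accepts connections, then run the setup script
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P password123! -Q "SELECT 1" > /dev/null 2>&1
do
  echo "Waiting for SQL Server to start..."
  sleep 5
done
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P password123! -d master -i create-database.sql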
The Docker images for mysql and postgres know about the docker-entrypoint-initdb.d directory and how to process it.
For example, for mysql, here is the Dockerfile.
It runs the script docker-entrypoint.sh:
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
And docker-entrypoint.sh runs the SQL scripts from the docker-entrypoint-initdb.d directory:
docker_process_init_files /docker-entrypoint-initdb.d/*
It looks like the SQL Server Docker image has no processing for docker-entrypoint-initdb.d. You need to study the SQL Server Dockerfile or documentation; there may be some tool to initialize the DB. If there is not, you can create your own Docker image based on the original:
Dockerfile:
FROM mcr.microsoft.com/mssql/server
# implement init db from docker-entrypoint-initdb.d
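One possible sketch of that idea (nothing here comes from the official image; the SA_PASSWORD variable, tool paths, and script locations are assumptions): copy a custom entrypoint into the image that starts SQL Server, waits for it, and then runs every .sql file found in /docker-entrypoint-initdb.d:
#!/bin/bash
# Custom entrypoint: start SQL Server in the background
/opt/mssql/bin/sqlservr &
# Wait until the server accepts connections
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q "SELECT 1" > /dev/null 2>&1
do
  sleep 5
done
# Apply every init script once, mimicking docker-entrypoint-initdb.d processing
for script in /docker-entrypoint-initdb.d/*.sql
do
  echo "Running $script"
  /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -d master -i "$script"
done
# Keep the container attached to the SQL Server process
wait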
I created a new image based on the "mcr.microsoft.com/mssql/server" image.
Then, within the Dockerfile, I have a script that creates a new database with some tables and seeded data.
FROM mcr.microsoft.com/mssql/server
USER root
# CreateDb
COPY ./CreateDatabaseSchema.sql ./opt/scripts/
ENV ACCEPT_EULA=true
ENV MSSQL_SA_PASSWORD=myP#ssword#1
# Create database
RUN /opt/mssql/bin/sqlservr & sleep 60; /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${MSSQL_SA_PASSWORD} -d master -i /opt/scripts/CreateDatabaseSchema.sql
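For reference, an image built this way would be used roughly like this (the image name is just an example, not from the original post):
docker build -t mssql-seeded .
docker run -p 1433:1433 -d mssql-seeded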
I can see the database created by my script if I don't attach it to a persistent volume, and DO NOT see the new database if I attach it to a persistent volume. I checked the log and don't see any error; it looks like the system simply skips processing that file. What might cause the environment to skip processing the SQL script defined in the Dockerfile?
thanks,
Austin
The problem with using a persistent volume is that the database created at image build time is hidden once the volume is mounted over that directory. I need to learn how to create the database after the volume mounts:
volumeMounts:
  - mountPath: /var/opt/mssql
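One way around this (a sketch, untested here) is to do the initialization at container start instead of image build, and make it idempotent so it only creates the database when the mounted volume does not already contain it. Something like the following could replace the RUN line and be wired in as the container's CMD; 'MyDatabase' and the script path are placeholders:
#!/bin/bash
# Runs at container start (CMD), after the volume is mounted, instead of at image build (RUN)
/opt/mssql/bin/sqlservr &
sleep 60
# Create the database only if it is not already present on the mounted volume.
# 'MyDatabase' is a placeholder for whatever CreateDatabaseSchema.sql creates.
EXISTS=$(/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$MSSQL_SA_PASSWORD" -h -1 -Q "SET NOCOUNT ON; SELECT COUNT(*) FROM sys.databases WHERE name = 'MyDatabase'")
if [ "$(echo "$EXISTS" | tr -d '[:space:]')" = "0" ]; then
  /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$MSSQL_SA_PASSWORD" -d master -i /opt/scripts/CreateDatabaseSchema.sql
fi
# Keep the container alive on the SQL Server process
wait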
You can use docker-compose.yml and Dockerfile. Both can work together.
version: '3.9'
services:
  mysqlserver:
    build:
      context: ..
      dockerfile: Dockerfile
    restart: always
    volumes:
      - make/my/db/persistent:/var/opt/mssql
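The Dockerfile that build: points to can follow the same pattern as the ones shown earlier in this thread; a minimal sketch (the script name and password below are placeholders, and init-db.sh would start sqlservr and run the initialization much like the entrypoint.sh example above):
FROM mcr.microsoft.com/mssql/server:2019-latest
USER root
COPY ./init-db.sh /usr/src/app/init-db.sh
ENV ACCEPT_EULA=Y
ENV MSSQL_SA_PASSWORD=YourStrong!Passw0rd
CMD /bin/bash /usr/src/app/init-db.sh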
Then you can run it with:
docker-compose -f docker-compose.yml up
Have fun
I have a problem connecting to a MSSQL Docker database created from a docker-compose.yml file plus an additional .sql file with a ready-to-go empty database configuration. To connect to the MSSQL database I used DBeaver. During the connection this error occurred:
Login failed for user 'system'. ClientConnectionId:23649321-6526-4ec2-a6ed-9943e64ca019
Probably I have a wrong docker-compose.yml file or a wrong .sql configuration file (the .sql file should work with the MSSQL format). The .sql file is mounted in a scripts folder inside the mssql folder next to the .yml file.
docker-compose.yml file:
version: "3.7"
services:
sql-server-db:
container_name: sql-server-db
image: mcr.microsoft.com/mssql/server:2019-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "secret123new!"
ACCEPT_EULA: "Y"
TRUSTED_CONNECTION: "TRUE"
volumes:
- ./data/mssql:/scripts/
.sql file:
CREATE DATABASE probna
GO
USE probna
GO
CREATE LOGIN system WITH PASSWORD='system'
GO
CREATE USER system FOR LOGIN system
GO
ALTER ROLE [db_owner] ADD MEMBER system
GO
CREATE TABLE Products (ID int, ProductName nvarchar(max))
GO
If you have any ideas or fixes to the existing code in the .yml or .sql file, please write a comment :)
Have a nice day!
PS. Do you think adding this to the .yml file will work?
mssql:
  image: mcr.microsoft.com/mssql/server:2017-latest
  environment:
    - SA_PASSWORD=Admin123
    - ACCEPT_EULA=Y
  volumes:
    - ./data/mssql:/scripts/
  command:
    - /bin/bash
    - -c
    - |
      # Launch MSSQL and send to background
      /opt/mssql/bin/sqlservr &
      # Wait 30 seconds for it to be available
      # (lame, I know, but there's no nc available to start prodding network ports)
      sleep 30
      # Run every script in /scripts
      # TODO set a flag so that this is only done once on creation,
      # and not every time the container runs
      for foo in /scripts/*.sql
      do /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -l 30 -e -i $$foo
      done
      # So that the container doesn't shut down, sleep this thread
      sleep infinity
If I have a setup.sql file in the scripts folder, is this enough? I had an error:
sql-server-db | Sqlcmd: '/scripts/*.sql': Invalid filename.
What should I change in that .yml file, or should I add an extra file to the scripts folder?
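That Invalid filename error usually means the glob /scripts/*.sql matched nothing (for example, the volume was not mounted where expected, or the folder is empty), so the literal pattern was passed to sqlcmd. A guard along these lines inside the compose command block (same paths as above, keeping compose's $$ escaping) would skip the loop when no scripts are present:
# Skip the loop entirely when /scripts contains no .sql files
shopt -s nullglob
for foo in /scripts/*.sql
do /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -l 30 -e -i $$foo
done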
I have a container deploying an ASP.NET Core front end that is trying to connect to the backend SQL Server database. I am running Windows 10 with Docker Desktop v19.03.13.
The website container is built on
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
EXPOSE 80
# Copy csproj and restore as distinct layers
COPY ./FOOBAR/*.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "FOOBAR.dll"]
The database is built on
FROM mcr.microsoft.com/mssql/server:2017-latest
USER root
COPY setup.sql setup.sql
COPY import-data.sh import-data.sh
COPY entrypoint.sh entrypoint.sh
RUN chmod +x entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Everything works marvelously when running outside Docker: .NET, Python, SQL Server Management Studio.
In .NET, my connection string is:
Server=localhost;Database=FOOBAR;Integrated Security=True
In Python:
DRIVER={ODBC Driver 17 for SQL Server};server=localhost;database=FOOBAR;Trusted_Connection=yes;
I need to deploy this to a network that does not have a domain controller, so I need to handle all database authentication myself.
When I build my containers, I change my .NET connection string to
dbConnection="Server=host.docker.internal;Database=FOOBAR;User Id=sa;Password=Password1!;"
I spawn my containers with
docker run -e ACCEPT_EULA=Y -e SA_PASSWORD=Password1! -p 1433:1433 -v c:\temp\:/var/opt/mssql/data --name foobar_db -d foobar_db:1.0
docker run -p 8080:80 --name foobar --link foobar_db:foobar_db -d foobar:1.0
My containers spin up, my database is deployed just fine. From the host, I can use SQL Server Management Studio and Python, and connect to my database container using the credentials above, and connect and perform read/writes perfectly.
When I connect from the .NET using
Server=host.docker.internal;Database=FOOBAR;User Id=sa;Password=Password1!;
I can see my SQL Server container complain about an invalid login,
Login failed for user '6794cfd81d48\Guest'
where I can confirm that 6794cfd81d48 is the hash of my SQL Server container, foobar_db.
IIS serves up webpages just fine; the problem is connecting to the database. Even though I am providing the correct username and password, I am unable to connect from another container to the SQL Server container because it thinks that I am a guest to that container. Depending on the deployment environment, I would normally create SQL Server logins, either for a machine or for a user, but not in this case.
An offending IT security software application was identified as the cause. IT set up a passthrough so the Docker applications could bypass the security software, and everything worked as designed.
When I run the latest SQL Server image from the official documentation on a Linux host:
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=asdasdasdsad' -p 1433:1433 -v ./data:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2019-latest
I get this error:
ERROR: Setup FAILED copying system data file 'C:\templatedata\model_replicatedmaster.mdf' to '/var/opt/mssql/data/model_replicatedmaster.mdf': 5(Access is denied.)
This message occurs only on a Linux host and with bind-mounted volumes.
It happens because of a lack of permissions. With the 2019 release, the mssql-docker images moved from running as root to running as a non-root user. As a result, SQL Server containers with bind-mounted volumes running on a Linux host have a permission issue (no permission to write into the bind-mounted volume).
There are a few solutions for this problem:
1. Run the container as root.
e.g. compose:
version: '3.6'
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    user: root
    ports:
      - 1433:1433
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=BLAH
    volumes:
      - ./data:/var/opt/mssql/data
Source: https://github.com/microsoft/mssql-docker/issues/13#issuecomment-641904197
2. Set the proper directory owner (mssql)
Check the id of the mssql user in the docker image:
sudo docker run -it mcr.microsoft.com/mssql/server id mssql
which gives: uid=10001(mssql) gid=0(root) groups=0(root)
Change the folder's owner:
sudo chown 10001 VOLUME_DIRECTORY
Source in Spanish: https://www.eiximenis.dev/posts/2020-06-26-sql-server-docker-no-se-ejecuta-en-root/
3. Give full access (not recommended)
Give full access to the DB files on the host:
sudo chmod 777 -R VOLUME_DIRECTORY
Unfortunately, the only way I found to fix this issue involves a few manual steps.
I used the following docker-compose file for this to work
version: '3.9'
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    platform: linux
    ports:
      - 1433:1433
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=<testPASSWORDthatISlongENOUGH_1234>
    volumes:
      - ./mssql/data:/var/opt/mssql/data
      - ./backups:/var/backups
(the data directory has to be mounted directly due to another issue with SQL server containers hosted on Windows machines)
Then you need to perform the following manual steps:
Connect to the database using SSMS
Find and select your .bak database backup file
Open a terminal in the container
In the directory where the .mdf and .ldf files are going to be created, touch files named after the database you are going to use:
touch /var/opt/mssql/data/DATABASE_NAME.mdf
touch /var/opt/mssql/data/DATABASE_NAME_log.ldf
Toggle the option to replace any existing database with the restore
Restore your database
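Alternatively, once the .bak file is visible inside the container (here under /var/backups as mounted above), the restore itself can be scripted rather than clicked through in SSMS; a sketch, with the database name and logical file names as placeholders:
-- Use RESTORE FILELISTONLY first to discover the logical file names inside the backup
RESTORE FILELISTONLY FROM DISK = '/var/backups/DATABASE_NAME.bak';

RESTORE DATABASE [DATABASE_NAME]
FROM DISK = '/var/backups/DATABASE_NAME.bak'
WITH MOVE 'DATABASE_NAME' TO '/var/opt/mssql/data/DATABASE_NAME.mdf',
     MOVE 'DATABASE_NAME_log' TO '/var/opt/mssql/data/DATABASE_NAME_log.ldf',
     REPLACE;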
I tried to follow the instructions in this https://www.sqlservercentral.com/blogs/using-volumes-in-sql-server-2019-non-root-containers article but I could not get it to work.
This problem was also discussed in this github issue (which the bot un-helpfully closed without a proper solution).
I encountered the same problem as you, trying to run a container based on SQL Server on DigitalOcean. user: root also solved the issue for me.
I have generated a Dockerfile for an ASP.NET Core API with a single-page application, thanks to Visual Studio. After some research on the web I corrected various problems with the SPA in this Dockerfile.
Finally, my remaining problem is the connection to our database server.
When I try to connect, I get a
Microsoft.Data.SqlClient.SqlException : A network-related or instance-specific error occurred while establishing a connection to SQL Server.
It seems to happen because my container cannot reach the server. After hours of Google searching, I only found solutions where SQL Server is itself hosted in a Docker image.
How can my Docker image of the web app access the entire company network in order to reach a different server? I use the computer name and not the IP to match company requirements.
Thanks for all
Versions:
.NET Core API: 3.1
I'm using Docker for Windows
Docker uses Linux containers
Here is my Dockerfile
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq
WORKDIR /src
COPY ["Company.Dtm.WebApi.AppWebApi/Company.Dtm.WebApi.AppWebApi.csproj", "Company.Dtm.WebApi.AppWebApi/"]
COPY ["CompanyFramework/Company.Framework.WebApi/Company.Framework.WebApi.csproj", "CompanyFramework/Company.Framework.WebApi/"]
COPY ["CompanyFramework/Company.Framework.Model/Company.Framework.Model.csproj", "CompanyFramework/Company.Framework.Model/"]
COPY ["CompanyFramework/Company.Framework.Tools/Company.Framework.Tools.csproj", "CompanyFramework/Company.Framework.Tools/"]
COPY ["AppLib/Company.Dtm.Lib.AppLib/Company.Dtm.Lib.AppLib.csproj", "AppLib/Company.Dtm.Lib.AppLib/"]
RUN dotnet restore "Company.Dtm.WebApi.AppWebApi/Company.Dtm.WebApi.AppWebApi.csproj"
COPY . .
WORKDIR "/src/Company.Dtm.WebApi.AppWebApi"
RUN dotnet build "Company.Dtm.WebApi.AppWebApi.csproj" -c Debug -o /app/build
FROM build AS publish
RUN dotnet publish "Company.Dtm.WebApi.AppWebApi.csproj" -c Debug -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Company.Dtm.WebApi.AppWebApi.dll"]
Here is my docker-compose
version: '3'
services:
  webapp:
    build: .
    network_mode: "bridge"
    ports:
      - "8880:80"
I also had this issue, just trying to connect to my localhost development SQL Server.
What ended up working was to add the normal SQL Server ports to my Dockerfile:
EXPOSE 1433
EXPOSE 5000 [or whatever other ports you may be using.]
Then set up a firewall Inbound Rule to allow those ports.
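On Windows that inbound rule can be created from an elevated command prompt with something along these lines (the rule name and port are examples, adjust to the ports you expose):
netsh advfirewall firewall add rule name="SQL Server 1433" dir=in action=allow protocol=TCP localport=1433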
You cannot use 'localhost', obviously, since 'localhost' resolves to the container the code is running in, but I did find, with Windows at least, that I can simply use my dev machine name as the server, so DNS seems to work across the NAT. I would think you should be able to access any network resource at that point, but your firewall[s] might be a place to start: your Docker container acts like an external network and is therefore generally untrusted.
I also found that I did not have a 'bridge' network. Maybe you get that with the Linux container.
Running docker network ls revealed a "Default Switch" network, but no "bridge". Because this is Docker for Windows, there is no 'host' option.
That was all there was to it for me. I see a lot of other posts talking about a lot of other things, but honestly, just opening up the firewall is what did the trick. Good luck!
You need to add another service for your db in your compose file.
Something like this:
version: "3"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
make sure to replace the password in the SA_PASSWORD environment variable under db.
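With both services on the same default compose network, the web container reaches SQL Server by the service name db rather than localhost or host.docker.internal; a connection string would then look roughly like this (the database name is a placeholder):
Server=db;Database=MyDatabase;User Id=sa;Password=Your_password123;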