At our company we use a couple of methods for updating database models: sqlproj/dacpacs, and a .NET Core app that uses DbUp. I have successfully containerized the dacpac approach based on the article automating-sql-server-2019-docker-deployments.
Now I am working on DbUp. There are a few challenges that I've tried to sort out, but I am not able to get past them.
My first thought was to create a Dockerfile based on mcr.microsoft.com/mssql/server:2019-latest, install the .NET Core runtime, and run the built DbUp app inside the SQL container, but I just can't get the .NET Core runtime working there.
I then tried using docker-compose to run a plain SQL Server database and run DbUp from a separate image based on mcr.microsoft.com/dotnet/core/sdk:3.1, connecting to the SQL database that way. This is also not working.
Here is my docker-compose.yml file:
version: "3.7" services: ms-sql-server:
image: mcr.microsoft.com/mssql/server:2019-latest
ports:
- "1477:1433"
environment:
SA_PASSWORD: "SuperFun!23"
ACCEPT_EULA: "Y" dbup-exe:
build: .
depends_on:
- ms-sql-server
In my Dockerfile I am calling the database, but it won't connect:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["/Sentinel.DbUp.csproj", "Sentinel.DbUp/"]
RUN dotnet restore "Sentinel.DbUp/Sentinel.DbUp.csproj"
WORKDIR "/src/Sentinel.DbUp"
COPY . .
RUN dotnet build "Sentinel.DbUp.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Sentinel.DbUp.csproj" -c Release -o /app
RUN dotnet run Sentinel.DbUp --ConnectionString=Server=ms-sql-server,1477;Database=Sentinel_Local;User Id=sa;Password=SuperFun!23; --WithSeedOnce --EnsureDatabase --PerformUpgrade
I get two errors:
System.Data.SqlClient.SqlException (0x80131904): A network-related or
instance-specific error occurred while establishing a connection to
SQL Server. The server was not found or was not accessible
/bin/sh: 1: User: not found
/bin/sh: 1: --WithSeedOnce: not found
Based on my research, I thought that with Docker Compose, one container can reference another by service name (ms-sql-server) instead of localhost.
Also, when executing the last command, how do I append multiple arguments? It seems --WithSeedOnce is not treated as an argument to the app.
So, from what I gather, there are two things that jump off the page:
In a Dockerfile, RUN instructions are executed when the image is built; ENTRYPOINT and CMD instructions are executed when the container is started. As such, the command that starts DbUp (which I assume is the last line in your current Dockerfile) should be changed from RUN to ENTRYPOINT.
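For example, the final stage could look like the sketch below (assuming the published entry assembly is named Sentinel.DbUp.dll and a runtime-only base image; adjust names to your project). The exec form passes each array element through as a single argument, so the semicolons in the connection string are never seen by a shell; that shell splitting is exactly what produced the "/bin/sh: 1: User: not found" error. Note also that inside the compose network you connect to the container port 1433, not the host-mapped 1477:
FROM mcr.microsoft.com/dotnet/core/runtime:3.1 AS final
WORKDIR /app
COPY --from=publish /app .
# Exec-form ENTRYPOINT: each element is one argument, so the connection
# string stays intact and --WithSeedOnce reaches the app unchanged.
ENTRYPOINT ["dotnet", "Sentinel.DbUp.dll", "--ConnectionString=Server=ms-sql-server,1433;Database=Sentinel_Local;User Id=sa;Password=SuperFun!23;", "--WithSeedOnce", "--EnsureDatabase", "--PerformUpgrade"]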
Using the depends_on instruction does not mean that Docker will wait for your database to be ready before starting the DbUp container; it only means that the database container will be launched before your DbUp container. Instead, you have to wait for the DB in your code or use a wrapper script such as wait-for-it. Check the documentation here.
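For example, the dbup-exe service could be wired up roughly like this (a sketch, assuming wait-for-it.sh has been copied into the image and made executable; the host and port match the compose file above):
dbup-exe:
  build: .
  depends_on:
    - ms-sql-server
  # Block until the SQL Server port accepts connections, then start the app
  entrypoint: ["./wait-for-it.sh", "ms-sql-server:1433", "--timeout=60", "--", "dotnet", "Sentinel.DbUp.dll"]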
When I run the latest SQL Server image from the official documentation on a Linux host:
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=asdasdasdsad' -p 1433:1433 -v ./data:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2019-latest
I get this error:
ERROR: Setup FAILED copying system data file 'C:\templatedata\model_replicatedmaster.mdf' to '/var/opt/mssql/data/model_replicatedmaster.mdf': 5(Access is denied.)
This message occurs only on a Linux host with bind-mounted volumes.
It happens because of missing permissions. With the 2019 release, the mssql Docker images moved from running as root to running as a non-root user. As a result, SQL Server containers that run on a Linux host with bind-mounted volumes have a permission issue: the container has no permission to write into the bound volume.
There are a few solutions for this problem:
1. Run the container as root.
e.g. with compose:
version: '3.6'
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    user: root
    ports:
      - 1433:1433
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=BLAH
    volumes:
      - ./data:/var/opt/mssql/data
Source: https://github.com/microsoft/mssql-docker/issues/13#issuecomment-641904197
2. Set the proper directory owner (mssql)
Check the id of the mssql user in the Docker image:
sudo docker run -it mcr.microsoft.com/mssql/server id mssql
which gives: uid=10001(mssql) gid=0(root) groups=0(root)
Change the folder's owner:
sudo chown 10001 VOLUME_DIRECTORY
Source in Spanish: https://www.eiximenis.dev/posts/2020-06-26-sql-server-docker-no-se-ejecuta-en-root/
3. Give full access (not recommended)
Give full access to the db files on the host:
sudo chmod 777 -R VOLUME_DIRECTORY
Unfortunately, the only way I found to fix this issue involves a few manual steps.
I used the following docker-compose file to get this to work:
version: '3.9'
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    platform: linux
    ports:
      - 1433:1433
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=<testPASSWORDthatISlongENOUGH_1234>
    volumes:
      - ./mssql/data:/var/opt/mssql/data
      - ./backups:/var/backups
(the data directory has to be mounted directly due to another issue with SQL server containers hosted on Windows machines)
Then you need to perform the following manual steps:
1. Connect to the database using SSMS
2. Find and select your .bak database backup file
3. Open a terminal in the container
4. In the directory where the .mdf and .ldf files are going to be created, touch files with the database name you are going to use:
touch /var/opt/mssql/data/DATABASE_NAME.mdf
touch /var/opt/mssql/data/DATABASE_NAME_log.ldf
5. Toggle the option to replace any existing database with the restore
6. Restore your database (see the T-SQL sketch below)
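For reference, the SSMS steps can also be expressed as plain T-SQL (a sketch only; DATABASE_NAME and the .bak path are placeholders, WITH REPLACE corresponds to the replace-existing-database toggle, and the logical names used by MOVE can be listed first with RESTORE FILELISTONLY):
RESTORE DATABASE DATABASE_NAME
FROM DISK = '/var/backups/DATABASE_NAME.bak'
WITH REPLACE,
    MOVE 'DATABASE_NAME' TO '/var/opt/mssql/data/DATABASE_NAME.mdf',
    MOVE 'DATABASE_NAME_log' TO '/var/opt/mssql/data/DATABASE_NAME_log.ldf';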
I tried to follow the instructions in the article https://www.sqlservercentral.com/blogs/using-volumes-in-sql-server-2019-non-root-containers, but I could not get it to work.
This problem was also discussed in this GitHub issue (which the bot unhelpfully closed without a proper solution).
I encountered the same problem trying to run a SQL Server-based container on DigitalOcean. user: root also solved the issue for me.
I generated a Dockerfile for an ASP.NET Core API with a single-page application using Visual Studio. After some research on the web, I fixed various SPA-related problems in this Dockerfile.
My remaining problem is the connection to our database server.
When I try to connect, I get a
Microsoft.Data.SqlClient.SqlException : A network-related or instance-specific error occurred while establishing a connection to SQL Server.
It seems this happens because my container cannot access the server. After hours of searching, I have only found solutions for SQL Server hosted in a Docker image.
How can my web app's Docker container access the wider company network and reach different servers? I use computer names rather than IPs to match company requirements.
Thanks for any help.
Versions:
.NET Core API: 3.1
Docker for Windows, using Linux containers
Here is my Dockerfile:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq
WORKDIR /src
COPY ["Company.Dtm.WebApi.AppWebApi/Company.Dtm.WebApi.AppWebApi.csproj", "Company.Dtm.WebApi.AppWebApi/"]
COPY ["CompanyFramework/Company.Framework.WebApi/Company.Framework.WebApi.csproj", "CompanyFramework/Company.Framework.WebApi/"]
COPY ["CompanyFramework/Company.Framework.Model/Company.Framework.Model.csproj", "CompanyFramework/Company.Framework.Model/"]
COPY ["CompanyFramework/Company.Framework.Tools/Company.Framework.Tools.csproj", "CompanyFramework/Company.Framework.Tools/"]
COPY ["AppLib/Company.Dtm.Lib.AppLib/Company.Dtm.Lib.AppLib.csproj", "AppLib/Company.Dtm.Lib.AppLib/"]
RUN dotnet restore "Company.Dtm.WebApi.AppWebApi/Company.Dtm.WebApi.AppWebApi.csproj"
COPY . .
WORKDIR "/src/Company.Dtm.WebApi.AppWebApi"
RUN dotnet build "Company.Dtm.WebApi.AppWebApi.csproj" -c Debug -o /app/build
FROM build AS publish
RUN dotnet publish "Company.Dtm.WebApi.AppWebApi.csproj" -c Debug -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Company.Dtm.WebApi.AppWebApi.dll"]
Here is my docker-compose file:
version: '3'
services:
  webapp:
    build: .
    network_mode: "bridge"
    ports:
      - "8880:80"
I also had this issue, just trying to connect to my localhost development SQL Server.
What ended up working was to add the normal SQL Server ports to my Dockerfile:
EXPOSE 1433
EXPOSE 5000
(or whatever other ports you may be using)
Then set up a firewall Inbound Rule to allow those ports.
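On Windows, that rule can be created from an elevated PowerShell prompt, for example (adjust the ports and rule name to your setup):
New-NetFirewallRule -DisplayName "Docker to host SQL" -Direction Inbound -Protocol TCP -LocalPort 1433,5000 -Action Allow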
You cannot use 'localhost', since inside the container 'localhost' refers to the container itself. But I did find, with Windows at least, that I can simply use my dev machine name as the server, so DNS seems to work across the NAT. At that point you should be able to access any network resource, but I would say your firewall(s) are a good place to start looking: your Docker container acts like an external network and is therefore generally untrusted.
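On newer Docker Desktop versions there is also a built-in DNS name, host.docker.internal, that resolves to the host machine and avoids depending on the machine name; a connection string using it would look roughly like this (placeholder database and credentials):
Server=host.docker.internal,1433;Database=MyDb;User Id=sa;Password=<yourStrongPassword>;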
I also found that I did not have a 'bridge' network; maybe you get that with Linux containers. My docker network ls command revealed a "Default Switch" network, but no "bridge". Because this is Docker for Windows, there is no 'host' option.
That was all there was to it for me. I see a lot of other posts talking about a lot of other things, but honestly, just opening up the firewall is what did the trick. Good luck!
You need to add another service for your db in your compose file.
Something like this:
version: "3"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
Make sure to replace the password in the SA_PASSWORD environment variable under db.
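On the compose default network, services reach each other by service name, so the web app's connection string should point at db rather than localhost; roughly (MyDb is a placeholder):
Server=db,1433;Database=MyDb;User Id=sa;Password=Your_password123;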
I am using an Azure DSVM in a DevTest Lab running Windows Server 2019. I am trying to get Docker installed and working to allow me to run local experiments from Azure ML Service environments.
I want to build a custom Linux container on Docker - which I believe is possible on Windows from reading some other online posts (I can't use a Linux host for various reasons). When I try to create such an image that contains a WORKDIR ... step, I get a "container ***** encountered an error during CreateProcess: failure in a Windows system call" error.
I installed Docker on the DSVM (which is a Standard D2s_v3) by adding the "Docker" artifact at creation and then running the following commands to enable Linux containers:
$> Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
$> [Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", "1", "Machine")
Running a simple Linux container works fine:
$> docker run --rm -it alpine:latest
/ # ls
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
/ #
To build a custom image, I'm using a simple Dockerfile as follows:
FROM alpine:latest
WORKDIR /abm
The image appears to build successfully:
$> docker build --no-cache -t abm-alpine:workdir -f .\abm-alpine.Dockerfile .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM alpine:latest
---> a187dde48cd2
Step 2/2 : WORKDIR /abm
---> 495f8ecb3a0e
Removing intermediate container 219e91296e47
Successfully built 495f8ecb3a0e
Successfully tagged abm-alpine:workdir
When I run the image, I get the following error:
$> docker run --rm -it abm-alpine:workdir
C:\Program Files\Docker\docker.exe: Error response from daemon: container 01fad57c971d672d91238a6c6ec21376e033006ec4c26563e91e7288cfb3bfeb encountered an error during CreateProcess: failure in a Windows system call: The virtual machine or container exited unexpectedly. (0xc0370106) extra info: {"CommandArgs":["/bin/sh"],"WorkingDirectory":"/abm","Environment":{"HOSTNAME":"01fad57c971d","PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","TERM":"xterm"},"EmulateConsole":true,"CreateStdInPipe":true,"CreateStdOutPipe":true,"ConsoleSize":[50,120],"OCISpecification":{"ociVersion":"1.0.0","process":{"terminal":true,"consoleSize":{"height":50,"width":120},"user":{"uid":0,"gid":0},"args":["/bin/sh"],"env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME=01fad57c971d","TERM=xterm"],"cwd":"/abm","capabilities":{"bounding":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"effective":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"inheritable":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"permitted":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"]}},"root":{"path":"rootfs"},"hostname":"01fad57c971d","mounts":[{"destination":"/proc","type":"proc","source":"proc","options":["nosuid","noexec","nodev"]},{"destination":"/dev","type":"tmpfs","source":"tmpfs","options":["nosuid","strictatime","mode=755","size=65536k"]},{"destination":"/dev/pts","type":"devpts","source":"devpts","options":["nosuid","noexec","newinstance","ptmxmode=0666","mode=0620","gid=5"]},{"destination":"/sys","type":"sysfs","source":"sysfs","options":["nosuid","noexec","nodev","ro"]},{"destination":"/sys/fs/cgroup","type":"cgroup","source":"cgroup","options":["ro","nosuid","noexec","nodev"]},{"destination":"/dev/mqueue","type":"mqueue","source":"mqueue","options":["nosuid","noexec","nodev"]},{"destination":"/dev/shm","type":"tmpfs","source":"shm","options":["nosuid","noexec","nodev","mode=1777"]}],"linux":{"resources":{"devices":[{"allow":false,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":5,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":3,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":9,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":8,"access":"rwm"},{"allow":true,"type":"c","major":5,"minor":0,"access":"rwm"},{"allow":true,"type":"c","major":5,"minor":1,"access":"rwm"},{"allow":false,"type":"c","major":10,"minor":229,"access":"rwm"}]},"namespaces":[{"type":"mount"},{"type":"network"},{"type":"uts"},{"type":"pid"},{"type":"ipc"}],"maskedPaths":["/proc/kcore","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug"],"readonlyPaths":["/proc/asound","/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"]},"windows":{"layerFolders":["C:\\ProgramData\\docker\\lcow\\5ba6a7b4fbdf9748ec89898be9bdaa911ee614436a475945638ab296b1155966","C:\\ProgramData\\docker\\lcow\\01fad57c971d672d9123
8a6c6ec21376e033006ec4c26563e91e7288cfb3bfeb"],"hyperv":{},"network":{"endpointList":["D615E3D5-B6AA-401E-A0A0-72581FA47059"],"allowUnqualifiedDNSQuery":true}}}}.
I've tried various logs (e.g. Get-WinEvent -LogName Microsoft-Windows-Hyper-V-Compute-Operational and Get-EventLog -LogName Application -Source Docker) but cannot see any additional information about the error.
Can anyone advise if it is possible to create custom Linux-based images on a Windows DSVM? If it is, can anyone advise what the problem may be or any additional troubleshooting steps I could take?
Thanks!
It is possible to create Linux containers on Windows Server, although this support is currently experimental.
This article might help : https://www.b2-4ac.com/lcow-linux-containers-on-windows-server/
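Depending on the Docker version, LCOW may also require the daemon's experimental mode to be enabled; a minimal daemon.json sketch (on Windows Server the file usually lives at C:\ProgramData\docker\config\daemon.json, and the Docker service must be restarted afterwards):
{
  "experimental": true
}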
I'm trying to use Docker to improve my workflow. I installed "Docker Toolbox for Windows" on my Windows 10 Home edition (since Docker for Windows supposedly only works on Professional). I'm using mgexhev's angular-seed, which claims to provide full Docker support. There is a docker-compose.yml file which references ./.docker/angular-seed.development.dockerfile.
After cloning the seed project with git, I can start it by running the commands given on the seed project's GitHub page, so I can see the app after running:
$ docker-compose build
$ docker-compose up -d
But when I change code in Visual Studio Code and save, livereload doesn't work. The only way I can see my changes is by re-running the build and up commands (which re-runs npm install and takes about 5 minutes).
In Docker's documentation they say to "Mount a host directory as a data volume" in order to be able to "change the source code and see its effect on the application in real time"
docker run -v //c/<path>:/<container path>
But I'm not sure this is the right approach when I'm using docker-compose. I have also tried running:
docker run -d -P --name web -v //c/Users/k/dev/:/home/app/ angular-seed
docker run -p 5555:5555 -v //c/Users/k/dev/:/home/app/ -w "/home/app/" angular-seed
docker run -p 5555:5555 -v $(pwd):/home/app/ -w "/home/app/" angular-seed
and lots of similar commands but nothing seems to work.
I tried moving my project from C:/dev/project to home because I read somewhere that there might be access-rights issues when not using the "home" directory, but this made no difference.
I'm also a bit confused that the instructions say to visit localhost:5555. I have to go to dockerIP:5555 to see the app (in case this helps anyone understand why my code doesn't update inside my Docker container).
Surely my changes should move into the Docker environment automatically, or Docker is not very useful for development :)
Looking at the docker-compose.yml you've linked to, I don't see any volumes entry. Without one, there's no connection between the files on your host and the files inside the container. You'll need a docker-compose.yml that includes a volumes entry, like:
version: '2'
services:
  angular-seed:
    build:
      context: .
      dockerfile: ./.docker/angular-seed.development.dockerfile
    command: npm start
    container_name: angular-seed-start
    image: angular-seed
    networks:
      - dev-network
    ports:
      - '5555:5555'
    volumes:
      - .:/home/app/angular-seed
networks:
  dev-network:
    driver: bridge
Docker Machine runs Docker inside a VirtualBox VM. By default, I believe C:\Users is shared into the VM, but you'll need to check the VirtualBox settings to confirm this. Any host directories you try to map into the container are mapped from the VM, so if your folder is not shared into that VM, your files won't be included.
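If your project lives outside C:\Users, you can share an extra folder into the VM with VirtualBox's CLI (a sketch; the machine name "default" and the paths are assumptions, the VM must be powered off first, and the share may still need to be mounted inside the VM, so keeping the project under C:\Users is usually the simpler route):
VBoxManage sharedfolder add default --name "c/dev" --hostpath "C:\dev" --automount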
Regarding the IP: localhost works on Linux hosts and on newer versions of Docker for Windows/Mac. Older docker-machine-based installs need to use the IP of the VirtualBox VM.
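To find that IP with Docker Toolbox (assuming the default machine name):
docker-machine ip default
Then browse to http://<that-ip>:5555 instead of localhost:5555.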