No connections to mssql in docker compose - sql-server

I have two containers that run via docker-compose; my docker-compose.yml looks like this:
version: "3.7"
services:
mssql:
build: ./Db
ports:
- 1433:1433
planning-poker:
build: .
restart: always
env_file:
- env.list
ports:
- 80:8080
depends_on:
- mssql
Dockerfile go-app:
FROM golang:latest
RUN apt-get -y update && \
apt-get install -y net-tools && \
apt-get install -y iputils-ping && \
apt-get install -y telnet
ADD . /go/src/planning-poker
WORKDIR /go/src/planning-poker
RUN go build -o main .
ENTRYPOINT ["./main"]
Dockerfile mssql:
FROM mcr.microsoft.com/mssql/server
ENV ACCEPT_EULA=Y
ENV MSSQL_SA_PASSWORD=Yukon_900
EXPOSE 1433
USER root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod +x ./run-initialization.sh
USER mssql
CMD /bin/bash ./entrypoint.sh
I am using a database initialization script (run-initialization.sh):
for i in {1..50}; do
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Yukon_900 -d master -i SQL_PlanningPoker.sql
    if [ $? -eq 0 ]; then
        echo "SQL_PlanningPoker.sql completed"
        break
    else
        echo "not ready yet..."
        sleep 1
    fi
done
And this is my entrypoint.sh:
./run-initialization.sh & /opt/mssql/bin/sqlservr
The problem is that I can't connect to mssql from the container with the Golang application in any way, although the connection works from the host. I tried to connect via telnet to mssql from the go-app container on localhost 1433, 127.0.0.1 1433, and 0.0.0.0 1433, but I always get an error that the connection is either reset or telnet cannot resolve the address.
My project: https://github.com/philomela/PlanningPoker/tree/master (master branch).
What am I doing wrong? Thank you in advance!

Try adding a network and keep all services running in the same network:
networks:
  my-network:
services:
  mssql:
    build: ./Db
    ports:
      - 1433:1433
    networks:
      - my-network
  planning-poker:
    build: .
    restart: always
    env_file:
      - env.list
    ports:
      - 80:8080
    depends_on:
      - mssql
    networks:
      - my-network
Also, consider adding a healthcheck to verify the service is actually healthy rather than relying on depends_on alone, because the container may be up while SQL Server itself still needs some time to start accepting connections.
https://docs.docker.com/engine/reference/builder/#healthcheck
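For example, a minimal sketch for this compose file (not taken from the project; the sqlcmd path and sa password are assumed from the Dockerfile above, and depends_on with condition: service_healthy needs a Compose version that honors it, which the current compose spec does):
services:
  mssql:
    build: ./Db
    ports:
      - 1433:1433
    healthcheck:
      test: ["CMD-SHELL", "/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Yukon_900 -Q 'SELECT 1' || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 10
  planning-poker:
    build: .
    depends_on:
      mssql:
        condition: service_healthy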

The issue is resolved. I struggled with it for several days but had not paid attention to the port forwarding in my application; now everything is working stably!
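For anyone hitting the same thing: the application-side fix usually amounts to pointing the connection at the compose service name (mssql) instead of localhost. A minimal Go sketch, assuming the go-mssqldb driver and the sa credentials from the Dockerfile above (the actual project may use a different driver or DSN format):
package main

import (
	"database/sql"
	"log"

	_ "github.com/denisenkom/go-mssqldb" // assumed driver; the project may use a different one
)

func main() {
	// Inside the compose network, address the DB by its service name "mssql", not localhost.
	dsn := "sqlserver://sa:Yukon_900@mssql:1433?database=master"
	db, err := sql.Open("sqlserver", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to mssql")
}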

Related

Docker hostname in compose for SQL Server images

When spinning up a new SQL Server 2022 (or 2019) Docker container using -h or --hostname
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=dev1234#" -p 1633:1433 --name sql22new --hostname sql22new -d mcr.microsoft.com/mssql/server:2022-latest
the value for @@SERVERNAME is (as expected) sql22new.
When running from docker compose, the value for @@SERVERNAME is buildkitsandbox.
Has anyone else come across this, solved it, and would be willing to help?
I need the value of @@SERVERNAME to show the hostname set by compose.
If we look inside a container, the values in /etc/hosts and /etc/hostname are correct.
Details
Dockerfile - image is called sqlha
FROM mcr.microsoft.com/mssql/server:2022-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=dev1234#
ENV MSSQL_PID=Developer
ENV MSSQL_TCP_PORT=1433
ENV MSSQL_AGENT_ENABLED=True
ENV MSSQL_ENABLE_HADR=1
WORKDIR /src
RUN mkdir /var/opt/mssql/backup/
RUN mkdir /tmp/certificates/
RUN mkdir /tmp/scripts/
COPY ./cert/* /tmp/certificates/
COPY *.sql /tmp/scripts/
RUN (/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S127.0.0.1 -Usa -Pdev1234# -i /tmp/scripts/setup.sql
docker-compose.yml
version: '3.9'
networks:
  db-server-network:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
services:
  db1:
    container_name: sqlNode1
    image: sqlha
    hostname: sqlNode1
    ports:
      - "1501:1433"
    extra_hosts:
      - "sqlNode2:172.16.238.12"
      - "sqlNode3:172.16.238.13"
    networks:
      db-server-network:
        ipv4_address: 172.16.238.11
        aliases:
          - sqlNode1
  db2:
    container_name: sqlNode2
    image: sqlha
    hostname: sqlNode2
    ports:
      - "1502:1433"
    extra_hosts:
      - "sqlNode1:172.16.238.11"
      - "sqlNode3:172.16.238.13"
    networks:
      db-server-network:
        ipv4_address: 172.16.238.12
        aliases:
          - sqlNode2
  db3:
    container_name: sqlNode3
    image: sqlha
    hostname: sqlNode3
    ports:
      - "1503:1433"
    extra_hosts:
      - "sqlNode1:172.16.238.11"
      - "sqlNode2:172.16.238.12"
    networks:
      db-server-network:
        ipv4_address: 172.16.238.13
        aliases:
          - sqlNode3
When running the container directly:
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=dev1234#" -p 1633:1433 --name sql22new --hostname sql22new -d mcr.microsoft.com/mssql/server:2022-latest
When running with compose:
sqlNode1 container inspection:
docker exec -u 0 -it sqlNode1 bash
then
apt-get update && apt-get install nano -y
nano /etc/hosts
nano /etc/hostname
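To verify the value quickly from inside one of the compose containers, something like this should work (a sketch; sqlcmd path and sa password taken from the Dockerfile above):
docker exec -it sqlNode1 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'dev1234#' -Q "SELECT @@SERVERNAME"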

EF Core 7, RemoteCertificateValidationCallback error thrown in Docker container

I have this connection string:
server=xxx.xxx.xxx.xxx,1432;database=DbName;user id=sa;password=DbPassword123;TrustServerCertificate=True;
I use .NET 7 and EF Core 7 - it works fine on my local PC.
But when I run this project in a Docker container, it throws this error:
Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught)
System.Security.Authentication.AuthenticationException: The remote certificate was rejected by the provided RemoteCertificateValidationCallback.
I use this Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY ["Project.Api/Project.Api.csproj", "Project.Api/"]
COPY ["Project.Module.Admin/Project.Module.Admin.csproj", "Project.Module.Admin/"]
COPY ["Project.Module.Shared/Project.Module.Shared.csproj", "Project.Module.Shared/"]
COPY ["Project.Data/Project.Data.csproj", "Project.Data/"]
COPY ["Project.Infrastructure/Project.Infrastructure.csproj", "Project.Infrastructure/"]
COPY ["Project.Identity/Project.Identity.csproj", "Project.Identity/"]
COPY ["Project.Job/Project.Job.csproj", "Project.Job/"]
COPY ["Project.Module.Driver/Project.Module.Driver.csproj", "Project.Module.Driver/"]
COPY ["Project.Module.User/Project.Module.User.csproj", "Project.Module.User/"]
RUN dotnet restore "Project.Api/Project.Api.csproj"
COPY . .
WORKDIR "/src/Project.Api"
RUN dotnet build "Project.Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Project.Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Project.Api.dll"]
And this docker-compose file:
version: "3.4"
services:
db-production:
image: "mcr.microsoft.com/mssql/server:2022-latest"
container_name: db-production
restart: always
environment:
ACCEPT_EULA: "Y"
MSSQL_SA_PASSWORD: "DbPassword123"
volumes:
- /home/volumes/mssql/data/production:/var/opt/mssql/data
ports:
- "1432:1433"
backend-production:
image: "backend_production:${TAG}"
container_name: backend-production
environment:
- "ASPNETCORE_ENVIRONMENT=Production"
build:
context: .
dockerfile: Project.Api/Dockerfile
ports:
- "${EXPOSED_PORT_PRODUCTION}:80"
restart: unless-stopped
depends_on:
- db-production
I expected it to work in the Docker container the same way it does on my local PC.
This is a known EF Core 7 issue tracked on GitHub (the underlying SqlClient issue is linked below). If you downgrade to version 6.x.x, it works fine in Docker.
https://github.com/dotnet/SqlClient/issues/1856
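For reference, downgrading typically means pinning the EF Core packages to a 6.x release in the .csproj; a rough sketch (package names and versions are assumptions, adjust to what the project actually references):
<ItemGroup>
  <!-- Sketch: pin EF Core to a 6.x release instead of 7.x -->
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="6.0.0" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="6.0.0" />
</ItemGroup>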

Docker Compose Two Applications Init DB

I have two parts: my ASP.NET application and the database it takes its data from.
My Dockerfile is following:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY bin/Debug/net6.0/ ./
COPY /DBinit.sql ./
COPY /entrypoint.sh ./
WORKDIR ./
ENTRYPOINT ["dotnet", "Server.dll"]
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
My entrypoint.sh is following:
#!/bin/bash
set -e
run_cmd="dotnet run --server.urls http://*:80"
until dotnet ef database update; do
>&2 echo "SQL Server is starting up"
sleep 1
done
>&2 echo "SQL Server is up - executing command"
$run_cmd
sleep 30s
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Your_password123 -d master -i /DBinit.sql
And my Docker Compose is:
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
My application is running correctly; both server_db and server_web are up.
My issue is that the db itself is empty and my DBinit.sql script is not getting executed.
From what I can see, the problem is that the DBinit.sql file is not getting copied into server_db.
If I run ls -la on / in server_web, the init script is there (though it shouldn't even be part of my web app), while doing the same inside server_db shows no SQL script at all.
I am getting really confused by all of these Docker entities.
Could someone point me to a solution?
You cannot use /opt/mssql-tools/bin/sqlcmd from your asp.net container.
I recommend doing the following:
Create a separate Dockerfile for your DB (name it Dockerfile.db) and remove the DB-related lines from your app's Dockerfile:
FROM mcr.microsoft.com/mssql/server
COPY /DBinit.sql /
COPY /db_entrypoint.sh /
WORKDIR /
# you may chmod db_entrypoint.sh on your host system so you will not need this line at all
RUN chmod +x /db_entrypoint.sh
ENTRYPOINT /db_entrypoint.sh & /opt/mssql/bin/sqlservr
Move DB-related stuff to another entrypoint (let's name it db_entrypoint.sh):
#!/bin/bash
sleep 30s
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -d master -i /DBinit.sql
(note that I've replaced Your_password123 with the $SA_PASSWORD environment variable)
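If the fixed sleep 30s proves flaky, a retry loop (the same idea as the init script in the first question above) is a slightly more robust sketch:
#!/bin/bash
# Sketch: retry until SQL Server accepts connections instead of sleeping a fixed time
for i in {1..50}; do
    if /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -d master -i /DBinit.sql; then
        echo "DBinit.sql completed"
        break
    fi
    echo "not ready yet..."
    sleep 1
done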
Prepare your docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
build:
context: .
dockerfile: Dockerfile.db
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
That's all. Check it out; now you should be able to init your DB successfully.

Can't connect to frontend in Dockerized React web app on macOS

I was recently hired onto a website development project and I am having trouble deploying it via Docker on macOS. I just can't connect to the frontend via localhost:8000.
I have temporarily worked around this by running Docker in a virtual machine (Ubuntu), but some things do not work correctly with that setup.
What are the ways to solve this problem?
Here are the Dockerfiles and compose config:
Dockerfile (frontend)
# pull official base image
FROM node:12-alpine as build
# create and set working directory
WORKDIR /app
# install app dependencies
RUN mkdir frontend
COPY package.json ./frontend/
COPY package-lock.json ./frontend/
COPY frontend_healthchecker.sh ./frontend/
RUN chmod +x /app/frontend/frontend_healthchecker.sh
RUN ls
RUN cd frontend && ls
RUN cd frontend && npm install --only=prod
# RUN cd frontend && npm install --scripts-prepend-node-path=auto
# add app code
COPY . ./frontend/
RUN cd frontend && ls
# start app
RUN cd frontend && npm install && npm run build
FROM nginx:1.16.0-alpine
COPY --from=build /app/frontend/build /usr/share/nginx/html
COPY --from=build /app/frontend/nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/frontend/frontend_healthchecker.sh /home
RUN chmod +x /home/frontend_healthchecker.sh
RUN apk add curl
RUN apk search curl
HEALTHCHECK CMD ["/bin/sh", "/home/frontend_healthchecker.sh"]
EXPOSE 8000
CMD ["nginx", "-g", "daemon off;"]
Dockerfile (backend)
FROM node:12
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
#install ffmpeg
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y ffmpeg
RUN apt-get install -y mediainfo
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y python-pip
RUN pip --version
RUN pip install numpy
RUN pip install opencv-contrib-python
RUN pip install urllib3
WORKDIR /app
COPY . /app
COPY backend_healthchecker.sh /app
RUN chmod +x backend_healthchecker.sh
RUN ls
RUN npm install
EXPOSE 8080
WORKDIR /app/bin
HEALTHCHECK CMD ../backend_healthchecker.sh
ENTRYPOINT ["node"]
CMD ["www"]
docker-compose.yaml
version: '3.3'
services:
  kurento:
    network_mode: "host"
    build:
      context: ./kurento
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - static-content:/tmp
    container_name: p_kurento
    environment:
      - GST_DEBUG=2,Kurento*:5
  mongo:
    network_mode: "host"
    image: mongo:latest
    restart: always
    container_name: p_mongo
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - db-content:/data/db
    healthcheck:
      test: mongo localhost:27017/test | mongo show dbs
      interval: 1m
      timeout: 15m
      retries: 5
  backend:
    network_mode: "host"
    env_file:
      - ./backend/.env
    build:
      context: ./backend
    container_name: p_backend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - static-content:/files
  frontend:
    network_mode: "host"
    env_file:
      - ./frontend/.env
    build:
      context: ./frontend
    container_name: p_frontend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
  coturn:
    network_mode: "host"
    build:
      context: ./stun
    container_name: p_turn
  portainer:
    #network_mode: "host"
    restart: always
    image: portainer/portainer
    command: --admin-password=somepassword -H unix:///var/run/docker.sock
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: portainer
volumes:
  static-content:
  db-content:
network_mode: host doesn't work on macOS or Windows systems. The Docker "Use host networking" documentation notes:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
It also essentially entirely disables Docker's networking stack, and is almost never necessary.
You need to do three things here:
1. Remove all of the network_mode: host lines from the Compose file. (The container_name: lines are also unnecessary.)
2. For any of the services you need to access from outside Docker (which could be all of them, and that's fine), add ports: to publish their container ports.
3. When any of these services internally calls other services, configure their URLs (for example, in the .env files) to use their Compose service names. (See Networking in Compose in the Docker documentation.) Also note that your frontend application probably actually runs in a browser, even if the code is served from a container, and can't use these host names at all; that specific case still needs to use localhost or the host name where the system will eventually be deployed.
So, for example, the setup for the frontend and backend containers could look like:
version: '3.8'
services:
  mongo: { ... }
  backend:
    # no network_mode: host or container_name:
    # if this needs to be accessed from the browser application
    ports:
      - '8080:8080'
    # probably actually put this in .env
    environment:
      - MONGO_HOST=mongo
    # unchanged from the original
    env_file:
      - ./backend/.env
    build:
      context: ./backend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - static-content:/files
  frontend:
    ports:
      - '8000:8000'
    environment:
      - BACKEND_URL=http://localhost:8080 # as would be seen from the browser
    env_file:
      - ./frontend/.env
    build:
      context: ./frontend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
You can try creating a loopback alias address and using that instead of the default localhost.
Steps:
Create a file named loopback-alias.plist in your current directory with the content below:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist>
<plist version="1.0">
<dict>
<key>Label</key>
<string>loopback-alias</string>
<key>ProgramArguments</key>
<array>
<string>/sbin/ifconfig</string>
<string>lo0</string>
<string>alias</string>
<string>10.200.10.1</string>
<string>netmask</string>
<string>255.255.255.0</string>
</array>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>
You can change the loopback address.
Copy the file to /Library/LaunchDaemons/ and restart your network.
Sample usage in docker-compose:
redis:
  image: redis
  expose:
    - '6379'
  ports:
    - '10.200.10.1:6379:6379'
  extra_hosts:
    - 'localhost:10.200.10.1'
You can check the following link for more details.
https://blog.sbaranidharan.online/index.php/2021/05/05/docker-macos-expose-container-ports-to-host-machine/

Asp.Net Core + SQL Server on Docker - sleep for startup DB

I have noticed that when I try to run the docker-compose up command for the first time, I get an error:
Starting mssql ...
Starting mssql ... done
Recreating api ...
Recreating api ... done
Attaching to mssql, api
api exited with code 1
This happens because api tries to get data from the DB, but MSSQL has not started yet.
So my question: is it possible to somehow wait for the DB and only then run the API?
Here are my docker-compose and dockerfile
docker-compose:
version: '3.3'
services:
  api:
    image: api
    container_name: api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:80"
    depends_on:
      - db
  db:
    image: "microsoft/mssql-server-linux"
    container_name: mssql
    environment:
      SA_PASSWORD: "testtest3030!"
      ACCEPT_EULA: "Y"
      MSSQL_PID: "Express"
    ports:
      - "8001:1433"
dockerfile:
# Build Stage
FROM microsoft/aspnetcore-build as build-env
WORKDIR /source
COPY . .
RUN dotnet restore
RUN dotnet publish -o /publish --configuration Release
# Publish Stage
FROM microsoft/aspnetcore
WORKDIR /app
COPY --from=build-env /publish .
ENTRYPOINT ["dotnet", "Api.dll"]
I also noticed this in the logs:
2017-11-17 22:12:42.67 Logon Error: 18456, Severity: 14, State: 38.
2017-11-17 22:12:42.67 Logon Login failed for user 'sa'. Reason: Failed to open the explicitly specified database 'MyDb'. [CLIENT: 172.26.0.3]
You could use a simple entrypoint.sh script:
#!/bin/bash
set -e
run_cmd="dotnet your_app.dll"
sleep 10
exec $run_cmd
And the Dockerfile changes accordingly:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-bionic AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY ["src/entrypoint.sh", ""]
RUN chmod +x entrypoint.sh
# .... here your copy/restore/build/publish
ENTRYPOINT [ "/bin/bash", "entrypoint.sh" ]
