I have two entities: my ASP.NET application and the database it reads its data from.
My Dockerfile is the following:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY bin/Debug/net6.0/ ./
COPY /DBinit.sql ./
COPY /entrypoint.sh ./
WORKDIR ./
ENTRYPOINT ["dotnet", "Server.dll"]
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
My entrypoint.sh is the following:
#!/bin/bash
set -e
run_cmd="dotnet run --server.urls http://*:80"
until dotnet ef database update; do
>&2 echo "SQL Server is starting up"
sleep 1
done
>&2 echo "SQL Server is up - executing command"
$run_cmd
sleep 30s
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Your_password123 -d master -i /DBinit.sql
And my Docker Compose is:
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
My application is running correctly, and both server_db and server_web are up.
My issue is that the db itself is empty and my DBinit.sql script is not getting executed.
From what I can see, my problem is that this DBinit.sql file is not getting copied into server_db.
If I run ls -la on / inside server_web, the init script is there (even though it shouldn't be part of my web app at all), while if I do the same inside server_db, there is no SQL script in it.
I am getting really confused with all of these Docker entities.
Could someone point me to a solution?
You cannot use /opt/mssql-tools/bin/sqlcmd from your ASP.NET container.
I recommend you do the following:
Create a separate Dockerfile for your DB (name it Dockerfile.db) and remove the DB-related lines from your app's Dockerfile:
FROM mcr.microsoft.com/mssql/server
COPY /DBinit.sql /
COPY /db_entrypoint.sh /
WORKDIR /
# you may chmod db_entrypoint.sh on your host system so you will not need this line at all
RUN chmod +x /db_entrypoint.sh
ENTRYPOINT /db_entrypoint.sh & /opt/mssql/bin/sqlservr
Move DB-related stuff to another entrypoint (let's name it db_entrypoint.sh):
#!/bin/bash
sleep 30s
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -d master -i /DBinit.sql
(note that I've replaced Your_password123 with the SA_PASSWORD environment variable)
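As a side note, the fixed sleep 30s can be flaky on slow machines. A hedged alternative sketch of db_entrypoint.sh that polls until SQL Server accepts connections (same sqlcmd path and env variable as above) would be:

#!/bin/bash
# Sketch: wait until SQL Server answers instead of sleeping a fixed 30 seconds
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q "SELECT 1" > /dev/null 2>&1
do
    echo "Waiting for SQL Server to accept connections..."
    sleep 1
done
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -d master -i /DBinit.sql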
Prepare your docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
build:
context: .
dockerfile: Dockerfile.db
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
That's all. Check it out; now you should be able to init your DB successfully.
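To verify that the init actually ran, one option is to list the databases from the host (a hedged check; adjust the sa password if you changed it):

docker compose exec db /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "Your_password123" -Q "SELECT name FROM sys.databases"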
Related
When spinning up a new SQL Server 2022 (or 2019) Docker container using -h or --hostname, e.g.
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=dev1234#" -p 1633:1433 --name sql22new --hostname sql22new -d mcr.microsoft.com/mssql/server:2022-latest
the value for @@SERVERNAME is (as expected) sql22new.
When running from docker compose, the value for @@SERVERNAME is buildkitsandbox.
Has anyone else come across this, solved it, and would like to help?
I need the value of @@SERVERNAME to show the hostname set correctly by compose.
If we look inside a container, the values in /etc/hosts are ok, and so is /etc/hostname.
Details
Dockerfile - image is called sqlha
FROM mcr.microsoft.com/mssql/server:2022-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=dev1234#
ENV MSSQL_PID=Developer
ENV MSSQL_TCP_PORT=1433
ENV MSSQL_AGENT_ENABLED=True
ENV MSSQL_ENABLE_HADR=1
WORKDIR /src
RUN mkdir /var/opt/mssql/backup/
RUN mkdir /tmp/certificates/
RUN mkdir /tmp/scripts/
COPY ./cert/* /tmp/certificates/
COPY *.sql /tmp/scripts/
RUN (/opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S127.0.0.1 -Usa -Pdev1234# -i /tmp/scripts/setup.sql
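Note that this RUN line starts sqlservr while the image is being built, so the system databases are initialized inside BuildKit's build container, whose hostname is buildkitsandbox; that is the likely reason @@SERVERNAME keeps that value at runtime. A hedged sketch of a one-time rename you could run after SQL Server is up in the real container (the script name and the moment you invoke it are assumptions, not part of the original setup):

#!/bin/bash
# rename_server.sh - align @@SERVERNAME with the container's current hostname
# @@SERVERNAME is stored in master, so the new value only shows up after SQL Server restarts
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q "
IF @@SERVERNAME <> CAST(SERVERPROPERTY('MachineName') AS sysname)
BEGIN
    DECLARE @old sysname = @@SERVERNAME;
    DECLARE @new sysname = CAST(SERVERPROPERTY('MachineName') AS sysname);
    EXEC sp_dropserver @old;
    EXEC sp_addserver @new, 'local';
END"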
docker-compose.yml
version: '3.9'
networks:
  db-server-network:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
services:
  db1:
    container_name: sqlNode1
    image: sqlha
    hostname: sqlNode1
    ports:
      - "1501:1433"
    extra_hosts:
      - "sqlNode2:172.16.238.12"
      - "sqlNode3:172.16.238.13"
    networks:
      db-server-network:
        ipv4_address: 172.16.238.11
        aliases:
          - sqlNode1
  db2:
    container_name: sqlNode2
    image: sqlha
    hostname: sqlNode2
    ports:
      - "1502:1433"
    extra_hosts:
      - "sqlNode1:172.16.238.11"
      - "sqlNode3:172.16.238.13"
    networks:
      db-server-network:
        ipv4_address: 172.16.238.12
        aliases:
          - sqlNode2
  db3:
    container_name: sqlNode3
    image: sqlha
    hostname: sqlNode3
    ports:
      - "1503:1433"
    extra_hosts:
      - "sqlNode1:172.16.238.11"
      - "sqlNode2:172.16.238.12"
    networks:
      db-server-network:
        ipv4_address: 172.16.238.13
        aliases:
          - sqlNode3
When starting the container directly with
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=dev1234#" -p 1633:1433 --name sql22new --hostname sql22new -d mcr.microsoft.com/mssql/server:2022-latest
@@SERVERNAME comes back as sql22new; when starting with compose, it comes back as buildkitsandbox.
sqlNode1 container inspection:
docker exec -u 0 -it sqlNode1 bash
then
apt-get update && apt-get install nano -y
nano /etc/hosts
nano /etc/hostname
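To see the mismatch directly, a hedged check from the host (using the sa password from the Dockerfile above):

docker exec -it sqlNode1 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "dev1234#" -Q "SELECT @@SERVERNAME, SERVERPROPERTY('MachineName')"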
I have two containers that run via docker-compose. My docker-compose looks like this:
version: "3.7"
services:
mssql:
build: ./Db
ports:
- 1433:1433
planning-poker:
build: .
restart: always
env_file:
- env.list
ports:
- 80:8080
depends_on:
- mssql
Dockerfile go-app:
FROM golang:latest
RUN apt-get -y update && \
apt-get install -y net-tools && \
apt-get install -y iputils-ping && \
apt-get install -y telnet
ADD . /go/src/planning-poker
WORKDIR /go/src/planning-poker
RUN go build -o main .
ENTRYPOINT ["./main"]
Dockerfile mssql:
FROM mcr.microsoft.com/mssql/server
ENV ACCEPT_EULA=Y
ENV MSSQL_SA_PASSWORD=Yukon_900
EXPOSE 1433
USER root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod +x ./run-initialization.sh
USER mssql
CMD /bin/bash ./entrypoint.sh
I am using database initialization scripts:
for i in {1..50};
do
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Yukon_900 -d master -i SQL_PlanningPoker.sql
    if [ $? -eq 0 ]
    then
        echo "SQL_PlanningPoker.sql completed"
        break
    else
        echo "not ready yet..."
        sleep 1
    fi
done
And this is my entrypoint:
./run-initialization.sh & /opt/mssql/bin/sqlservr
The problem is that I can’t connect to mssql from the container with the golang application in any way; the connection works from the host. I tried to connect via telnet to mssql from the go-app container on localhost 1433, 127.0.0.1 1433, and 0.0.0.0 1433, but I always get an error that the connection is either reset or that telnet cannot resolve the address.
My project: https://github.com/philomela/PlanningPoker/tree/master - master branch.
What am I doing wrong? Thank you in advance!
Try adding a network and keep all services running in the same network:
networks:
  my-network:
services:
  mssql:
    build: ./Db
    ports:
      - 1433:1433
    networks:
      - my-network
  planning-poker:
    build: .
    restart: always
    env_file:
      - env.list
    ports:
      - 80:8080
    depends_on:
      - mssql
    networks:
      - my-network
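Note that from inside the planning-poker container the database is not localhost; other services are reached by their compose service name. So a quick connectivity test from inside the go-app container would be (a sketch, assuming the service name mssql from the compose file above; telnet is already installed by your Dockerfile):

telnet mssql 1433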
Also, it is possible to check that the service is healthy with a health check rather than just depends_on, because the container may be up while SQL Server itself still takes some time to be up and running:
https://docs.docker.com/engine/reference/builder/#healthcheck
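A hedged sketch of what that could look like here (the probe command and timings are assumptions, and depends_on conditions need a reasonably recent Docker Compose):

services:
  mssql:
    build: ./Db
    healthcheck:
      test: ["CMD", "/opt/mssql-tools/bin/sqlcmd", "-S", "localhost", "-U", "sa", "-P", "Yukon_900", "-Q", "SELECT 1"]
      interval: 10s
      timeout: 5s
      retries: 10
  planning-poker:
    build: .
    depends_on:
      mssql:
        condition: service_healthy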
The issue is resolved. I suffered for several days but did not pay attention to port forwarding in my application; now everything is working stably!
I was recently hired on a website development project and I am having trouble deploying it via Docker on macOS. I just can't connect to the frontend via localhost:8000.
I have temporarily solved this problem by running Docker in a virtual machine (Ubuntu), but some things are not working correctly because of this setup.
What are the ways to solve this problem?
Here is the config in the Dockerfiles:
Dockerfile (frontend)
# pull official base image
FROM node:12-alpine as build
# create and set working directory
WORKDIR /app
# install app dependencies
RUN mkdir frontend
COPY package.json ./frontend/
COPY package-lock.json ./frontend/
COPY frontend_healthchecker.sh ./frontend/
RUN chmod +x /app/frontend/frontend_healthchecker.sh
RUN ls
RUN cd frontend && ls
RUN cd frontend && npm install --only=prod
# RUN cd frontend && npm install --scripts-prepend-node-path=auto
# add app code
COPY . ./frontend/
RUN cd frontend && ls
# start app
RUN cd frontend && npm install && npm run build
FROM nginx:1.16.0-alpine
COPY --from=build /app/frontend/build /usr/share/nginx/html
COPY --from=build /app/frontend/nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/frontend/frontend_healthchecker.sh /home
RUN chmod +x /home/frontend_healthchecker.sh
RUN apk add curl
RUN apk search curl
HEALTHCHECK CMD ["/bin/sh", "/home/frontend_healthchecker.sh"]
EXPOSE 8000
CMD ["nginx", "-g", "daemon off;"]
Dockerfile (backend)
FROM node:12
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
#install ffmpeg
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y ffmpeg
RUN apt-get install -y mediainfo
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y python-pip
RUN pip --version
RUN pip install numpy
RUN pip install opencv-contrib-python
RUN pip install urllib3
WORKDIR /app
COPY . /app
COPY backend_healthchecker.sh /app
RUN chmod +x backend_healthchecker.sh
RUN ls
RUN npm install
EXPOSE 8080
WORKDIR /app/bin
HEALTHCHECK CMD ../backend_healthchecker.sh
ENTRYPOINT ["node"]
CMD ["www"]
docker-compose.yaml
version: '3.3'
services:
kurento:
network_mode: "host"
build:
context: ./kurento
volumes:
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"
- static-content:/tmp
container_name: p_kurento
environment:
- GST_DEBUG=2,Kurento*:5
mongo:
network_mode: "host"
image: mongo:latest
restart: always
container_name: p_mongo
volumes:
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"
- db-content:/data/db
healthcheck:
test: mongo localhost:27017/test | mongo show dbs
interval: 1m
timeout: 15m
retries: 5
backend:
network_mode: "host"
env_file:
- ./backend/.env
build:
context: ./backend
container_name: p_backend
volumes:
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"
- static-content:/files
frontend:
network_mode: "host"
env_file:
- ./frontend/.env
build:
context: ./frontend
container_name: p_frontend
volumes:
- "/etc/timezone:/etc/timezone:ro"
- "/etc/localtime:/etc/localtime:ro"
coturn:
network_mode: "host"
build:
context: ./stun
container_name: p_turn
portainer:
#network_mode: "host"
restart: always
image: portainer/portainer
command: --admin-password=somepassword -H unix:///var/run/docker.sock
ports:
- "9000:9000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
container_name: portainer
volumes:
static-content:
db-content:
network_mode: host doesn't work on MacOS or Windows systems. The Docker Use host networking documentation notes:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
It also essentially entirely disables Docker's networking stack, and is almost never necessary.
You need to do three things here:
Remove all of the network_mode: host lines from the Compose file. (The container_name: lines are also unnecessary.)
For any of the services you need to access from outside Docker (could be all of them and that's fine) add ports: to publish their container ports.
When any of these services internally call other services configure their URLs (for example, in the .env files) to use their Compose service names. (See Networking in Compose in the Docker documentation.) (Also note that your frontend application probably actually runs in a browser, even if the code is served from a container, and can't use these host names at all; this specific point needs to still use localhost or the host name where the system will eventually be deployed.)
So, for example, the setup for the frontend and backend containers could look like:
version: '3.8'
services:
  mongo: { ... }
  backend:
    # no network_mode: host or container_name:
    # if this needs to be accessed from the browser application
    ports:
      - '8080:8080'
    # probably actually put this in .env
    environment:
      - MONGO_HOST=mongo
    # unchanged from the original
    env_file:
      - ./backend/.env
    build:
      context: ./backend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - static-content:/files
  frontend:
    ports:
      - '8000:8000'
    environment:
      - BACKEND_URL=http://localhost:8080 # as would be seen from the browser
    env_file:
      - ./frontend/.env
    build:
      context: ./frontend
    volumes:
      - "/etc/timezone:/etc/timezone:ro"
      - "/etc/localtime:/etc/localtime:ro"
You can try to create a loopback alias address and use that instead of the default localhost.
Steps:
Create a file named loopback-alias.plist in your current directory with the below content:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist>
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>loopback-alias</string>
    <key>ProgramArguments</key>
    <array>
      <string>/sbin/ifconfig</string>
      <string>lo0</string>
      <string>alias</string>
      <string>10.200.10.1</string>
      <string>netmask</string>
      <string>255.255.255.0</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
You can change the loop-back address.
Copy the file to /Library/LaunchDaemons/
Restart your network
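To activate the daemon without a full reboot, something like this should work (a sketch; loading LaunchDaemons requires sudo):

sudo launchctl load /Library/LaunchDaemons/loopback-alias.plist
# verify the alias is attached to the loopback interface
ifconfig lo0 | grep 10.200.10.1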
Sample code to use this in Docker-compose
redis:
  image: redis
  expose:
    - '6379'
  ports:
    - '10.200.10.1:6379:6379'
  extra_hosts:
    - 'localhost:10.200.10.1'
You can check the following link for more details.
https://blog.sbaranidharan.online/index.php/2021/05/05/docker-macos-expose-container-ports-to-host-machine/
I have noticed that when I try to run the docker-compose up command for the first time I get an error:
Starting mssql ...
Starting mssql ... done
Recreating api ...
Recreating api ... done
Attaching to mssql, api
api exited with code 1
This happens because the api tries to get data from the DB, but MSSQL has not finished starting yet.
So, my question is: is it possible to somehow wait for the DB and only after that run the API?
Here are my docker-compose and dockerfile
docker-compose:
version: '3.3'
services:
  api:
    image: api
    container_name: api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:80"
    depends_on:
      - db
  db:
    image: "microsoft/mssql-server-linux"
    container_name: mssql
    environment:
      SA_PASSWORD: "testtest3030!"
      ACCEPT_EULA: "Y"
      MSSQL_PID: "Express"
    ports:
      - "8001:1433"
dockerfile:
# Build Stage
FROM microsoft/aspnetcore-build as build-env
WORKDIR /source
COPY . .
RUN dotnet restore
RUN dotnet publish -o /publish --configuration Release
# Publish Stage
FROM microsoft/aspnetcore
WORKDIR /app
COPY --from=build-env /publish .
ENTRYPOINT ["dotnet", "Api.dll"]
I also noticed in the logs:
2017-11-17 22:12:42.67 Logon Error: 18456, Severity: 14, State: 38.
2017-11-17 22:12:42.67 Logon Login failed for user 'sa'. Reason: Failed to open the explicitly specified database 'MyDb'. [CLIENT: 172.26.0.3]
You could use a simple entrypoint.sh script:
#!/bin/bash
set -e
run_cmd="dotnet your_app.dll"
sleep 10
exec $run_cmd
And the Dockerfile will change accordingly:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-bionic AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY ["src/entrypoint.sh", ""]
RUN chmod +x entrypoint.sh
# .... here your copy/restore/build/publish
ENTRYPOINT [ "/bin/bash", "entrypoint.sh" ]
I am new to Docker and I am trying to make an application using django-rest and Angular. My current docker-compose file looks like this:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=pirate
      - POSTGRES_USER=django
      - POSTGRES_PASSWORD=secreat
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  backend:
    entrypoint: /entrypoint.sh
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    healthcheck:
      test: ["CMD", "curl", "--fail", 'http://localhost:8000']
      interval: 10s
      timeout: 5s
      retries: 3
  frontend:
    build: ./frontend
    volumes:
      - .:/app
    healthcheck:
      test: ["CMD", "curl", "--fail", 'http://localhost:4200']
      interval: 10s
      timeout: 5s
      retries: 3
    ports:
      - "4200:4200"
  nginx:
    build: ./nginx
    healthcheck:
      test: ["CMD", "curl", "--fail", 'http://localhost']
      interval: 10s
      timeout: 5s
      retries: 3
    ports:
      - "80:80"
    links:
      - frontend
volumes:
  db-data:
And this is my angular Dockerfile:
FROM node:8.6
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
# Here starts angular cli workaround
USER node
RUN mkdir /home/node/.npm-global
ENV PATH=/home/node/.npm-global/bin:$PATH
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
RUN npm install -g @angular/cli
# Here ends
COPY . /usr/src/app
CMD ["npm", "start"]
And now this is the problem: whenever I change something in my Angular code, the Docker image with Angular does not reload the changes. I don't know what I am doing wrong.
The problem is related to how the filesystem works in Docker. To fix this I suggest you perform hot reloads (you have to add EXPOSE 49153 in the Dockerfile and ports - '49153:49153' in docker-compose.yml).
There are other solutions like inotify or nodemon, but they require that you use the --poll option when you start your application. The problem is that they keep polling the fs for changes, and if the application is big your machine will be a lot slower than you'd like.
I think I found the issue. You copy the app into /usr/src/app but you're setting .:/app as a volume. This means that if you get into your Docker instance you'll find your application in 2 places: /app and /usr/src/app.
To fix this you should have this mapping: .:/usr/src/app
Btw, you're going to use the node_modules from your host, and this might create some issues. To avoid this you can add an empty volume mapping: /usr/src/app/node_modules
If you get inside your running container, you'll find that the app folder exists twice. You can try it by executing:
docker exec -it $instanceName /bin/sh
ls /app
ls /usr/src/app
The problem is that only the content of /app changes while you code, whereas your application is executing the content of /usr/src/app, which always stays the same.
Your frontend in the docker-compose should look like this:
frontend:
  build: ./frontend
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules
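On top of that, to keep the host's node_modules (mentioned above) out of the image entirely, you may also want a .dockerignore next to the frontend Dockerfile; a minimal sketch (the exact entries depend on your project):

node_modules
dist
.git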
I came across the same issue in Docker Desktop for Windows. I know it has been a while, but for anybody who comes here looking for an answer like me, these are the steps.
Modify the start command to "start": "ng serve --host 0.0.0.0 --poll 500" in the scripts section of package.json. (Here the number 500 means that the client will check every 500 milliseconds whether a change has been made; you can reduce this number. Refer this.)
Make sure port 49153 is exposed in the Dockerfile (use the correct Node version):
FROM node:10.16.3-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 4200 49153
CMD npm run start
Map ports and volumes in docker-compose.yml
version: "3.7"
services:
webapp:
build: .
ports:
- "4200:4200"
- "49153:49153"
volumes:
- "/app/node_modules"
- ".:/app"
After that, running docker-compose up will build the image and spin up a new container which will automatically reload with new changes.