I have been researching how to connect multiple Docker containers in the same Compose file to a database (MySQL/MariaDB) running on the local host. Currently the database is containerized for development, but production requires a separate database running on the host. Eventually, the database will be deployed to AWS or Azure.
There are lots of similar questions on SO, but none that seem to address this particular situation.
Given the existing docker-compose.yml
version: '3.1'
services:
db:
    image: mariadb:10.3
volumes:
- "~/data/lib/mysql:/var/lib/mysql:Z"
api:
image: t-api:latest
depends_on:
- db
web:
image: t-web:latest
scan:
image: t-scan:latest
proxy:
build:
context: .
dockerfile: nginx.Dockerfile
image: t-proxy
depends_on:
- web
ports:
- 80:80
All these services are reverse-proxied behind nginx, and both the api and scan services require access to the database. There are other services requiring database access that are not shown, for simplicity.
The production compose file would be:
version: '3.1'
services:
  api:
image: t-api:latest
depends_on:
- db
web:
image: t-web:latest
scan:
image: t-scan:latest
proxy:
build:
context: .
dockerfile: nginx.Dockerfile
image: t-proxy
depends_on:
- web
ports:
- 80:80
If there were a single container requiring database access, I could just open up port 3306:3306, but that won't work for multiple containers.
Splitting up the containers breaks the reverse proxy and adds complexity to deployment and management. I've tried extra_hosts:
extra_hosts:
  - "myhost:xx.xx.xx.xx"
but this generates EAI_AGAIN DNS errors, which is strange because I can ping the host from inside the containers. I realize this may not be possible.
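For reference, the approach that tends to work for this situation is Docker's host-gateway alias rather than a hard-coded IP in extra_hosts. A minimal sketch, assuming Docker Engine 20.10+ (DB_HOST is a hypothetical variable the services read; service names are taken from the compose file above):
services:
  api:
    image: t-api:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"   # maps the alias to the host's bridge gateway IP
    environment:
      - DB_HOST=host.docker.internal          # hypothetical: however your app locates the DB
  scan:
    image: t-scan:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
Note that the MariaDB on the host must also accept connections on the Docker bridge interface (e.g. bind-address = 0.0.0.0 rather than 127.0.0.1), since the containers' connections arrive via the bridge gateway, not loopback.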
Related
I'm new to Docker and containers in general, trying to containerize a simple MERN-based todo list application. Locally on my PC, I can successfully send HTTP POST requests from my React frontend to my Node.js/Express backend and create a new todo item. I use the 'proxy' field in my client folder's package.json file, as shown below:
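{
  "proxy": "http://localhost:3001"
}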
React starts up on port 3000, my API server starts up on port 3001, and with the proxy field defined, all is good locally.
My issue arises when I containerize the three services (i.e. React, API server, and MongoDB). When I try to make the same fetch post request, I receive the following console error:
I will provide the code for my docker-compose file; perhaps it is useful in finding a solution.
version: '3.7'
services:
client:
depends_on:
- server
build:
context: ./client
dockerfile: Dockerfile
image: jlcomp03/rajant-client
container_name: container_client
command: npm start
volumes:
- ./client/src/:/usr/app/src
- ./client/public:/usr/app/public
# - /usr/app/node_modules
ports:
- "3000:3000"
networks:
- frontend
stdin_open: true
tty: true
server:
depends_on:
- mongo
build:
context: ./server
dockerfile: Dockerfile
image: jlcomp03/rajant-server
container_name: container_server
# command: /usr/src/app/node_modules/.bin/nodemon server.js
volumes:
- ./server/src:/usr/app/src
# - /usr/src/app/node_modules
ports:
- "3001:3001"
links:
- mongo
environment:
- NODE_ENV=development
      - MONGODB_CONNSTRING=mongodb://container_mongodb:27017/todo_db # no quotes; in list form they become part of the value
networks:
- frontend
- backend
mongo:
image: mongo
restart: always
container_name: container_mongodb
volumes:
- mongo-data:/data/db
ports:
- "27017:27017"
networks:
- backend
volumes:
mongo-data:
driver: local
node_modules:
web-root:
driver: local
networks:
backend:
driver: bridge
frontend:
My intuition tells me the issue lies in some configuration parameter I am not addressing in my docker-compose.yml file. Please help!
Your proxy config won't work with containers because of its use of localhost.
The Docker bridge network docs provide some insight why:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I'd suggest creating your own bridge network and communicating via container name or alias.
{
"proxy": "http://container_server:3001"
}
Another option is to use http://host.docker.internal:3001.
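One caveat: host.docker.internal resolves out of the box on Docker Desktop (Windows/macOS), and the request then travels via the host, so it relies on the server publishing port 3001. On Linux the alias additionally has to be mapped, e.g.:
  client:
    extra_hosts:
      - "host.docker.internal:host-gateway"   # Docker Engine 20.10+; built in on Docker Desktop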
I am trying to deploy a WordPress instance on my Pi using Docker. Unfortunately I am receiving an error that the app cannot establish a DB connection.
All containers run on the bridge network. I am exposing port 80 of the app on 8882 and port 3306 of the DB on 3382.
A second WordPress installation on ports 8881 (app) and 3381 (DB) on the same network works perfectly. Where is the flaw in my setup?
version: '2.1'
services:
wordpress:
image: wordpress
network_mode: bridge
restart: always
ports:
- 8882:80
environment:
PUID: 1000
PGID: 1000
WORDPRESS_DB_HOST: [addr. of PI]:3382
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: secret
WORDPRESS_DB_NAME: wordpress
volumes:
- wordpress:/var/www/html
db:
image: ghcr.io/linuxserver/mariadb
network_mode: bridge
environment:
- PUID=1000
- PGID=1000
- MYSQL_ROOT_PASSWORD=secret
- TZ=Europe/Berlin
- MYSQL_DATABASE=wordpress
- MYSQL_USER=wordpress
- MYSQL_PASSWORD=secret #Must match the above password
volumes:
- db:/config
ports:
- 3382:3306
restart: unless-stopped
volumes:
db:
wordpress:
When containers share a user-defined network, they can talk to each other using their service names as hostnames. This applies to user-defined networks, not to the default bridge that network_mode: bridge puts you on, so drop those lines and let Compose create its per-project network. The wordpress container can then reach the database container at the hostname db. Since traffic is not going via the host, the port mappings are irrelevant and you just connect on the container port 3306.
So if you change
WORDPRESS_DB_HOST: [addr. of PI]:3382
to
WORDPRESS_DB_HOST: db
it should work.
You can remove the port mapping on the database container if you don't need to access the database directly from the host.
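Putting that together, a minimal sketch of the corrected file, trimmed to the relevant parts (note that network_mode: bridge is gone, so Compose's default per-project network provides name resolution):
version: '2.1'
services:
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8882:80
    environment:
      WORDPRESS_DB_HOST: db              # service name, container port 3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: secret
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress:/var/www/html
  db:
    image: ghcr.io/linuxserver/mariadb
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=secret
    volumes:
      - db:/config
volumes:
  db:
  wordpress: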
OK, learned something today, like every day.
Better to have such installations all nicely separated in different networks, and also better not to use the same container names, such as db. Better to separate them like DB-WP1, DB-WP2, etc.
In my setup I couldn't see any reason why they should interfere with each other, but doing the above won't harm anything at all.
You should create a network:
version: '2.1'
services:
wordpress:
image: wordpress
networks:
- db_net
restart: always
ports:
- 8882:80
environment:
PUID: 1000
PGID: 1000
      WORDPRESS_DB_HOST: db   # service name on db_net, container port 3306 (not the host address)
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: secret
WORDPRESS_DB_NAME: wordpress
volumes:
- wordpress:/var/www/html
db:
image: ghcr.io/linuxserver/mariadb
networks:
- db_net
environment:
- PUID=1000
- PGID=1000
- MYSQL_ROOT_PASSWORD=secret
- TZ=Europe/Berlin
- MYSQL_DATABASE=wordpress
- MYSQL_USER=wordpress
- MYSQL_PASSWORD=secret #Must match the above password
volumes:
- db:/config
ports:
- 3382:3306
restart: unless-stopped
volumes:
db:
wordpress:
networks:
db_net:
driver: bridge
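With db_net in place, the 3382:3306 mapping only matters for reaching the database from the host; inside the network the app connects to db on the container port 3306, as in the previous answer.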
I have a SQL Server instance running inside a Docker container. It is set up via a docker-compose file and tied to my .NET Core API:
version: '3.4'
services:
appdb:
image: mcr.microsoft.com/mssql/server
container_name: ${SQL_SERVER_CONTAINER_NAME}
restart: unless-stopped
ports:
- "${SQL_SERVER_HOST_PORT}:1433"
environment:
SA_PASSWORD: ${SQL_SERVER_USER_PASSWORD}
ACCEPT_EULA: "Y"
volumes:
- ./../../.microsoft/data:/var/opt/mssql/data
- ./../../.microsoft/log:/var/opt/mssql/log
- ./../../.microsoft/secrets:/var/opt/mssql/secrets
demoapi.website:
image: ${DOCKER_REGISTRY-}demoapiwebsite
build:
context: .
dockerfile: DemoAPI.Website/Dockerfile
container_name: ${APP_NAME}-app
environment:
- ASPNETCORE_ENVIRONMENT=Development
- "ConnectionStrings__AppConnectionString=Server=${SQL_SERVER_CONTAINER_NAME},${SQL_SERVER_HOST_PORT}; Initial Catalog=${APP_NAME};User ID=${SQL_SERVER_USER_ID};Password=${SQL_SERVER_USER_PASSWORD}"
depends_on:
- appdb
ports:
- "8000:80"
I also have a bunch of SQL script files inside my .NET Core API project. These scripts define how to create and seed the database when the project is started.
What I would like to do is execute these scripts once the dockerized SQL Server is up and running. Is it possible to do this via the docker-compose file?
I see a lot of tutorials that use the actual .NET app and do this during app startup with EF migrations, but I would prefer to get this done via docker-compose, if possible.
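Compose itself has no post-start hook for this, but one common pattern is a one-shot helper service that waits for SQL Server and then runs the scripts with sqlcmd. A sketch under assumptions (the dbinit service name and the Scripts path are made up; adjust them to where your .sql files actually live):
  dbinit:
    image: mcr.microsoft.com/mssql-tools        # small image that ships sqlcmd
    depends_on:
      - appdb
    environment:
      SA_PASSWORD: ${SQL_SERVER_USER_PASSWORD}
    volumes:
      - ./DemoAPI.Website/Scripts:/scripts:ro   # hypothetical location of the .sql files
    # poll until SQL Server accepts logins, then run every script in order;
    # "appdb" is the service name, resolvable on the default compose network
    command: >
      /bin/bash -c 'until /opt/mssql-tools/bin/sqlcmd -S appdb -U sa -P "$$SA_PASSWORD" -Q "SELECT 1" > /dev/null 2>&1;
      do sleep 2; done;
      for f in /scripts/*.sql; do /opt/mssql-tools/bin/sqlcmd -S appdb -U sa -P "$$SA_PASSWORD" -i "$$f"; done'
Because dbinit exits when the scripts finish, it behaves like a run-once initialization step each time the stack comes up.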
So, I'm stuck on an issue with files stored on a server that I am not able to display in the frontend.
My project is:
React + Redux using Docker
The React app is complete, i.e., there's an API folder for the backend (react/redux), a CLIENT folder for the frontend (react/libraries) and MongoDB as the DB.
Docker Compose brings these 3 parts, API, CLIENT and MONGO, up together as a single stack.
So, in the frontend, the user is able to select an image as an avatar, and this image is then sent through the layers and saved in a specific folder (NOT BUILD/PUBLIC etc.) inside the API container. It's possible to remove/delete and re-select it. Everything's working fine!
The issue is displaying this image in the frontend. The avatar component uses an IMAGE SRC to display it, but I can't find a valid URL the frontend can use TO SEE that image file saved on the API/server side.
Since it's inside a container, I tried all the possibilities I could find in the Docker documentation... I think the solution lies in the NETWORK docker-compose option, but even so I couldn't make it work.
Docker Compose File:
version: '3.8'
services:
client:
build: ./client
stdin_open: true
image: my-client
restart: always
ports:
- "3000:3000"
volumes:
- ./client:/client
- /client/node_modules
depends_on:
- api
networks:
mynetwork:
ipv4_address: 172.19.0.9
api:
build: ./api
image: my-api
restart: always
ports:
- "3003:3003"
volumes:
- ./api:/api
- logs:/api/logs
- /api/node_modules
depends_on:
- mongo
networks:
mynetwork:
ipv4_address: 172.19.0.10
mongo:
image: mongo
restart: always
ports:
- "27017:27017"
volumes:
- mongo_data:/data/db
networks:
- mynetwork
volumes:
mongo_data:
logs:
networks:
mynetwork:
driver: bridge
ipam:
config:
- subnet: "172.19.0.0/24"
To summarize, there's a folder on the API side with images/files and I want to reference them as in
<img src="mynetwork:3003/imagefolder/imagefile.png"> or something like that...
I can't believe I have to use this other solution...Another Stackoverflow Reply
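For what it's worth, the core issue here is that the img src is resolved by the browser, which runs on the host, outside mynetwork, so Docker network names and container IPs mean nothing to it. The URL has to go through a port published to the host. A sketch, assuming the API serves the folder statically (imagefolder is the hypothetical directory from the question):
  api:
    ports:
      - "3003:3003"   # already published; this is the address the browser can actually reach
    # the avatar is then referenced from the client as:
    #   <img src="http://localhost:3003/imagefolder/imagefile.png">
    # (replace localhost with the server's address when the browser runs on another machine)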
In golang-migrate's documentation, it is stated that you can run this command to run all the migrations in one folder.
docker run -v {{ migration dir }}:/migrations --network host migrate/migrate \
    -path=/migrations/ -database postgres://localhost:5432/database up 2
How would you do this to fit the syntax of the new docker-compose, which discourages the use of --network?
And more importantly: how would you connect to a database in another container instead of one running on your localhost?
Adding this to your docker-compose.yml will do the trick:
db:
image: postgres
networks:
new:
aliases:
- database
environment:
POSTGRES_DB: mydbname
POSTGRES_USER: mydbuser
POSTGRES_PASSWORD: mydbpwd
ports:
- "5432"
migrate:
image: migrate/migrate
networks:
- new
volumes:
- .:/migrations
    command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
links:
- db
networks:
new:
Instead of using the --network host option of docker run, you set up a network called new. All the services inside that network gain access to each other through a defined alias (in the example above, you can access the db service through the database alias). Then you can use that alias just like you would use localhost, that is, in place of an IP address. That explains this connection string:
"postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable"
The answer provided by @Federico worked for me at the beginning; nevertheless, I realised that I was getting a connect: connection refused the first time docker-compose was run in a brand-new environment, but not the second time. This means that the migrate container runs before the database is ready to process operations. Since migrate/migrate from Docker Hub runs the migration command whenever it is started, it's not possible to add a wait-for-it.sh script to wait for the db to be ready. So we have to add depends_on and healthcheck tags to manage the execution order.
So this is my docker-compose file:
version: '3.3'
services:
db:
image: postgres
networks:
new:
aliases:
- database
environment:
POSTGRES_DB: mydbname
POSTGRES_USER: mydbuser
POSTGRES_PASSWORD: mydbpwd
ports:
- "5432"
healthcheck:
test: pg_isready -U mydbuser -d mydbname
interval: 10s
timeout: 3s
retries: 5
migrate:
image: migrate/migrate
networks:
- new
volumes:
- .:/migrations
    command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
links:
- db
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck (Compose v2 / file format 2.1+), not just container start
networks:
new:
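Note that it's the long-form depends_on with condition: service_healthy that actually makes Compose wait for the healthcheck to pass; the short - db form only waits for the db container to be started.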
As of Compose file format version 2, you do not have to set up a network yourself.
As stated in the Docker networking documentation: "By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name."
So in your case you could do something like:
version: '3.8'
services:
  # note: this service name ("databaseservicename") is what we will use
  # instead of localhost, since Compose makes each service reachable at
  # its service name. For example, if another container in the same
  # compose file wanted to access this service on port 2000, it would
  # use databaseservicename:2000
databaseservicename:
image: postgres:13.3-alpine
restart: always
ports:
- "5432"
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: username
POSTGRES_DB: database
volumes:
- pgdata:/var/lib/postgresql/data
  # likewise, if another container in the same compose file wanted to
  # access the migrate container on, say, port 1000, it would use migrate:1000
migrate:
image: migrate/migrate
depends_on:
- databaseservicename
volumes:
      - path/to/your/migration/folder/on/local/computer:/database
    # here, instead of localhost, we use databaseservicename as the host, since that is the name we gave the postgres service
command:
[ "-path", "/database", "-database", "postgres://databaseusername:databasepassword#databaseservicename:5432/database?sslmode=disable", "up" ]
volumes:
pgdata:
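One usage note: because the migrate service exits after applying the migrations, docker compose up runs them once at stack startup; to re-run them on demand, use docker compose run --rm migrate.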