I am trying to deploy a WordPress instance on my Pi using Docker. Unfortunately I am receiving an error saying the app cannot establish a DB connection.
All containers run in the bridge network. I am exposing port 80 of the app on 8882 and port 3306 of the DB on 3382.
A second WordPress installation on ports 8881 (app) and 3381 (DB) in the same network is working perfectly. Where is the flaw in my setup?
version: '2.1'
services:
  wordpress:
    image: wordpress
    network_mode: bridge
    restart: always
    ports:
      - 8882:80
    environment:
      PUID: 1000
      PGID: 1000
      WORDPRESS_DB_HOST: [addr. of PI]:3382
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: secret
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress:/var/www/html
  db:
    image: ghcr.io/linuxserver/mariadb
    network_mode: bridge
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=secret
      - TZ=Europe/Berlin
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=secret # Must match the above password
    volumes:
      - db:/config
    ports:
      - 3382:3306
    restart: unless-stopped
volumes:
  db:
  wordpress:
When containers share a user-defined bridge network, they can talk to each other using their service names as hostnames. In your case, the wordpress container can reach the database container at the hostname db. Since the traffic does not go via the host, the port mappings are irrelevant and you just connect on the container port 3306. (Note that network_mode: bridge puts both containers on Docker's default bridge, where name resolution does not work; drop those lines so Compose attaches both services to the default network it creates for the project.)
So if you change
WORDPRESS_DB_HOST: [addr. of PI]:3382
to
WORDPRESS_DB_HOST: db
it should work.
You can remove the port mapping on the database container if you don't need to access the database directly from the host.
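A minimal sketch of the relevant changes (assuming the service names from the compose file above, with network_mode removed so the Compose project network provides name resolution):

```yaml
# Sketch: both services on the Compose default network (no network_mode),
# so the app reaches the database at the service name "db" on port 3306.
services:
  wordpress:
    image: wordpress
    ports:
      - 8882:80              # host access to the app only
    environment:
      WORDPRESS_DB_HOST: db  # service name; container port 3306 is implied
  db:
    image: ghcr.io/linuxserver/mariadb
    # no ports mapping needed unless the host must reach the DB directly
```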
OK, learned something today, like every day.
It's better to have such installations nicely separated in different networks, and also better not to reuse the same service names, such as db. Better to separate them like db-wp1, db-wp2, etc.
In my setup I couldn't see any reason why they should interfere with each other, but doing the above won't harm anything at all.
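A sketch of that separation (hypothetical names; one user-defined network per WordPress stack, so the stacks cannot see each other):

```yaml
# Sketch: each stack gets its own network and its own db service name.
services:
  wp1:
    image: wordpress
    networks: [wp1_net]
    environment:
      WORDPRESS_DB_HOST: db-wp1
  db-wp1:
    image: ghcr.io/linuxserver/mariadb
    networks: [wp1_net]
  wp2:
    image: wordpress
    networks: [wp2_net]
    environment:
      WORDPRESS_DB_HOST: db-wp2
  db-wp2:
    image: ghcr.io/linuxserver/mariadb
    networks: [wp2_net]
networks:
  wp1_net:
  wp2_net:
```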
You should create a network:
version: '2.1'
services:
  wordpress:
    image: wordpress
    networks:
      - db_net
    restart: always
    ports:
      - 8882:80
    environment:
      PUID: 1000
      PGID: 1000
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: secret
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress:/var/www/html
  db:
    image: ghcr.io/linuxserver/mariadb
    networks:
      - db_net
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=secret
      - TZ=Europe/Berlin
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=secret # Must match the above password
    volumes:
      - db:/config
    ports:
      - 3382:3306
    restart: unless-stopped
volumes:
  db:
  wordpress:
networks:
  db_net:
    driver: bridge
I'm new to Docker and containers in general. I'm trying to containerize a simple MERN-based todo list application. Locally on my PC, I can successfully send HTTP POST requests from my React frontend to my Node.js/Express backend and create a new todo item. I use the 'proxy' field in my client folder's package.json file, as shown below:
React starts up on port 3000, my API server starts up on 3001, and with the proxy field defined, all is good locally.
My issue arises when I containerize the three services (i.e. React, API server, and MongoDB). When I try to make the same fetch POST request, I receive the following console error:
I will provide the code for my docker-compose file; perhaps it is useful for finding a solution?
version: '3.7'
services:
  client:
    depends_on:
      - server
    build:
      context: ./client
      dockerfile: Dockerfile
    image: jlcomp03/rajant-client
    container_name: container_client
    command: npm start
    volumes:
      - ./client/src/:/usr/app/src
      - ./client/public:/usr/app/public
      # - /usr/app/node_modules
    ports:
      - "3000:3000"
    networks:
      - frontend
    stdin_open: true
    tty: true
  server:
    depends_on:
      - mongo
    build:
      context: ./server
      dockerfile: Dockerfile
    image: jlcomp03/rajant-server
    container_name: container_server
    # command: /usr/src/app/node_modules/.bin/nodemon server.js
    volumes:
      - ./server/src:/usr/app/src
      # - /usr/src/app/node_modules
    ports:
      - "3001:3001"
    links:
      - mongo
    environment:
      - NODE_ENV=development
      - MONGODB_CONNSTRING='mongodb://container_mongodb:27017/todo_db'
    networks:
      - frontend
      - backend
  mongo:
    image: mongo
    restart: always
    container_name: container_mongodb
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
    networks:
      - backend
volumes:
  mongo-data:
    driver: local
  node_modules:
  web-root:
    driver: local
networks:
  backend:
    driver: bridge
  frontend:
My intuition tells me the issue lies in some configuration parameter I am not addressing in my docker-compose.yml file. Please help!
Your proxy config won't work with containers because of its use of localhost.
The Docker bridge network docs provide some insight why:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I'd suggest creating your own bridge network and communicating via container name or alias.
{
  "proxy": "http://container_server:3001"
}
Another option is to use http://host.docker.internal:3001.
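A sketch of the network side of that suggestion (assuming the service and container names from the compose file above):

```yaml
# Sketch: put client and server on one user-defined bridge network so the
# name "container_server" (or the service name "server") resolves via DNS.
services:
  client:
    networks: [app_net]
  server:
    container_name: container_server
    networks: [app_net]
networks:
  app_net:
    driver: bridge
```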
In golang-migrate's documentation, it is stated that you can run this command to run all the migrations in one folder.
docker run -v {{ migration dir }}:/migrations --network host migrate/migrate \
    -path=/migrations/ -database postgres://localhost:5432/database up 2
How would you do this to fit the syntax of the new docker-compose, which discourages the use of --network?
And, more importantly: how would you connect to a database in another container instead of one running on your localhost?
Adding this to your docker-compose.yml will do the trick:
db:
  image: postgres
  networks:
    new:
      aliases:
        - database
  environment:
    POSTGRES_DB: mydbname
    POSTGRES_USER: mydbuser
    POSTGRES_PASSWORD: mydbpwd
  ports:
    - "5432"
migrate:
  image: migrate/migrate
  networks:
    - new
  volumes:
    - .:/migrations
  command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
  links:
    - db
networks:
  new:
Instead of using the --network host option of docker run, you set up a network called new. All the services inside that network gain access to each other through a defined alias (in the above example, you can access the db service through the database alias). Then you can use that alias just like you would use localhost, that is, in place of an IP address. That explains this connection string:
"postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable"
The answer provided by @Federico worked for me at the beginning; nevertheless, I realised that I was getting a connect: connection refused the first time docker-compose was run in a brand-new environment, but not the second time. This means that the migrate container runs before the database is ready to process operations. Since migrate/migrate from Docker Hub runs the migration command whenever it starts, it's not possible to add a wait_for_it.sh script to wait for the db to be ready. So we have to add depends_on and healthcheck entries to manage the execution order.
So this is my docker-compose file:
version: '3.3'
services:
  db:
    image: postgres
    networks:
      new:
        aliases:
          - database
    environment:
      POSTGRES_DB: mydbname
      POSTGRES_USER: mydbuser
      POSTGRES_PASSWORD: mydbpwd
    ports:
      - "5432"
    healthcheck:
      test: pg_isready -U mydbuser -d mydbname
      interval: 10s
      timeout: 3s
      retries: 5
  migrate:
    image: migrate/migrate
    networks:
      - new
    volumes:
      - .:/migrations
    command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
    links:
      - db
    depends_on:
      - db
networks:
  new:
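Note that the short depends_on form only waits for the db container to start, not to become healthy. With a Compose version that supports the long form, the healthcheck can actually gate the migrate container (a sketch, assuming the services above):

```yaml
# Sketch: long-form depends_on ties startup order to the db healthcheck,
# so migrate only starts once pg_isready succeeds.
  migrate:
    depends_on:
      db:
        condition: service_healthy
```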
As of Compose file format version 2 you do not have to set up a network.
As stated in the Docker networking documentation: by default, Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
So in your case you could do something like:
version: '3.8'
services:
  # Note: this service name is what we use instead of localhost when
  # running migrate, as Compose assigns the service name as the host.
  # For example, if another container in the same compose file wanted to
  # access this service on port 2000, it would use databaseservicename:2000.
  databaseservicename:
    image: postgres:13.3-alpine
    restart: always
    ports:
      - "5432"
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: username
      POSTGRES_DB: database
    volumes:
      - pgdata:/var/lib/postgresql/data
  # If another container in the same compose file wanted to access the
  # migrate container on, let's say, port 1000, it would use migrate:1000.
  migrate:
    image: migrate/migrate
    depends_on:
      - databaseservicename
    volumes:
      - path/to/your/migration/folder/in/local/computer:/database
    # Here, instead of localhost, we use databaseservicename as the host,
    # since that is the name we gave the postgres service.
    command:
      [ "-path", "/database", "-database", "postgres://databaseusername:databasepassword@databaseservicename:5432/database?sslmode=disable", "up" ]
volumes:
  pgdata:
I have been researching how to connect multiple docker containers in the same compose file to a database (MySQL/MariaDB) on the local host. Currently, the database is containerized for development but production requires a separate database. Eventually, the database will be deployed to AWS or Azure.
There are lots of similar questions on SO, but none that seem to address this particular situation.
Given the existing docker-compose.yml
version: '3.1'
services:
  db:
    build:
    image: mariadb:10.3
    volumes:
      - "~/data/lib/mysql:/var/lib/mysql:Z"
  api:
    image: t-api:latest
    depends_on:
      - db
  web:
    image: t-web:latest
  scan:
    image: t-scan:latest
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: t-proxy
    depends_on:
      - web
    ports:
      - 80:80
All these services are reverse-proxied behind nginx, with both the api and scan services requiring access to the database. There are other services requiring database access, not shown for simplicity.
The production compose file would be:
version: '3.1'
services:
  api:
    image: t-api:latest
    depends_on:
      - db
  web:
    image: t-web:latest
  scan:
    image: t-scan:latest
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: t-proxy
    depends_on:
      - web
    ports:
      - 80:80
If there was a single container requiring database access, I could just open up port 3306:3306, but that won't work for multiple containers.
Splitting up the containers breaks the reverse proxy and adds complexity to deployment and management. I've tried extra_hosts
extra_hosts:
  - "myhost:xx.xx.xx.xx"
but this generates EAI_AGAIN DNS errors, which is strange because you can ping the host from inside the containers. I realize this may not be possible.
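One option worth trying (a sketch, assuming Docker 20.10+ where the special host-gateway value is available): map a stable name to the host's gateway address in each service that needs the host database, and point the connection strings at that name instead of an IP.

```yaml
# Sketch: every service that needs the host's MariaDB gets the same alias;
# no 3306 port mapping on any container is required.
services:
  api:
    image: t-api:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    # the app then connects to host.docker.internal:3306
  scan:
    image: t-scan:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

Note that the database on the host must listen on an address reachable from the Docker bridge (not only 127.0.0.1) for this to work.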
I've developed an Angular app that communicates with a UWSGI Flask API through nginx. Currently I have 3 containers (Angular [web_admin], API [api_admin], nginx [nginx]).
When I run it on my development machine, the communication works fine. The Angular requests go through the URL http://localhost:5000 and the API responds well; everything works.
But when I deployed it to my production server, I noticed that the application does not work, because port 5000 is not open in my firewall.
My question is kind of simple: how do I make the Angular container call the API container through the internal network, instead of calling it externally?
version: '2'
services:
  data:
    build: data
  neo4j:
    image: neo4j:3.0
    networks:
      - back
    volumes_from:
      - data
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    volumes:
      - /var/diariooficial/neo4j/data:/data
  web_admin:
    build: frontend/web
    networks:
      - front
      - back
    ports:
      - "8001:8001"
    depends_on:
      - api_admin
    links:
      - "api_admin:api_admin"
    volumes:
      - /var/diariooficial/upload/diario_oficial/:/var/diariooficial/upload/diario_oficial/
  api_admin:
    build: backend/api
    volumes_from:
      - data
    networks:
      - back
    ports:
      - "5000:5000"
    depends_on:
      - neo4j
      - neo4jtest
    volumes:
      - /var/diariooficial/upload/diario_oficial/:/var/diariooficial/upload/diario_oficial/
  nginx:
    build: nginx
    volumes_from:
      - data
    networks:
      - back
      - front
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/diariooficial/log/nginx:/var/log/nginx
    depends_on:
      - api_admin
      - web_admin
networks:
  front:
  back:
Links create DNS names on the network for the services. You should have the web_admin service talk to api_admin:5000 instead of localhost:5000. The api_admin DNS name will resolve to the IP address of one of the api_admin service's containers.
See https://docs.docker.com/compose/networking/ for an explanation, specifically:
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
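A sketch of the change (API_URL is a hypothetical variable name; the point is that the hostname is the service name, not localhost):

```yaml
# Sketch: web_admin reaches the API via Docker's embedded DNS on the
# shared networks, so no firewall port for 5000 is needed.
  web_admin:
    build: frontend/web
    environment:
      - API_URL=http://api_admin:5000
```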
My application has three containers: db, frontend-web (React), and backend-api.
How can I get my backend-api address in frontend-web?
Here is my compose file
version: '2'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  web:
    build: .
    stdin_open: true
    volumes:
      - .:/usr/src/app
    ports:
      - "3000:3000"
    environment:
      - API_URL=http://api:8080/
    links:
      - api
    depends_on:
      - api
  api:
    build: ./api
    stdin_open: true
    volumes:
      - ./api:/usr/src/app
    ports:
      - "8080:3000"
    links:
      - db
    depends_on:
      - db
I can't reach the API, either at the hostname api or via process.env.API_URL.
Add the container name to the service description as follows:
api:
  build: ./api
  container_name: api
  stdin_open: true
  volumes:
    - ./api:/usr/src/app
  ports:
    - "8080:3000"
  links:
    - db
  depends_on:
    - db
You can then use the container name as a host name to connect to. See https://docs.docker.com/compose/compose-file/#/containername
I am assuming that the server in the web container just serves the static HTML and does not act as a proxy for the api container's server.
So, once you have mapped the ports to the host machine, you can use the host machine's name/IP to find the api server.
If your host machine's name is app.myserver.dev, you can use the config below in your API_URL env var and Docker will do the work for you:
web:
  build: .
  stdin_open: true
  volumes:
    - .:/usr/src/app
  ports:
    - "3000:3000"
  environment:
    - API_URL=http://app.myserver.dev:8080/
  links:
    - api
  depends_on:
    - api