Connect docker-compose to external database

I have a setup of 4 containers that need to talk to each other, and two of those need to connect to an external database.
I started working with Compose and linked everything together.
The containers are able to talk to each other without many issues; however, they can't connect to the external database.
The external DB is up and running and I can easily connect to it via shell.
The docker-compose file looks like this:
version: "3"
services:
bridge:
# version => 2.1.4
build: ./lora-gateway-bridge
ports:
- "1680/udp:1700/udp"
links:
- emqtt
- redis
environment:
- MQTT_SERVER=tcp://emqtt:1883
networks:
- external
restart: unless-stopped
loraserver:
# version => 0.16.1
build: ./loraserver
links:
- redis
- emqtt
- lora-app-server
environment:
- NET_ID=010203
- REDIS_URL=redis://redis:6379
- DB_AUTOMIGRATE=true
- POSTGRES_DSN=${SQL_STRING} ###<- connection string
- BAND=EU_863_870
ports:
- "8000:8000"
restart: unless-stopped
lora-app-server:
build: ./lora-app-server
# version => 0.8.0
links:
- emqtt
- redis
volumes:
- "/opt/lora-app-server/certs:/opt/lora-app-server/certs"
environment:
- POSTGRES_DSN=${SQL_STRING} ### <- connection string
- REDIS_URL=redis://redis:6379
- NS_SERVER=loraserver:8000
- MQTT_SERVER=tcp://emqtt:1883
ports:
- "8001:8001"
- "443:8080"
restart: unless-stopped
redis:
image: redis:3.0.7-alpine
restart: unless-stopped
emqtt:
image: erlio/docker-vernemq:latest
volumes:
- ./emqttd/usernames/vmq.passwd:/etc/vernemq/vmq.passwd
ports:
- "1883:1883"
- "18083:18083"
restart: unless-stopped
It seems like they are unable to find the host where the database is running.
All the examples I see talk about a database inside the docker-compose file, but I haven't quite grasped how to connect a container to an external service.

From your code I see that you need to connect to an external PostgreSQL server.
Networks
Being able to discover a resource on the network depends on which network is being used.
There is a set of predefined network types that simplify the setup, and there is also the option to create your own networks and add containers to them.
You have a number of types to choose from, listed here from most isolated to least:
closed containers = only the loopback interface inside the container, with no interaction with the container virtual network or with the host network
bridged containers = your containers are connected through a default bridge network, which is in turn connected to the host network
joined containers = your containers share the same network, so there is no isolation at that level; they also have a connection to the host network
open containers = full access to the host network
The default type is bridge, so all your containers use one default bridge network.
In docker-compose.yml you can choose a network type with the network_mode option.
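For illustration only (you do not need it for your setup), a single service could opt out of the bridge and share the host's network stack; the service name here is hypothetical:
services:
  myservice:
    build: .
    network_mode: "host"   # joins the host network directly, so no port mapping is needed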
Because you haven't defined any network and haven't changed the network_mode, you get the default: bridge.
This means that your containers join the default bridge network, where every container has access to the other containers and to the host network.
Therefore your problem does not reside with the container network; you should instead check whether PostgreSQL accepts remote connections. By default PostgreSQL accepts connections from localhost only, so any other remote access rules must be configured explicitly.
You can configure your PostgreSQL instance by following this answer or this blog post.
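As a rough sketch of what that configuration involves (the subnet and authentication method below are assumptions for illustration), PostgreSQL has to listen on a non-loopback interface in postgresql.conf and the client's address range has to be allowed in pg_hba.conf:
# postgresql.conf
listen_addresses = '*'                        # listen on all interfaces instead of localhost only

# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS          METHOD
host    all       all   172.17.0.0/16    md5   # e.g. the default docker bridge subnet; adjust to your network
Reload or restart PostgreSQL after editing these files.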
Inspect networks
Following are some commands that might be useful in your scenario:
list your available networks with: docker network ls
inspect which containers use the bridge network: docker network inspect --format "{{ json .Containers }}" bridge
inspect container networks: docker inspect --format "{{ json .NetworkSettings.Networks }}" myContainer
Testing connection
In order to test the connection you can create a container that runs psql and tries to connect to your remote PostgreSQL server, thus isolating your case to a minimal test environment.
Dockerfile can be:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y postgresql-client
ENV PGPASSWORD myPassword
CMD psql --host=10.100.100.123 --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';"
Then you can build the image with: docker build -t test-connection .
And finally you can run the container with: docker run --rm test-connection:latest
If your connection succeeds then SUCCESS !!! will be printed.
Note: connecting with localhost as in CMD psql --host=localhost --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';" will not work, because localhost from within the container is the container itself, not the main host. Therefore the address needs to be one that is discoverable from the container.
Note: if you would start your container as a closed container using docker run --rm --net none test-connection:latest, there will be no other network interface than loopback and the connection will fail. Just to show how choosing a network may influence the outcome.

Related

Docker connect to localhost on another container

I have 2 containers, one running on localhost:3001 and the other on localhost:3003.
Container localhost:3001 hits an endpoint that's running in the container localhost:3003,
but gets a (failed) net::ERR_EMPTY_RESPONSE.
I have tried multiple ways to get them to connect. Please don't tell me to set up a docker compose file; I should be able to spin up 2 containers and have them communicate on their ports.
I also tried adding a network:
docker network create my-network
It shows up as: ea26d2eaf604 my-network bridge local
Then I spin up both containers with the flag --net my-network.
When container localhost:3001 hits an endpoint that's running in the container localhost:3003, I again get a (failed) net::ERR_EMPTY_RESPONSE.
This is driving me crazy. What am I doing wrong here?
I have even used a shell to run a curl from localhost:3001 to localhost:3003 and I get
curl: (7) Failed to connect to localhost port 3003 after 0 ms: Connection refused
When you place both containers on the same network, they can reference each other by their service/container name (not localhost, since localhost is the local loopback of each container). Using a dedicated network is the ideal way to connect from container to container.
I suspect what is happening in your case is that you are exposing 3001 from container A on the host, and 3003 from container B on the host. These ports are then open on the host, and from the host you can use localhost. From the containers, however, you should use host.docker.internal, which is a reference to the host machine, instead of localhost.
Here is some further reading about host.docker.internal
https://docs.docker.com/desktop/networking/#use-cases-and-workarounds-for-all-platforms
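host.docker.internal is available out of the box on Docker Desktop; on a plain Linux engine (Docker 20.10+) you can add it yourself with the special host-gateway value. A minimal sketch, with a hypothetical service name:
services:
  app:
    image: node:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"   # maps the name to the host's gateway IP inside the container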
Update: docker-compose example
version: '3.1'
services:
  express1:
    image: node:latest
    networks:
      - my-private-network
    working_dir: /express
    command: node express-entrypoint.js
    volumes:
      - ./path-to-project1:/express
  express2:
    image: node:latest
    networks:
      - my-private-network
    working_dir: /express
    command: node express-entrypoint.js
    volumes:
      - ./path-to-project2:/express
networks:
  my-private-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.27.0/24
Then you can reference each container by its service name, e.g. from express1 you can ping express2.
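For example, assuming both containers are up and the app in express2 listens on port 3003 (as in the question), you could check name resolution and connectivity from inside express1 like this:
docker compose exec express1 getent hosts express2          # should resolve to express2's IP on my-private-network
docker compose exec express1 curl -s http://express2:3003   # call the other service by name on its container port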

MSDTC configuration issues with SQL Server in Docker (Linux) and Windows Host

I'm migrating a local SQL Server development database to run in a Linux docker container (on the same dev machine). When running my integration tests in Visual Studio 2019 on Windows, I receive MSDTC errors:
Exception thrown: 'System.Transactions.TransactionManagerCommunicationException' in System.Data.dll
An exception of type 'System.Transactions.TransactionManagerCommunicationException' occurred in System.Data.dll but was not handled in user code
Communication with the underlying transaction manager has failed.
Here's my latest iteration of SQL Server in my docker-compose:
services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2019-latest
    container_name: SqlServer
    restart: always
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=verySecretPassword
      - MSSQL_RPC_PORT=13500
      - MSSQL_DTC_TCP_PORT=51000
    ports:
      - "1401:1433"
      - "135:13500"
      - "51000:51000"
    volumes:
      - sqldata:/var/opt/mssql
I've tried all sorts of ways to adjust the RPC port to get this working. This is the main MS article. I've tried port 135:135, but it gives the same error. The note at the bottom of the article appears to be related to my issue.
For SQL Server outside of a container or for non-root containers, a different ephemeral port, such as 13500, must be used in the container and traffic to port 135 must then be routed to that port. You would also need to configure port routing rules within the container from the container port 135 to the ephemeral port.
Also, if you decide to map the container's port 135 to a different port on the host, such as 13500, then you have to configure port routing on the host. This enables the docker container to participate in distributed transactions with the host and with other external servers.
SQL Server 2019 containers run as a non-root user. I've tried port routing using netsh in Windows, and the MS article also links to instructions for port forwarding in Ubuntu, which I'm unable to follow even when logged in as root in the SQL Server container: iptables is not installed, and it won't let me apt-get install it. I also updated the DTC options in Windows to make them as open as possible, but it had no effect. Not sure what the secret sauce is. Hoping someone else has a similar setup that works.
Thanks for the tip on the MSDTC config; I got mine working with this compose:
version: '3.4'
services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2019-GA-ubuntu-16.04
    container_name: sqlserver
    user: root
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=[yourPwd]
      - MSSQL_RPC_PORT=135
      - MSSQL_DTC_TCP_PORT=51000
    ports:
      - "1433:1433"
      - "135:135"
      - "51000:51000"
    volumes:
      - D:\DockerVolumes\sqlserver:/var/opt/mssql/data
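If you would rather keep the non-root container and the ephemeral RPC port from the question, the host-side routing mentioned in the quoted note can be sketched with a netsh port proxy on the Windows host; the addresses and ports are illustrative, and this assumes nothing else (such as the host's own RPC endpoint mapper) is already bound to the listening port:
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=135 connectaddress=127.0.0.1 connectport=13500
netsh interface portproxy show v4tov4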

How can I allow connections by specifying docker-compose host names in postgres's pg_hba.conf file?

I'm trying to allow a connection from one Docker container to a postgres container by specifying the host name of the client container in the server's pg_hba.conf file. Postgres's documentation indicates that a host name can be specified, rather than an IP address. Since I'm using Docker Compose to start the two containers, they should be accessible to each other by container name using Docker Compose's DNS. I don't want to open up all IP addresses for security reasons, and when I eventually add access for additional containers, it will be much easier to just specify the container name in the pg_hba.conf file rather than assign static IP addresses to each of them. However, when I attempt to do this, it fails with a message such as this:
psql: FATAL: no pg_hba.conf entry for host "192.168.208.3", user "postgres", database "postgres", SSL off
Here's a minimum reproducible example of what I'm trying to do:
I use the following Docker Compose file:
version: '3'
services:
  postgres-server:
    image: postgres:9.4
    container_name: postgres-server
    ports:
      - "5432:5432"
    volumes:
      - "postgres-data:/var/lib/postgresql/data"
  postgres-client:
    image: postgres:9.4
    container_name: postgres-client
    depends_on:
      - postgres-server
volumes:
  postgres-data:
After running docker-compose up, I exec into the server container and modify the pg_hba.conf file in /var/lib/postgresql/data to look like this:
host all postgres postgres-client trust
I then restart the postgres server (docker-compose down then docker-compose up) and it loads the modified pg_hba.conf from the mounted volume.
I exec into the client container and attempt to connect to the postgres server:
docker exec -it postgres-client /bin/bash
psql -U postgres -h postgres-server postgres
This is where I get an error such as the following:
psql: FATAL: no pg_hba.conf entry for host "192.168.208.3", user "postgres", database "postgres", SSL off
I can't seem to find anything online that shows how to get this working. I've found examples where they just open up all or a range of IP addresses, but none where they get the use of a host name working. Here are some related questions and information:
https://www.postgresql.org/docs/9.4/auth-pg-hba-conf.htm
Allow docker container to connect to a local/host postgres database
https://dba.stackexchange.com/questions/212020/using-host-names-in-pg-hba-conf
Any ideas on how to get this working the way I would expect it to work using Docker Compose?
You need to add the fully qualified host name of the client container to pg_hba.conf.
host all postgres postgres-client.<network_name> trust
e.g:
host all postgres postgres-client.postgreshostresolution_default trust
If no network has been defined, network_name is <project_name>_default.
By default project_name is the name of the folder in which the docker-compose.yml resides.
To get the network names you may also call
docker inspect postgres-client | grep Networks -A1
or
docker network ls
to get a list of all docker networks currently defined on your docker host
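If you would rather not depend on the <project_name>_default suffix, you can give the default network a fixed name in the compose file (the name pgnet is just an example) and use that in pg_hba.conf; this is a sketch assuming compose file format 3.5 or later:
networks:
  default:
    name: pgnet    # fixed network name instead of <project_name>_default

# corresponding pg_hba.conf entry:
# host all postgres postgres-client.pgnet trust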

How to run golang-migrate with docker-compose?

In golang-migrate's documentation, it is stated that you can run this command to run all the migrations in one folder.
docker run -v {{ migration dir }}:/migrations --network host migrate/migrate \
    -path=/migrations/ -database postgres://localhost:5432/database up 2
How would you do this to fit the syntax of the new docker-compose, which discourages the use of --network?
And more importantly: How would you connect to a database in another container instead to one running in your localhost?
Adding this to your docker-compose.yml will do the trick:
db:
  image: postgres
  networks:
    new:
      aliases:
        - database
  environment:
    POSTGRES_DB: mydbname
    POSTGRES_USER: mydbuser
    POSTGRES_PASSWORD: mydbpwd
  ports:
    - "5432"
migrate:
  image: migrate/migrate
  networks:
    - new
  volumes:
    - .:/migrations
  command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
  links:
    - db
networks:
  new:
Instead of using the --network host option of docker run you set up a network called new. All the services inside that network gain access to each other through a defined alias (in the above example, you can access the db service through the database alias). Then, you can use that alias just like you would use localhost, that is, in place of an IP address. That explains this connection string:
"postgres://mydbuser:mydbpwd#database:5432/mydbname?sslmode=disable"
The answer provided by @Federico worked for me at the beginning; nevertheless, I realised that I was getting a connect: connection refused the first time docker-compose was run in a brand new environment, but not the second time. This means that the migrate container runs before the database is ready to process operations. Since migrate/migrate from Docker Hub runs the migration command whenever it starts, it's not possible to add a wait_for_it.sh script to wait for the db to be ready. So we have to add depends_on and healthcheck tags to manage the execution order.
So this is my docker-compose file:
version: '3.3'
services:
  db:
    image: postgres
    networks:
      new:
        aliases:
          - database
    environment:
      POSTGRES_DB: mydbname
      POSTGRES_USER: mydbuser
      POSTGRES_PASSWORD: mydbpwd
    ports:
      - "5432"
    healthcheck:
      test: pg_isready -U mydbuser -d mydbname
      interval: 10s
      timeout: 3s
      retries: 5
  migrate:
    image: migrate/migrate
    networks:
      - new
    volumes:
      - .:/migrations
    command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
    links:
      - db
    depends_on:
      - db
networks:
  new:
As of Compose file format version 2 you do not have to set up a network.
As stated in the Docker networking documentation: by default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
So in your case you could do something like:
version: '3.8'
services:
  # note: this service name is what we will use instead of localhost when
  # using migrate, since Compose assigns the service name as the host.
  # For example, if another container in the same compose file wanted to
  # access this service on port 2000, it would use databaseservicename:2000
  databaseservicename:
    image: postgres:13.3-alpine
    restart: always
    ports:
      - "5432"
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: username
      POSTGRES_DB: database
    volumes:
      - pgdata:/var/lib/postgresql/data
  # likewise, if another container in the same compose file wanted to access
  # the migrate container at, say, port 1000, it would use migrate:1000
  migrate:
    image: migrate/migrate
    depends_on:
      - databaseservicename
    volumes:
      - path/to/your/migration/folder/in/local/computer:/database
    # here, instead of localhost, we use databaseservicename as the host,
    # since that is the name we gave to the postgres service
    command:
      [ "-path", "/database", "-database", "postgres://databaseusername:databasepassword@databaseservicename:5432/database?sslmode=disable", "up" ]
volumes:
  pgdata:
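A minimal way to run this, assuming the file above is saved as docker-compose.yml in the current directory (use docker-compose instead of docker compose if you are on the classic CLI):
docker compose up -d databaseservicename   # start the database service
docker compose run --rm migrate            # run the migrations defined by the migrate service's command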

Routing to Different SQL Server Instances Running through Docker on Default Port

I can use Traefik for web sites since they send a Host header when they connect.
But I want to have multiple instances of SQL Server running through Docker which will be externally available (outside the Docker host, potentially outside the local network).
So, is there anything which allows connecting to different SQL Server instances running on the same Docker host WITHOUT having to give them different ports or external IP addresses, such that someone could access
sql01.docker.local,1433 AND sql02.docker.local,1433 from SQL tools?
Start Additional Question
Since there have been no replies, perhaps there is a way to have different instances like sql.docker.local\instance1 and sql.docker.local\instance2, though I imagine that may also not be possible.
End Additional Question
This is an example of the docker-compose file I was trying to use (before I realised that queries to SQL Server don't send a host header, or am I wrong about that?):
version: '2.1'
services:
  traefik:
    container_name: traefik
    image: stefanscherer/traefik-windows
    command: --docker.endpoint=tcp://172.28.80.1:2375 --logLevel=DEBUG
    ports:
      - "8080:8080"
      - "80:80"
      - "1433:1433"
    volumes:
      - ./runtest:C:/etc/traefik
      - C:/Users/mvukomanovic.admin/.docker:C:/etc/ssl
    networks:
      - default
    restart: unless-stopped
    labels:
      - "traefik.enable=false"
  whoami:
    image: stefanscherer/whoami
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.entryPoints=http"
      - "traefik.port=8080"
      - "traefik.frontend.rule=Host:whoami.docker.local"
    networks:
      - default
    restart: unless-stopped
  sql01:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
    hostname: sql01
    domainname: sql01.local
    labels:
      - "traefik.frontend.rule=Host:sql01.docker.local,sql01,sql01.local"
      - "traefik.frontend.entryPoints=mssql"
      - "traefik.port=1433"
      - "traefik.frontend.port=1433"
    networks:
      - default
    restart: unless-stopped
  sql02:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
    hostname: sql02
    domainname: sql02.local
    labels:
      - "traefik.frontend.rule=Host:sql02.docker.local,sql02,sql02.local"
      - "traefik.frontend.entryPoints=mssql"
      - "traefik.port=1433"
      - "traefik.frontend.port=1433"
    networks:
      - default
    restart: unless-stopped
networks:
  default:
    external:
      name: nat
As mentioned earlier, Traefik is not the right solution here since it is an HTTP-only load balancer.
I can think of 3 different ways to achieve what you want to do:
Use a TCP load balancer like HAProxy
Set up your servers in Docker Swarm mode (https://docs.docker.com/engine/swarm/), which will allow you to bind the same port with transparent routing between the services
Use a service discovery tool like Consul and SRV records, which can abstract away port numbers (this might be overkill for your needs and complex to set up)
You can't use Traefik, because it's an HTTP reverse proxy.
Your SQL Server listens and communicates via TCP.
I don't understand your final goal.
Why are you using 2 different SQL Servers?
It depends on what you want, but you may have two options:
Can you use a simpler solution? Different databases, roles and permissions for separation.
You can look into SQL Server Always On, but it doesn't seem easy to route queries to a specific server.
There is no "virtual" access to databases like for HTTP servers. So - no additional hostnames pointing to same IP can help you.
If you insist on port 1433 for all of your instances, then I see no way for you except to use two different external IPs.
If you were on a Linux box you may try some iptables magic, but it not elegant and would allow access to only one of your instances at any single moment. Windows may have iptables equivalent (I never heard of it) but still only-one-at-a-time you cannot escape.
My advice - use more than one port to expose your servers.
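A sketch of that last suggestion, keeping a single external IP and telling the instances apart by host port (the ports are illustrative, and the credentials environment variables are omitted just as in the question's compose file):
services:
  sql01:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "1433:1433"   # reachable as sql.docker.local,1433
  sql02:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "1434:1433"   # reachable as sql.docker.local,1434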
