Remotely connect to Database in Container from Host Machine?

I have PostgreSQL running inside a container.
I can connect to it from the command line by running:
docker-compose exec api-db psql -U {username} {databasename}
But how can I connect to it via a GUI app?
What should I use as the host?
Is it possible?

I suppose that by "GUI app" you mean a Postgres client like PgAdmin or your favorite IDE?
You just have to configure your client's data source with what we can see in your logs, namely:
host: localhost
port: 5432 (or the one mapped in your docker-compose.yml file)
database: it's not "monitor", since the server doesn't find it!
user and password: also in your docker-compose.yml. Apparently it's not "root", since the server doesn't find it.
Your docker-compose.yml is most likely at your project's root.
If the service is missing from your docker-compose.yml, you can add it with:
version: "3.7"
services:
main:
image: postgres:12.7-alpine
ports:
- "5432:5432"
environment:
- POSTGRES_USER=root
- POSTGRES_PASSWORD=password
- POSTGRES_DB=monitor
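Once the container is up with those values, you can sanity-check the credentials before pointing a GUI client at them; a minimal sketch from the host, assuming psql is installed there and the 5432:5432 mapping above:
# Connect from the host through the published port; psql will prompt for the password
psql -h localhost -p 5432 -U root monitor
# Or, without a local psql, go through the container itself (service name "main" as above)
docker-compose exec main psql -U root monitor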
Best regards.

Related

What configuration should I provide in docker-compose.yml to allow a spring boot docker container to connect to a remote database?

I am trying to start 2 containers with the following docker-compose file:
version: '2'
services:
  client-app:
    image: client-app:latest
    build: ./client-app/Dockerfile
    volumes:
      - ./client-app:/usr/src/app
    ports:
      - 3000:8000
  spring-boot-server:
    build: ./spring-boot-server/Dockerfile
    volumes:
      - ./spring-boot-server:/usr/src/app
    ports:
      - 7000:7000
The Spring Boot server tries to connect to a remote database server which is on another host and network. Docker successfully starts the client-app container but fails to start the spring-boot-server. This log shows that the server crashed because it failed to connect to the remote database:
2021-01-25 21:02:28.393 INFO 1 --- [main] com.zaxxer.hikari.HikariDataSource: HikariPool-1 - Starting...
2021-01-25 21:02:29.553 ERROR 1 --- [main] com.zaxxer.hikari.pool.HikariPool: HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
The Dockerfiles of both containers produce valid images from which I can run the containers manually. It looks like there are some default network restrictions on containers started by a compose file.
Docker compose version running on Ubuntu:
docker-compose version 1.8.0, build unknown
=============================================
FURTHER INVESTIGATIONS:
I had created a Dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y mysql-client
CMD mysql -A -P 3306 -h 8.8.8.8 --user=root --password=mypassword -e "SELECT VERSION()" mydatabase
along with a docker-compose.yml
version: '2'
services:
  test-remote-db-compose:
    build: .
    ports:
      - 8000:8000
to test the connectivity to the remote database on its own. The test passed.
The problem was mysteriously solved after doing this, a host machine reboot, and docker-compose up --build.
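As a side note, the same connectivity check can be run as a one-off command instead of a long-lived service; a small sketch using the test compose file above (the ports: mapping is not needed for an outbound test, and docker-compose will build the image if it is missing):
# Build if needed, run the mysql connectivity check once, and remove the container afterwards
docker-compose run --rm test-remote-db-compose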

Cannot access dockerized SQL Server instance

I have a simple docker-compose.yml which contains only two services, my-api and sql-server.
version: '3.0'
services:
  sql-server:
    image: mcr.microsoft.com/mssql/server:2019-latest
    hostname: sql-server
    container_name: sql-server
    ports:
      - "1433:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=MyPassword01*
      - MSSQL_PID=Express
  my-api:
    ports:
      - "8080:5000"
    depends_on:
      - sql-server
    # ... omitted for clarity
When I run docker-compose up --build, the containers are ready (I can verify with docker ps):
a7a47b89a17a mcr.microsoft.com/mssql/server:2019-latest "/opt/mssql/bin/perm…" 12 minutes ago Up 11 minutes 0.0.0.0:1433->1433/tcp sql-server
but I cannot access my SQL Server using SSMS.
SSMS login window:
Server Name: localhost,1433
Authentication: SQL Server Authentication
Username: sa
Password: MyPassword01*
Error:
Cannot connect to localhost,1433.
Login failed for user 'sa'. (.Net SqlClient Data Provider)
PS: I also tried with
Server Name: sql-server,1433
but I still cannot access it.
Execute the command below, which will display the container's IP address. Use this IP address instead of localhost:
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" mssqltrek-con1
I followed the link below to achieve the same:
https://www.sqlshack.com/sql-server-with-a-docker-container-on-windows-server-2016/
I guess you have to debug further. The first thing you can do is open a bash shell inside the container and try to connect to your SQL Server database from within the container.
docker exec -it sql-server "bash"
Once inside the container's bash, connect with sqlcmd:
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "<YourNewStrong#Passw0rd>"
If you fail to connect inside the container, then I have to assume your SA password was somehow set differently when you set up SQL Server. But if that is not an issue, that is, you can connect to SQL Server inside the container, then you can rest assured the problem is with SSMS or with port 1433 on your computer. Make sure the server name is correct.
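If the in-container login works, you can also check the host side of the published port; a minimal sketch, assuming sqlcmd (mssql-tools) is installed on the host:
# Show which host port is published for the sql-server container
docker port sql-server
# Try the same SA credentials from the host through the published port
sqlcmd -S localhost,1433 -U sa -P 'MyPassword01*' -Q "SELECT @@VERSION"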

How can I allow connections by specifying docker-compose host names in postgres's pg_hba.conf file?

I'm trying to allow a connection from one Docker container to a postgres container by specifying the host name of the client container in the server's pg_hba.conf file. Postgres's documentation indicates that a host name can be specified, rather than an IP address. Since I'm using Docker Compose to start the two containers, they should be accessible to each other by container name using Docker Compose's DNS. I don't want to open up all IP addresses for security reasons, and when I eventually add access for additional containers, it will be much easier to just specify the container name in the pg_hba.conf file rather than assign static IP addresses to each of them. However, when I attempt to do this, it fails with a message such as this:
psql: FATAL: no pg_hba.conf entry for host "192.168.208.3", user "postgres", database "postgres", SSL off
Here's a minimum reproducible example of what I'm trying to do:
I use the following Docker Compose file:
version: '3'
services:
  postgresdb:
    image: postgres:9.4
    container_name: postgres-server
    ports:
      - "5432:5432"
    volumes:
      - "postgres-data:/var/lib/postgresql/data"
  postgres-client:
    image: postgres:9.4
    container_name: postgres-client
    depends_on:
      - postgresdb
volumes:
  postgres-data:
After running docker-compose up, I exec into the server container and modify the pg_hba.conf file in /var/lib/postgresql/data to look like this:
host all postgres postgres-client trust
I then restart the postgres server (docker-compose down then docker-compose up) and it loads the modified pg_hba.conf from the mounted volume.
I exec into the client container and attempt to connect to the postgres server:
docker exec -it postgres-client /bin/bash
psql -U postgres -h postgres-server postgres
This is where I get an error such as the following:
psql: FATAL: no pg_hba.conf entry for host "192.168.208.3", user "postgres", database "postgres", SSL off
I can't seem to find anything online that shows how to get this working. I've found examples where they just open up all or a range of IP addresses, but none where they get the use of a host name working. Here are some related questions and information:
https://www.postgresql.org/docs/9.4/auth-pg-hba-conf.htm
Allow docker container to connect to a local/host postgres database
https://dba.stackexchange.com/questions/212020/using-host-names-in-pg-hba-conf
Any ideas on how to get this working the way I would expect it to work using Docker Compose?
You need to add the fully qualified host name of the client container to pg_hba.conf.
host all postgres postgres-client.<network_name> trust
e.g.:
host all postgres postgres-client.postgreshostresolution_default trust
If no network has been defined, network_name is <project_name>_default.
By default, project_name is the name of the folder in which the docker-compose.yml resides.
To get the network names you may also call
docker inspect postgres-client | grep Networks -A1
or
docker network ls
to get a list of all docker networks currently defined on your docker host
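After adding the fully qualified client name to pg_hba.conf and restarting the server container (docker-compose down and docker-compose up, as in the question), the original test from the client container should go through; a short recap using the container names above:
# Re-run the connection test from the client container
docker exec -it postgres-client psql -U postgres -h postgres-server postgres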

Docker-Compose SQL Server database persist data after host restart

My docker-compose.yml:
version: "3"
services:
db:
image: microsoft/mssql-server-linux:2017-CU8
ports:
- 1433:1433
deploy:
mode: replicated
replicas: 1
environment:
- ACCEPT_EULA=Y
- MSSQL_SA_PASSWORD=SuperStrongSqlAdminPassword)(*£)($£)
volumes:
- /home/mssql/:/var/opt/mssql/
- /var/opt/mssql/data
As you can see, I have a volume mapped to a directory on the host machine: /home/mssql/:/var/opt/mssql/
If I do docker stack deploy -c docker-compose.yml [stack name], the server starts and I can see data is written to the host's directory: /home/mssql/*.
I then connect to the server and create a database, tables and add some data.
If I then kill the stack using docker stack rm [stack name], or restart the host for maintenance reasons, etc.,
when SQL Server starts up again, although /home/mssql/* still contains the files created by the server initially, if I connect to the server the database/tables/data are gone.
Do I have to somehow re-attach the database when the server restarts, or is there something else I'm missing?
Thanks
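One way to see where the data files actually end up is to inspect the mounts of the running container; a minimal sketch (the name filter is an assumption based on the db service above, adjust it to whatever docker ps shows):
# List the bind mounts and volumes attached to the running db container
docker inspect -f '{{ json .Mounts }}' $(docker ps -q -f name=db)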

Connect docker-compose to external database

I have a set up of 4 containers that need to talk to each other and two of those need to connect to an external database.
I started working with Compose to link everything together.
The containers are able to talk to each other without many issues; however, they can't connect to the external database.
The external DB is up and running and I can easily connect to it via shell.
The docker-compose file looks like this:
version: "3"
services:
bridge:
# version => 2.1.4
build: ./lora-gateway-bridge
ports:
- "1680/udp:1700/udp"
links:
- emqtt
- redis
environment:
- MQTT_SERVER=tcp://emqtt:1883
networks:
- external
restart: unless-stopped
loraserver:
# version => 0.16.1
build: ./loraserver
links:
- redis
- emqtt
- lora-app-server
environment:
- NET_ID=010203
- REDIS_URL=redis://redis:6379
- DB_AUTOMIGRATE=true
- POSTGRES_DSN=${SQL_STRING} ###<- connection string
- BAND=EU_863_870
ports:
- "8000:8000"
restart: unless-stopped
lora-app-server:
build: ./lora-app-server
# version => 0.8.0
links:
- emqtt
- redis
volumes:
- "/opt/lora-app-server/certs:/opt/lora-app-server/certs"
environment:
- POSTGRES_DSN=${SQL_STRING} ### <- connection string
- REDIS_URL=redis://redis:6379
- NS_SERVER=loraserver:8000
- MQTT_SERVER=tcp://emqtt:1883
ports:
- "8001:8001"
- "443:8080"
restart: unless-stopped
redis:
image: redis:3.0.7-alpine
restart: unless-stopped
emqtt:
image: erlio/docker-vernemq:latest
volumes:
- ./emqttd/usernames/vmq.passwd:/etc/vernemq/vmq.passwd
ports:
- "1883:1883"
- "18083:18083"
restart: unless-stopped
It seems like they are unable to find the host where the database is running.
All the examples that I see talk about a database inside the docker-compose file, but I haven't quite grasped how to connect a container to an external service.
From your code I see that you need to connect to an external PostgreSQL server.
Networks
Being able to discover some resource in the network is related to which network is being used.
There is a set of network types that can be used, which simplify the setup, and there is also the option to create your own networks and add containers to them.
You have a number of types to choose from, listed here from most isolated to least:
closed containers = only the loopback interface inside the container, with no access to the container virtual network or to the host network
bridged containers = your containers are connected through a default bridge network, which is in turn connected to the host network
joined containers = your containers share the same network and no isolation is present at that level; they also have a connection to the host network
open containers = full access to the host network
The default type is bridge so you will have all containers using one default bridge network.
In docker-compose.yml you can choose a network type with network_mode.
Because you haven't defined any network and haven't changed the network_mode, you get to use the default - bridge.
This means that your containers will join the default bridge network and every container will have access to each other and to the host network.
Therefore your problem does not reside with the container network; instead, you should check whether PostgreSQL is accessible for remote connections. By default you can access PostgreSQL from localhost, but any other remote connection has to be explicitly allowed.
You can configure your PostgreSQL instance by following this answer or this blog post.
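For reference, the usual server-side changes look roughly like this; a sketch only, assuming default file locations and that you narrow the address range to your own network:
# postgresql.conf - listen on all interfaces instead of only localhost
listen_addresses = '*'
# pg_hba.conf - allow remote connections from your network (example subnet, adjust to yours)
host    all    all    192.168.0.0/16    md5
A reload or restart of the PostgreSQL service is needed afterwards.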
Inspect networks
Following are some commands that might be useful in your scenario:
list your available networks with: docker network ls
inspect which container uses bridge network: docker network inspect --format "{{ json .Containers }}" bridge
inspect container networks: docker inspect --format "{{ json .NetworkSettings.Networks }}" myContainer
Testing connection
In order to test the connection you can create a container that runs psql and tries to connect to your remote PostgreSQL server, thus isolating your test to a minimal environment.
The Dockerfile can be:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y postgresql-client
ENV PGPASSWORD myPassword
CMD psql --host=10.100.100.123 --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';"
Then you can build the image with: docker build -t test-connection .
And finally you can run the container with: docker run --rm test-connection:latest
If your connection succeeds then SUCCESS !!! will be printed.
Note: connecting with localhost, as in CMD psql --host=localhost --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';", will not work, because localhost from within the container is the container itself and is different from the main host. Therefore the address needs to be one that is discoverable.
Note: if you were to start your container as a closed container using docker run --rm --net none test-connection:latest, there would be no network interface other than loopback and the connection would fail. This just shows how choosing a network may influence the outcome.
