Quarkus Docker container fails to run / connect to the DB

In my project, which uses Quarkus, Angular and a PostgreSQL DB, when I run the backend and the frontend in dev mode, I can connect to the DB (a PostgreSQL image running in a Docker container), create new rows in the tables, and everything works fine.
The Quarkus Dockerfile is auto-generated.
Here is the application.properties file (inside the Quarkus project):
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=username
quarkus.datasource.password=pwd
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/db-mcs-thirdparty
quarkus.flyway.migrate-at-start=true
quarkus.flyway.baseline-on-migrate=true
quarkus.flyway.out-of-order=false
quarkus.flyway.baseline-version=1
and this is the "docker-compose.yml" file which I placed inside the backend folder (Quarkus):
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:latest
    restart: always
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: pwd
      POSTGRES_DB: db-mcs-thirdparty
    ports:
      - "5432:5432"
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: usernamepgadmin
      PGADMIN_DEFAULT_PASSWORD: pwdpgadmin
    ports:
      - "5050:80"
But when I build a Quarkus Docker image and try to run it in a container, it fails, even though the Angular container runs fine, and so does the DB.
Here are the error logs I get after running the container:
Starting the Java application using /opt/jboss/container/java/run/run-java.sh ...
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2022-05-06 12:58:31,967 WARN [io.agr.pool] (agroal-11) Datasource '<default>': The connection attempt failed.
2022-05-06 12:58:32,015 ERROR [io.qua.run.Application] (main) Failed to start application (with profile prod): java.net.UnknownHostException: db
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:229)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.base/java.net.Socket.connect(Socket.java:609)
at org.postgresql.core.PGStream.createSocket(PGStream.java:241)
at org.postgresql.core.PGStream.<init>(PGStream.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:109)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
at org.postgresql.Driver.makeConnection(Driver.java:400)
at org.postgresql.Driver.connect(Driver.java:259)
at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:210)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:513)
at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:494)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1126)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
So I replaced "localhost" in the:
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/db-mcs-thirdparty
with the IP address, with the DB's name, even I tried to enter the user name and the psw in that same line, etc ...., but didn't work.
I even stopped all the running containers (DB, frontend) and tried to run only the Quarkus container, the same case happens.
For the ports that I used, you can check the attached image.
[image: the used ports]
How should I resolve this issue? Thank you in advance.

The localhost URL stated in the application.properties file refers to the container's own localhost. That means your Quarkus application container is looking for the database on its own ports.
As far as I know, every Docker container and every started docker-compose.yaml gets its own network, and only within this network can the started services reach each other through their service names.
Therefore your Quarkus container has to join the network of the services started by docker-compose. One solution could be to define all services (database, Angular and backend) in one docker-compose.yaml and then refer to the service names in your URL.
Another solution could be to use host.docker.internal instead of localhost.
Further information regarding Docker networks and host.docker.internal can be found here: https://docs.docker.com/desktop/windows/networking/
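For example, a minimal sketch of the first approach, assuming the Quarkus image is tagged mcs-thirdparty/backend (the service and image names here are assumptions, not from the question):

version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:latest
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: pwd
      POSTGRES_DB: db-mcs-thirdparty
    ports:
      - "5432:5432"
  backend:
    # assumed tag of the image built from the generated Quarkus Dockerfile
    image: mcs-thirdparty/backend:latest
    depends_on:
      - db
    ports:
      - "8080:8080"

with the datasource URL in application.properties pointing at the service name instead of localhost:

quarkus.datasource.jdbc.url=jdbc:postgresql://db:5432/db-mcs-thirdparty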

You can run your Quarkus container with network mode host (--network host), as in this example:
$ docker run --rm -d --network host --name my_nginx nginx
https://docs.docker.com/network/network-tutorial-host/
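Applied to this question, a sketch, assuming the Quarkus image was tagged my-quarkus-app (the tag is an assumption):

# the container shares the host's network stack, so the existing
# jdbc:postgresql://localhost:5432/... URL keeps working unchanged
docker run --rm --network host my-quarkus-app

Note that host networking like this has traditionally only been fully supported on Linux hosts.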
You could also add your Quarkus service to your docker-compose file, as in this example:
https://github.com/quarkusio/quarkus-quickstarts/blob/main/kafka-quickstart/docker-compose.yaml

Related

Docker connect to localhost on a another container

I have 2 containers, one running on localhost:3001 and the other on localhost:3003.
Container localhost:3001 hits an endpoint that's running in container localhost:3003,
but gets a (failed) net::ERR_EMPTY_RESPONSE.
I have tried multiple ways to get them to connect. Please don't tell me to set up a Docker Compose file; I should be able to spin up 2 containers and have them communicate through their ports.
I also tried adding a network.
docker network create my-network
Shows as: ea26d2eaf604 my-network bridge local
Then I spin up both containers with the flag --net my-network.
When container localhost:3001 hits an endpoint that's running in container localhost:3003, again I get a (failed) net::ERR_EMPTY_RESPONSE.
This is driving me crazy. What am I doing wrong here?
I have even used a shell to run a curl from localhost:3001 to localhost:3003 and I get
curl: (7) Failed to connect to localhost port 3003 after 0 ms: Connection refused
When you place both containers on the same network, they are able to reference each other by their service/container name (not localhost, as localhost is the local loopback of each container). Using a dedicated network is the ideal way to connect from container to container.
I suspect what is happening in your case is that you are exposing 3001 from container A on the host, and 3003 from container B on the host. These ports are then open on the host, and from the host you can use localhost. However, from the containers, to access the host you should use host.docker.internal, which is a reference to the host machine, instead of localhost.
Here is some further reading about host.docker.internal
https://docs.docker.com/desktop/networking/#use-cases-and-workarounds-for-all-platforms
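Putting that together without Compose, a sketch, assuming the apps listen on ports 3001 and 3003 inside their containers (the image names are made up):

docker network create my-network
docker run -d --name app-a --net my-network image-a
docker run -d --name app-b --net my-network image-b
# from inside app-a, address app-b by container name and its internal port,
# not by localhost and not by the host-published port:
docker exec app-a curl http://app-b:3003/

The key point is that on the shared network each container resolves the other's name, while localhost always points back at the calling container itself.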
Update: docker-compose example
version: '3.1'
services:
  express1:
    image: node:latest
    networks:
      - my-private-network
    working_dir: /express
    command: node express-entrypoint.js
    volumes:
      - ./path-to-project1:/express
  express2:
    image: node:latest
    networks:
      - my-private-network
    working_dir: /express
    command: node express-entrypoint.js
    volumes:
      - ./path-to-project2:/express
networks:
  my-private-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.27.0/24
Then you can reference each container by its service name, e.g. from express1 you can ping express2.
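A quick way to verify, assuming both services are up and the express apps listen on their default port 3000:

# resolve the service name from inside express1 (getent ships with the Debian-based node image)
docker-compose exec express1 getent hosts express2
# then hit an endpoint on the internal port
docker-compose exec express1 wget -qO- http://express2:3000/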

Remotely connect to Database in Container from Host Machine?

I have PostgreSQL running inside a container.
I can connect to via command line by running this
docker-compose exec api-db psql -U {username} {databasename}
But how can I connect to it via a GUI app?
What should I use as a host?
Is it possible?
I suppose that by "GUI app" you mean a postgres client like PgAdmin or your favorite IDE ?
You just have to configure your client datasource as we can see in your logs I mean :
host : localhost
port : 5432 (or the one mapped in your docker-compose.yml file)
database : It's not "monitor" as it doesn't find it !
user and password : in your docker-compose.yml too. Visibly it's not "root" as it doesn't find it.
Your docker-compose.yml should be at your project's root certainly.
If it's missing in your docker-compose you can add it with :
version: "3.7"
services:
main:
image: postgres:12.7-alpine
ports:
- "5432:5432"
environment:
- POSTGRES_USER=root
- POSTGRES_PASSWORD=password
- POSTGRES_DB=monitor
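With that in place, the parameters you would enter in PgAdmin can first be sanity-checked from the host with psql, for example:

psql -h localhost -p 5432 -U root monitor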
Best regards.

How can I allow connections by specifying docker-compose host names in postgres's pg_hba.conf file?

I'm trying to allow a connection from one Docker container to a postgres container by specifying the host name of the client container in the server's pg_hba.conf file. Postgres's documentation indicates that a host name can be specified, rather than an IP address. Since I'm using Docker Compose to start the two containers, they should be accessible to each other by container name using Docker Compose's DNS. I don't want to open up all IP addresses for security reasons, and when I eventually add access for additional containers, it will be much easier to just specify the container name in the pg_hba.conf file rather than assign static IP addresses to each of them. However, when I attempt to do this, it fails with a message such as this:
psql: FATAL: no pg_hba.conf entry for host "192.168.208.3", user "postgres", database "postgres", SSL off
Here's a minimum reproducible example of what I'm trying to do:
I use the following Docker Compose file:
version: '3'
services:
  postgresdb:
    image: postgres:9.4
    container_name: postgres-server
    ports:
      - "5432:5432"
    volumes:
      - "postgres-data:/var/lib/postgresql/data"
  postgres-client:
    image: postgres:9.4
    container_name: postgres-client
    depends_on:
      - postgresdb
volumes:
  postgres-data:
After running docker-compose up, I exec into the server container and modify the pg_hba.conf file in /var/lib/postgresql/data to look like this:
host all postgres postgres-client trust
I then restart the postgres server (docker-compose down then docker-compose up) and it loads the modified pg_hba.conf from the mounted volume.
I exec into the client container and attempt to connect to the postgres server:
docker exec -it postgres-client /bin/bash
psql -U postgres -h postgres-server postgres
This is where I get an error such as the following:
psql: FATAL: no pg_hba.conf entry for host "192.168.208.3", user "postgres", database "postgres", SSL off
I can't seem to find anything online that shows how to get this working. I've found examples where they just open up all or a range of IP addresses, but none where they get the use of a host name working. Here are some related questions and information:
https://www.postgresql.org/docs/9.4/auth-pg-hba-conf.html
Allow docker container to connect to a local/host postgres database
https://dba.stackexchange.com/questions/212020/using-host-names-in-pg-hba-conf
Any ideas on how to get this working the way I would expect it to work using Docker Compose?
You need to add the fully qualified host name of the client container in pg_hba.conf.
host all postgres postgres-client.<network_name> trust
e.g:
host all postgres postgres-client.postgreshostresolution_default trust
If no network has been defined, network_name is <project_name>_default.
By default, project_name is the folder in which the docker-compose.yml resides.
To get the network names you may also call
docker inspect postgres-client | grep Networks -A1
or
docker network ls
to get a list of all docker networks currently defined on your docker host
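If you want the suffix to be predictable rather than derived from the folder name, one option (a sketch; requires Compose file format 3.5+) is to name the default network explicitly:

networks:
  default:
    name: pgnet

after which the pg_hba.conf entry would become:

host all postgres postgres-client.pgnet trust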

Container internal communication [duplicate]

I have the following docker-compose file:
version: "3"
services:
scraper-api:
build: ./ATPScraper
volumes:
- ./ATPScraper:/usr/src/app
ports:
- "5000:80"
test-app:
build: ./test-app
volumes:
- "./test-app:/app"
- "/app/node_modules"
ports:
- "3001:3000"
environment:
- NODE_ENV=development
depends_on:
- scraper-api
which builds the following Dockerfiles:
scraper-api (a python flask application):
FROM python:3.7.3-alpine
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./app.py"]
test-app (a test react application for the api):
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:/app/src/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
RUN npm install axios -g
# start app
CMD ["npm", "start"]
Admittedly, I'm a newbie when it comes to Docker networking, but I am trying to get the React app to communicate with the scraper-api. For example, the scraper-api has the following endpoint: /api/top_10. I have tried various permutations of the following URL:
http://scraper-api:80/api/test_api. None of them have been working for me.
I've been scouring the internet and I can't really find a solution.
The React application runs in the end user's browser, which has no idea this "Docker" thing exists at all and doesn't know about any of the Docker Compose networking setup. For browser apps that happen to be hosted out of Docker, they need to be configured to use the host's DNS name or IP address, and the published port of the back-end service.
A common setup (Docker or otherwise) is to put both the browser apps and the back-end application behind a reverse proxy. In that case you can use relative URLs without host names like /api/..., and they will be interpreted as "the same host and port", which bypasses this problem entirely.
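As a sketch of that reverse-proxy setup (the proxy service and its nginx.conf are assumptions, not part of the question), added to the same docker-compose.yml:

  proxy:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      # an nginx.conf that routes /api/ to http://scraper-api:80/
      # and everything else to http://test-app:3000/
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - scraper-api
      - test-app

The browser then only ever talks to http://localhost:8080, and relative URLs like /api/top_10 resolve against the same host and port.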
As a side note: when no network is specified inside docker-compose.yml, a default network will be created for you, named [directory containing docker-compose.yml]_default. For example, if docker-compose.yml is in the app folder, the network will be named app_default.
Now, inside this network, containers are reachable by their service names. So the scraper-api host name should resolve to the right container.
It could be that you are using the wrong endpoint URL. In the question, you mentioned /api/top_10 as an endpoint, but the URL you tested was http://scraper-api:80/api/test_api, which is inconsistent.
Also, it could be that you confused the order of the ports in docker-compose.yml for the scraper-api service:
ports:
  - "5000:80"
5000 is exposed to the host where Docker is running; 80 is the internal app port. Normally, Flask apps listen on 5000, so you might have meant to say:
ports:
  - "80:5000"
In which case, between containers you have to use :5000 as the destination port in URLs, e.g. http://scraper-api:5000 (+ endpoint suffix, of course).
To check connectivity, you might want to open a shell in the client container (the alpine image ships sh, not bash) and see if things are connecting:
docker-compose exec test-app sh
wget http://scraper-api
wget http://scraper-api:5000
etc.
If you get a response, then you have connectivity, just need to figure out correct endpoint URL.

Connect docker-compose to external database

I have a set up of 4 containers that need to talk to each other and two of those need to connect to an external database.
I started working with Compose to link everything together.
The containers are able to talk with each other without many issues; however, they can't connect to the external database.
The external DB is up and running and I can easily connect to it via shell.
The docker-compose file looks like this:
version: "3"
services:
bridge:
# version => 2.1.4
build: ./lora-gateway-bridge
ports:
- "1680/udp:1700/udp"
links:
- emqtt
- redis
environment:
- MQTT_SERVER=tcp://emqtt:1883
networks:
- external
restart: unless-stopped
loraserver:
# version => 0.16.1
build: ./loraserver
links:
- redis
- emqtt
- lora-app-server
environment:
- NET_ID=010203
- REDIS_URL=redis://redis:6379
- DB_AUTOMIGRATE=true
- POSTGRES_DSN=${SQL_STRING} ###<- connection string
- BAND=EU_863_870
ports:
- "8000:8000"
restart: unless-stopped
lora-app-server:
build: ./lora-app-server
# version => 0.8.0
links:
- emqtt
- redis
volumes:
- "/opt/lora-app-server/certs:/opt/lora-app-server/certs"
environment:
- POSTGRES_DSN=${SQL_STRING} ### <- connection string
- REDIS_URL=redis://redis:6379
- NS_SERVER=loraserver:8000
- MQTT_SERVER=tcp://emqtt:1883
ports:
- "8001:8001"
- "443:8080"
restart: unless-stopped
redis:
image: redis:3.0.7-alpine
restart: unless-stopped
emqtt:
image: erlio/docker-vernemq:latest
volumes:
- ./emqttd/usernames/vmq.passwd:/etc/vernemq/vmq.passwd
ports:
- "1883:1883"
- "18083:18083"
restart: unless-stopped
It seems like they are unable to find the host where the database is running.
All the examples I see talk about a database inside the docker-compose setup, but I haven't quite grasped how to connect a container to an external service.
From your code I see that you need to connect to an external PostgreSQL server.
Networks
Being able to discover a resource on the network depends on which network is being used.
There is a set of network types that simplify the setup, and there is also the option to create your own networks and add containers to them.
You have a number of types to choose from; the top one has the most isolation possible:
closed containers = you have only the loopback interface inside the container, but no interaction with the container virtual network nor with the host network
bridged containers = your containers are connected through a default bridge network, which is finally connected to the host network
joined containers = your containers share the same network, with no isolation at that level; they also have a connection to the host network
open containers = full access to the host network
The default type is bridge, so all your containers use one default bridge network.
In docker-compose.yml you can choose a network type via network_mode.
Because you haven't defined any network and haven't changed the network_mode, you get the default: bridge.
This means that your containers join the default bridge network, where every container has access to each other and to the host network.
Therefore your problem does not reside in the container network, and you should check whether PostgreSQL is accessible for remote connections. For example, you can access PostgreSQL from localhost by default, but you need to configure access rules for any other remote connection.
You can configure your PostgreSQL instance by following this answer or this blog post.
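In short, two server-side settings usually have to admit the remote client; the exact values below are placeholders for your deployment:

# postgresql.conf: listen on all interfaces, not only localhost
listen_addresses = '*'

# pg_hba.conf: allow the address range your containers connect from
host    all    all    172.17.0.0/16    md5

Remember to restart PostgreSQL after changing these files.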
Inspect networks
Following are some commands that might be useful in your scenario:
list your available networks with: docker network ls
inspect which container uses bridge network: docker network inspect --format "{{ json .Containers }}" bridge
inspect container networks: docker inspect --format "{{ json .NetworkSettings.Networks }}" myContainer
Testing connection
In order to test the connection you can create a container that runs psql and tries to connect to your remote PostgreSQL server, thus isolating your case to a minimal test environment.
Dockerfile can be:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y postgresql-client
ENV PGPASSWORD myPassword
CMD psql --host=10.100.100.123 --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';"
Then you can build the image with: docker build -t test-connection .
And finally you can run the container with: docker run --rm test-connection:latest
If your connection succeeds then SUCCESS !!! will be printed.
Note: connecting with localhost, as in CMD psql --host=localhost --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';", will not work, as localhost from within the container is the container itself, which is different from the main host. Therefore the address needs to be one that is discoverable.
Note: if you were to start your container as a closed container using docker run --rm --net none test-connection:latest, there would be no network interface other than loopback and the connection would fail. This just shows how choosing a network may influence the outcome.
