Docker connect to localhost on another container - reactjs

I have 2 containers: one running on localhost:3001 and the other on localhost:3003.
The container on localhost:3001 hits an endpoint that's running in the container on localhost:3003,
but it gets a (failed) net::ERR_EMPTY_RESPONSE.
I have tried multiple ways to get them to connect. Please don't tell me to set up a docker-compose file; I should be able to spin up 2 containers and have them communicate over their ports.
I also tried adding a network.
docker network create my-network
Shows as: ea26d2eaf604 my-network bridge local
Then I spin up both containers with the flag --net my-network.
When the container on localhost:3001 hits an endpoint that's running in the container on localhost:3003, I again get a (failed) net::ERR_EMPTY_RESPONSE.
This is driving me crazy. What am I doing wrong here?
I have even opened a shell in the localhost:3001 container and run a curl to localhost:3003, and I get
curl: (7) Failed to connect to localhost port 3003 after 0 ms: Connection refused

When you place both containers on the same network, they can reference each other by their service/container name (not localhost, as localhost is the local loopback of each container). Using a dedicated network is the ideal way to connect from container to container.
I suspect what is happening in your case is that you are exposing 3001 from container A on the host, and 3003 from container B on the host. Those ports are then open on the host, and from the host you can use localhost. From inside a container, however, you should use host.docker.internal, which is a reference to the host machine, instead of localhost.
Here is some further reading about host.docker.internal
https://docs.docker.com/desktop/networking/#use-cases-and-workarounds-for-all-platforms
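For illustration, here is a minimal sketch of the two-container setup described above (the image names, container names and the /health path are placeholders, not taken from the question):

docker network create my-network

# publish each app's port on the host and attach both containers to the same network
docker run -d --name api --net my-network -p 3003:3003 my-api-image
docker run -d --name web --net my-network -p 3001:3001 my-web-image

# container-to-container: address the other container by its name, not localhost
docker exec -it web curl http://api:3003/health

# container-to-host (Docker Desktop): reach a port published on the host via host.docker.internal
docker exec -it web curl http://host.docker.internal:3003/health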
Update: docker-compose example
version: '3.1'
services:
  express1:
    image: node:latest
    networks:
      - my-private-network
    working_dir: /express
    command: node express-entrypoint.js
    volumes:
      - ./path-to-project1:/express
  express2:
    image: node:latest
    networks:
      - my-private-network
    working_dir: /express
    command: node express-entrypoint.js
    volumes:
      - ./path-to-project2:/express
networks:
  my-private-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.27.0/24
Then you can reference each container by its service name, e.g. from express1 you can ping express2; a quick check is sketched below.
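For example, assuming the stack above is running (ping may not be installed in the node image, so name resolution is checked with getent first):

# resolve the express2 service name from inside express1
docker-compose exec express1 getent hosts express2

# or, if ping is available in the image
docker-compose exec express1 ping -c 3 express2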

Related

Connect a dockerized app to a database from a remote machine via a VPN connection

I'm currently working on a small app that has to fetch data from a SQL Server DB and push it to the cloud. It works correctly, but I would like to dockerize it to make its deployment easier.
The database is on a private network and I have to use a VPN connection to access it during development. In production, the app will be on a VM in the database's network.
I'm still confused by Docker networks and the --publish option.
Here is my docker-compose file for now.
version: "3.4"
services:
myapp:
build:
context: .
network: host
restart: always
ports:
- "128.1.X.Y:1433:1433"
container_name: myapp
But when I connect to the VPN from my machine (remote) and run my image with this configuration, I get this error:
driver failed programming external connectivity on endpoint myapp (bbb3cc...):
Error starting userland proxy: listen tcp4 128.1.X.Y:1433: bind: cannot assign requested address
Simply "1433:1433" does not work either. The database cannot be accessed. Not really sure about "network: host" either...
Does anyone know what I could be doing wrong?
And another thing I'm wondering is, will the Docker config be the same when I will deploy my container on the VM?
Thank you!

MSDTC configuration issues with SQL Server in Docker (Linux) and Windows Host

I'm migrating a local SQL Server development database to run in a Linux docker container (on the same dev machine). When running my integration tests in Visual Studio 2019 on Windows, I receive MSDTC errors:
Exception thrown: 'System.Transactions.TransactionManagerCommunicationException' in System.Data.dll
An exception of type 'System.Transactions.TransactionManagerCommunicationException' occurred in System.Data.dll but was not handled in user code
Communication with the underlying transaction manager has failed.
Here's my latest iteration of SQL Server in my docker-compose:
services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2019-latest
    container_name: SqlServer
    restart: always
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=verySecretPassword
      - MSSQL_RPC_PORT=13500
      - MSSQL_DTC_TCP_PORT=51000
    ports:
      - "1401:1433"
      - "135:13500"
      - "51000:51000"
    volumes:
      - sqldata:/var/opt/mssql
I've tried all sorts of ways to adjust the RPC port to get this working. This is the main MS article. I've tried mapping port 135:135, but it gives the same error. The note at the bottom of the article appears to be related to my issue:
For SQL Server outside of a container or for non-root containers, a different ephemeral port, such as 13500, must be used in the container and traffic to port 135 must then be routed to that port. You would also need to configure port routing rules within the container from the container port 135 to the ephemeral port.
Also, if you decide to map the container's port 135 to a different port on the host, such as 13500, then you have to configure port routing on the host. This enables the docker container to participate in distributed transactions with the host and with other external servers.
SQL Server 2019 containers run as a non-root user. I've tried port routing using netsh on Windows, and the MS article also links to how to perform port forwarding in Ubuntu, which I'm unable to do even when logged in as root in the SQL Server container: iptables is not installed, and it won't let me apt-get install it. I also updated the DTC options in Windows to make them as open as possible, but it had no effect. Not sure what the secret sauce is. Hoping someone else has a similar setup that works.
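For reference, the host-side port routing that the quoted note describes would look roughly like this (a sketch only, using the 135 -> 13500 mapping as an example; run from an elevated prompt on the Windows host):

# route traffic arriving on host port 135 to port 13500
netsh interface portproxy add v4tov4 listenport=135 listenaddress=0.0.0.0 connectport=13500 connectaddress=127.0.0.1

# the Linux-side equivalent inside the container would be an iptables rule such as
# iptables -t nat -A PREROUTING -p tcp --dport 135 -j REDIRECT --to-port 13500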
Thanks for the tip on msdtc config, I got mine working with this compose:
version: '3.4'
services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2019-GA-ubuntu-16.04
    container_name: sqlserver
    user: root
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=[yourPwd]
      - MSSQL_RPC_PORT=135
      - MSSQL_DTC_TCP_PORT=51000
    ports:
      - "1433:1433"
      - "135:135"
      - "51000:51000"
    volumes:
      - D:\DockerVolumes\sqlserver:/var/opt/mssql/data

Cannot connect to postgres db from DataGrip

I can't connect to Postgres from DataGrip (JetBrains app). I'm trying to connect, but I get this message:
Connection to postgres#172.18.0.3 failed.
[08001] Connection to 172.18.0.3:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
All was well yesterday. My DB is in a Docker container; here is the yml file:
postgres_host:
  image: postgres:10-alpine
  restart: always
  ports: ["5433:5432"]
  volumes:
    - /tmp/lib:/var/lib/postgresql/data/pg_data
  environment:
    - PGDATA=/tmp/lib
And I can connect to the DB from the terminal. I used select inet_server_addr(), inet_server_port(); and now I know the host and port:
 inet_server_addr | inet_server_port
------------------+------------------
 172.18.0.3       | 5432
but this information did not help me; I get the same result.
The port mapping ports: ["5433:5432"] means that the postgres container is available on localhost:5433 from the host system.
Containers communicate with each other over their network, so you can reach your postgres container at postgres_host:5432 from another container created in the same docker-compose network.
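For example (a quick sketch; postgres as the user is just the image default, and some_other_service is a hypothetical service that has psql installed):

# from the host (or DataGrip running on the host): use the published port
psql -h localhost -p 5433 -U postgres

# from another container on the same compose network: use the service name and the internal port
docker-compose exec some_other_service psql -h postgres_host -p 5432 -U postgres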

Routing to Different SQL Server Instances Running through Docker on Default Port

I can use Traefik for web sites since they send a Host header when they connect.
But I want to have multiple instances of SQL Server running through Docker which will be externally available (outside the Docker host, potentially outside the local network).
So, is there anything which allows connecting to different SQL Server instances running on the same Docker host WITHOUT having to give them different ports or external IP addresses, such that someone could access
sql01.docker.local,1433 AND sql02.docker.local,1433 from SQL tools?
Start Additional Question
Since there have been no replies: perhaps there is a way to have different instances like sql.docker.local\instance1 and sql.docker.local\instance2, though I imagine that may also not be possible.
End Additional Question
This is an example of the docker-compose file I was trying to use (before I realised that queries to SQL Server don't send a host header - or am I wrong about that?):
version: '2.1'
services:
  traefik:
    container_name: traefik
    image: stefanscherer/traefik-windows
    command: --docker.endpoint=tcp://172.28.80.1:2375 --logLevel=DEBUG
    ports:
      - "8080:8080"
      - "80:80"
      - "1433:1433"
    volumes:
      - ./runtest:C:/etc/traefik
      - C:/Users/mvukomanovic.admin/.docker:C:/etc/ssl
    networks:
      - default
    restart: unless-stopped
    labels:
      - "traefik.enable=false"
  whoami:
    image: stefanscherer/whoami
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.entryPoints=http"
      - "traefik.port=8080"
      - "traefik.frontend.rule=Host:whoami.docker.local"
    networks:
      - default
    restart: unless-stopped
  sql01:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
    hostname: sql01
    domainname: sql01.local
    networks:
      - default
    restart: unless-stopped
    labels:
      - "traefik.frontend.rule=Host:sql01.docker.local,sql01,sql01.local"
      - "traefik.frontend.entryPoints=mssql"
      - "traefik.port=1433"
      - "traefik.frontend.port=1433"
  sql02:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
    hostname: sql02
    domainname: sql02.local
    networks:
      - default
    restart: unless-stopped
    labels:
      - "traefik.frontend.rule=Host:sql02.docker.local,sql02,sql02.local"
      - "traefik.frontend.entryPoints=mssql"
      - "traefik.port=1433"
      - "traefik.frontend.port=1433"
networks:
  default:
    external:
      name: nat
As mentioned earlier, Traefik is not the right solution since it's an HTTP-only load balancer.
I can think of 3 different ways to achieve what you want to do:
Use a TCP load balancer like HAProxy
Set up your servers in Docker Swarm mode (https://docs.docker.com/engine/swarm/), which allows binding the same port with transparent routing between them
Use a service discovery service like Consul with SRV records, which can abstract away port numbers (this might be overkill for your needs and complex to set up)
You can't use Traefik, because it's an HTTP reverse proxy.
Your SQL Server listens and communicates via TCP.
I don't understand what your final goal is.
Why are you using 2 different SQL Servers?
It depends on what you want, but you may have two solutions:
Can you use a simpler solution? Different databases, roles and permissions for separation.
You can look into the documentation of SQL Server Always On, but it doesn't seem easy to route queries to a specific server.
There is no "virtual" access to databases like for HTTP servers. So - no additional hostnames pointing to same IP can help you.
If you insist on port 1433 for all of your instances, then I see no way for you except to use two different external IPs.
If you were on a Linux box you may try some iptables magic, but it not elegant and would allow access to only one of your instances at any single moment. Windows may have iptables equivalent (I never heard of it) but still only-one-at-a-time you cannot escape.
My advice - use more than one port to expose your servers.
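For example, a minimal sketch of that approach using the image from the question (each instance keeps port 1433 inside its container but is published on a different host port; any SA password variable the image documentation requires would also be needed):

docker run -d --name sql01 -e ACCEPT_EULA=Y -p 1433:1433 microsoft/mssql-server-windows-developer
docker run -d --name sql02 -e ACCEPT_EULA=Y -p 1434:1433 microsoft/mssql-server-windows-developer

# clients then connect to sql.docker.local,1433 and sql.docker.local,1434 from SQL tools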

Connect docker-compose to external database

I have a setup of 4 containers that need to talk to each other, and two of those need to connect to an external database.
I started working with Compose and linked everything together.
The containers are able to talk to each other without many issues; however, they can't connect to the external database.
The external DB is up and running and I can easily connect to it via shell.
The docker-compose file looks like this:
version: "3"
services:
bridge:
# version => 2.1.4
build: ./lora-gateway-bridge
ports:
- "1680/udp:1700/udp"
links:
- emqtt
- redis
environment:
- MQTT_SERVER=tcp://emqtt:1883
networks:
- external
restart: unless-stopped
loraserver:
# version => 0.16.1
build: ./loraserver
links:
- redis
- emqtt
- lora-app-server
environment:
- NET_ID=010203
- REDIS_URL=redis://redis:6379
- DB_AUTOMIGRATE=true
- POSTGRES_DSN=${SQL_STRING} ###<- connection string
- BAND=EU_863_870
ports:
- "8000:8000"
restart: unless-stopped
lora-app-server:
build: ./lora-app-server
# version => 0.8.0
links:
- emqtt
- redis
volumes:
- "/opt/lora-app-server/certs:/opt/lora-app-server/certs"
environment:
- POSTGRES_DSN=${SQL_STRING} ### <- connection string
- REDIS_URL=redis://redis:6379
- NS_SERVER=loraserver:8000
- MQTT_SERVER=tcp://emqtt:1883
ports:
- "8001:8001"
- "443:8080"
restart: unless-stopped
redis:
image: redis:3.0.7-alpine
restart: unless-stopped
emqtt:
image: erlio/docker-vernemq:latest
volumes:
- ./emqttd/usernames/vmq.passwd:/etc/vernemq/vmq.passwd
ports:
- "1883:1883"
- "18083:18083"
restart: unless-stopped
It seems like they are unable to find the host where the database is running.
All the examples that I see talk about a database inside the docker-compose setup, but I haven't quite grasped how to connect a container to an external service.
From your code I see that you need to connect to an external PostgreSQL server.
Networks
Being able to discover a resource on the network depends on which network is being used.
There is a set of network types that can be used, which simplify the setup, and there is also the option to create your own networks and add containers to them.
You have a number of types to choose from; the first has the most isolation possible:
closed containers = only the loopback interface inside the container, with no interaction with the container virtual network or with the host network
bridged containers = your containers are connected through a default bridge network, which is in turn connected to the host network
joined containers = your containers share the same network, so no isolation is present at that level; they also have a connection to the host network
open containers = full access to the host network
The default type is bridge, so all your containers use one default bridge network.
In docker-compose.yml you can choose a network type via network_mode.
Because you haven't defined any network and haven't changed network_mode, you get the default: bridge.
This means that your containers join the default bridge network, where every container has access to the others and to the host network.
Therefore your problem does not reside with the container network. You should instead check whether PostgreSQL is accessible for remote connections: by default PostgreSQL can be accessed from localhost, but any other remote connection access rules need to be configured.
You can configure your PostgreSQL instance by following this answer or this blog post.
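As a rough sketch of the kind of change those guides describe (the file paths assume a default Debian/Ubuntu PostgreSQL 10 install, and the subnet is only an example):

# let the server listen on all interfaces instead of only localhost
echo "listen_addresses = '*'" | sudo tee -a /etc/postgresql/10/main/postgresql.conf

# allow password authentication from the Docker host's subnet (example subnet)
echo "host  all  all  10.100.100.0/24  md5" | sudo tee -a /etc/postgresql/10/main/pg_hba.conf

sudo systemctl restart postgresql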
Inspect networks
Following are some commands that might be useful in your scenario:
list your available networks with: docker network ls
inspect which container uses bridge network: docker network inspect --format "{{ json .Containers }}" bridge
inspect container networks: docker inspect --format "{{ json .NetworkSettings.Networks }}" myContainer
Testing connection
In order to test the connection you can create a container that runs psql and tries to connect to your remote PostgreSQL server, thus isolating to a minimum environment to test your case.
Dockerfile can be:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y postgresql-client
ENV PGPASSWORD myPassword
CMD psql --host=10.100.100.123 --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';"
Then you can build the image with: docker build -t test-connection .
And finally you can run the container with: docker run --rm test-connection:latest
If your connection succeeds then SUCCESS !!! will be printed.
Note: connecting with localhost, as in CMD psql --host=localhost --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';", will not work, because localhost from within the container is the container itself and is different from the main host. The address therefore needs to be one that is discoverable.
Note: if you were to start your container as a closed container using docker run --rm --net none test-connection:latest, there would be no network interface other than loopback and the connection would fail. This just shows how choosing a network can influence the outcome.
