I have a question. I have created simple React and Spring Boot applications and written Dockerfiles for both. Spring Boot exposes an API and React makes requests to it. Both of them run on their own ports (React on 3000 and Spring Boot on 8080). When I make a request, my fetch looks something like this:
fetch("http://localhost:8080/projects")
How am I supposed to change this so that it works with docker-compose? Because when I expose the ports in the docker-compose file, this fetch makes requests inside the container, not outside of it.
docker-compose:
version: '3'
services:
  frontend:
    image: "IMAGE/FRONTEND:latest"
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:
    image: "IMAGE/BACKEND:latest"
    ports:
      - "8080:8080"
Here's an example docker compose that will help illustrate how you can do what you are trying to do:
version: '3'
services:
  frontend:
    image: "alpine:latest" # Using alpine as it has wget
    command: sh -c "while true; do wget -q -S -O /dev/null http://backend:80 && sleep 4; done" # A sample script that fetches a page from the backend service every few seconds. Note the use of the "backend" hostname. It writes the body to /dev/null and shows only the headers for brevity; this is just to prove that the frontend can reach the backend.
    depends_on:
      - backend
  backend:
    image: "nginxdemos/hello" # just a dummy image that serves an HTML page over port 80
    # Notice you don't need to publish any ports here. Look at Docker Compose networking for a better understanding of how these two containers end up on the same network.
Basically, like your backend, I have used an nginx demo container that serves pages over port 80.
For the frontend I have used a shell script that just queries the backend and displays only the headers.
So in your case the problem is that your frontend tries to reach the backend at localhost, whereas localhost just points back to the frontend container itself. You really want it to point to the hostname backend, which in turn will route you to the containers of the backend service.
To understand how compose networking works please do take a look at https://docs.docker.com/compose/networking/.
Here is the relevant snippet that comes into play in the above example:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
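To map that back to the original question: if the request were issued from inside the frontend container (for example by a server-side process), a minimal sketch of the change would be to swap localhost for the Compose service name. Note that a fetch executed in the user's browser cannot resolve that name, so a browser-side call still needs the host-published port or a proxy, a point that comes up again further down.
// Container-to-container call: the Compose DNS name "backend" resolves on the
// shared default network, so the container port (8080) is used directly.
fetch("http://backend:8080/projects")
  .then((res) => res.json())
  .then((projects) => console.log(projects));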
Related
I have 2 containers, one running on localhost:3001 and the other on localhost:3003.
The container on localhost:3001 hits an endpoint that's running in the container on localhost:3003,
but gets a (failed) net::ERR_EMPTY_RESPONSE.
I have tried multiple ways to get them to connect. Please don't tell me to set up a docker-compose file. I should be able to spin up 2 containers and have them communicate over their ports.
I also tried adding a network.
docker network create my-network
Shows as: ea26d2eaf604 my-network bridge local
Then I spin up both containers with the flag --net my-network.
When the container on localhost:3001 hits an endpoint that's running in the container on localhost:3003, I again get a (failed) net::ERR_EMPTY_RESPONSE.
This is driving me crazy. What am I doing wrong here ?
I have even used a shell to run curl from the localhost:3001 container to localhost:3003, and I get:
curl: (7) Failed to connect to localhost port 3003 after 0 ms: Connection refused
When you place both containers on the same network, they are able to reference each other by their service/container name (not localhost, as localhost is the local loopback interface inside each container). Using a dedicated network is the ideal way to connect from container to container.
I suspect what is happening in your case is that you are publishing port 3001 from container A on the host, and port 3003 from container B on the host. These ports are then open on the host, and from the host you can use localhost. However, to reach the host from inside a container you should use host.docker.internal, which is a reference to the host machine, instead of localhost.
Here is some further reading about host.docker.internal
https://docs.docker.com/desktop/networking/#use-cases-and-workarounds-for-all-platforms
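For instance, assuming the service in the first container is Node-based (Node 18+, so that fetch is available globally) and the app in the second container listens on 3003, the call could go either way. The container name container-b and the /ping route are placeholders, not details from the question:
// Via the shared my-network: address the other container by its name and its container port.
fetch("http://container-b:3003/ping").then((res) => console.log(res.status));

// Via the port published on the host (uses Docker Desktop's host.docker.internal).
fetch("http://host.docker.internal:3003/ping").then((res) => console.log(res.status));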
Update: docker-compose example
version: '3.1'
services:
  express1:
    image: node:latest
    networks:
      - my-private-network
    working_dir: /express
    command: node express-entrypoint.js
    volumes:
      - ./path-to-project1:/express
  express2:
    image: node:latest
    networks:
      - my-private-network
    working_dir: /express
    command: node express-entrypoint.js
    volumes:
      - ./path-to-project2:/express
networks:
  my-private-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.27.0/24
Then you can reference each container by its service name, e.g. from express1 you can ping express2 (or call it over HTTP, as sketched below).
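As a rough illustration, here is a minimal sketch of what the two express-entrypoint.js files could contain; the port 3000, the /ping route, and the assumption that express is installed in the mounted project folders are placeholders rather than details from the original setup:
// express2/express-entrypoint.js: a tiny server listening inside its container.
const express = require("express");
const app = express();
app.get("/ping", (req, res) => res.send("pong from express2"));
app.listen(3000, () => console.log("express2 listening on 3000"));

// express1/express-entrypoint.js: reach express2 by its Compose service name
// over my-private-network, using only Node's built-in http module.
const http = require("http");
setInterval(() => {
  http.get("http://express2:3000/ping", (res) => {
    res.on("data", (chunk) => console.log(chunk.toString()));
  });
}, 4000);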
I'm trying to dockerize my app. It has an API architecture without using nginx. I'm using this Dockerfile for the Flask app:
FROM python:3.9.0
WORKDIR /ProyectoTitulo
ENV FLASK_APP=app.py
ENV FLASK_ENV=development
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
RUN python -m nltk.downloader all
CMD ["python", "app.py"]
This one is my React app's Dockerfile:
FROM node:16-alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Finally this is my docker-compose.yml file
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: python-docker
  client:
    build:
      context: .
      dockerfile: Dockerfile
    image: react-front
    ports:
      - "3000:3000"
I run build and compose up, but when I try to send an HTTP request to an endpoint it says ERR_CONNECTION. Do I need to add something to these files? Something to the compose file?
One thing, as @bluepuma77 mentioned, is that you need to publish your BE port. When that is done and you can connect to it locally, you are ready to check the second step.
As I already answered in an SO question similar to yours, I will quote my answer here since it will probably be useful to you as well.
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you about one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot a basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that hosts a web application is only there to serve your browser the (client) files, and then the JS and the logic are executed inside your browser, not inside the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, etc. That proxy can then reference your backend by its service name, since it lives in the same environment as the API (see the sketch after these two options).
Expose the HTTP port directly from the container; your FE can then call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this for real use, only for testing direct connectivity without any proxy.)
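To make the first option concrete from the frontend's point of view, here is a minimal sketch; the /api prefix and the /projects endpoint are placeholders and depend entirely on how the proxy routing is configured:
// The frontend only calls a relative path. The browser sends the request to the
// same host that served the page, and the proxy (Nginx, Traefik, ...) forwards
// /api/* to the backend service by its service name inside the Docker network.
fetch("/api/projects")
  .then((res) => res.json())
  .then((projects) => console.log(projects));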
Well, I think you need to expose the API port, too.
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    image: python-docker
    ports:
      - "5000:5000" # EXPOSE API
  client:
    build:
      context: .
      dockerfile: Dockerfile
    image: react-front
    ports:
      - "3000:3000"
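With the API port published like this, the React client can target the published port from the browser. A minimal sketch, assuming the Flask app listens on port 5000 and binds to 0.0.0.0 so the published port actually reaches it, and that the browser runs on the Docker host itself (the /endpoint path is a placeholder):
// The request originates in the browser, so it targets the port published on the
// Docker host rather than the Compose service name.
fetch("http://localhost:5000/endpoint")
  .then((res) => res.json())
  .then((data) => console.log(data));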
https://github.com/hodanov/react-django-postgres-sample-app
I want to deploy the DB, API, and FRONT containers from the above repository to AWS ECS so that they run there.
Therefore, in order to operate the containers separately, I split the docker-compose.yaml file up per container.
I pushed the containers to ECR and ran them with ECS, but they keep stopping no matter what I try.
What should I review?
I spent some time on this. Where do I start :)
First and foremost, the compose file has bind mounts that can't work when you deploy to ECS/Fargate (because the local files mounted do not exist on the remote end). So these are the 3 things that need to be done:
you need to remove the bind mounts for the rest_api and the web_front services.
you need to append ADD ./code/django_rest_api/ /code in DockerfilePython and append ADD ./code/django_web_front/ /code in DockerfileNode. This will basically simulate the bind mount and will include the content of the folder right in the image(s).
you need to add the image field to the three services. That will allow you to run docker compose build and docker compose push so that the images are available to be deployed in the cloud.
This is my resulting compose file (you need to change your images repos):
version: "3"
services:
  db:
    container_name: django_db
    image: mreferre/django_db:latest
    build:
      context: .
      dockerfile: DockerfilePostgres
    volumes:
      - django_data_volume:/var/lib/postgresql/data
  rest_api:
    container_name: django_rest_api
    image: mreferre/django_rest_api:latest
    build:
      context: .
      dockerfile: DockerfilePython
    command: ./run-migrate-and-server.sh
    # volumes:
    #   - ./code/django_rest_api:/code
    tty: true
    ports:
      - 8000:8000
    depends_on:
      - db
  web_front:
    container_name: django_web_front
    image: mreferre/django_web_front:latest
    build:
      context: .
      dockerfile: DockerfileNode
    command: ./npm-install-and-start.sh
    # volumes:
    #   - ./code/django_web_front:/code
    tty: true
    ports:
      - 3000:3000
    depends_on:
      - rest_api
volumes:
  django_data_volume:
This will deploy just fine with docker compose up in the ECS context and the API server will work. HOWEVER, the React front end will not work because the App.js code points to localhost:
const constants = {
    "url": "http://localhost:8000/api/profile/?format=json"
}
I don't know what the exact React equivalent is, but in Angular you need to point to something along the lines of window.location.href to tell the app to use the same endpoint your browser connected to (a rough React sketch follows below).
If you fix this problem then it should just work(tm).
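For what it is worth, here is a rough sketch of a React-side equivalent, building the URL from whatever location the browser loaded the app from instead of hard-coding localhost. The port 8000 and the path are taken from the App.js snippet above; treat this as an assumption, not the repository's actual fix:
// Derive the API origin from the page's own location so it works wherever the
// stack is deployed (ECS, local Compose, ...), as long as port 8000 is reachable there.
const constants = {
  url: `${window.location.protocol}//${window.location.hostname}:8000/api/profile/?format=json`,
};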
So I'm developing a basic Express backend for a React app.
The request is being made like this:
axios.get(`${serverLocation}/api/graph/32`).then(res => {
    this.setState({datag: res.data});
    for (var key in this.state) {
        data.push(this.state[key]);
    }
});
The serverLocation looks like http://IP:PORT.
The API path is correct as far as I can see, and on my development machine it works: React makes successful requests to the server at the specified location. The thing is, when I put this into 2 separate Docker containers via docker-compose.yml, it won't work.
This is my docker-compose.yml:
version: '2.0'
services:
  server:
    restart: always
    container_name: varno_domov_server
    build: .
    ports:
      - "8088:5000"
    links:
      - react
    networks:
      - varnodomovnetwork
  react:
    restart: always
    container_name: varno_domov_client
    build: client/
    ports:
      - "8080:3000"
    networks:
      - varnodomovnetwork
networks:
  varnodomovnetwork:
    driver: bridge
I also have custom Dockerfiles, with the server one looking like this:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
And the client one looking like this:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
If you've made it this far, thank you for taking the time to read. I am open to any suggestions regarding Docker here; the React part was not written by me. If any additional information is required, tell me in the comments. Isolation is making me very available :)
So, the thing was that React was submitting the requests from the browser. I am inexperienced with React, so I was looking for logs in the terminal/bash when they were actually available to look at in the browser.
The problem was that my server was on a public IP and communicating via HTTP. This meant the browser blocked the content (Mixed Content: The page at was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint), making my graphs display no data. An easy fix is to just make the browser load the unsafe content anyway, although I am not about that life. So I did the following:
The key problem was that my server and client are 2 separate containers, and therefore on separate ports. What I've done is edit my nginx configuration so that any request to my domain that looks like "https://www.example.com/api" is forwarded to the port of the server container on the server machine.
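In practice that means the axios call shown earlier can simply target the same origin the page was served from and let nginx do the forwarding. A small sketch, assuming the proxy maps /api to the server container as described (the exact path is an assumption):
// serverLocation is no longer needed: a relative URL goes to the page's own origin,
// and nginx proxies /api/* on to the server container's port.
axios.get("/api/graph/32").then((res) => {
  console.log(res.data);
});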
Hope this is of any help to someone :)
I have a setup of 4 containers that need to talk to each other, and two of those need to connect to an external database.
I started working with Compose and linked everything together.
The containers are able to talk to each other without many issues; however, they can't connect to the external database.
The external DB is up and running and I can easily connect to it via shell.
The docker-compose file looks like this:
version: "3"
services:
  bridge:
    # version => 2.1.4
    build: ./lora-gateway-bridge
    ports:
      - "1680:1700/udp"
    links:
      - emqtt
      - redis
    environment:
      - MQTT_SERVER=tcp://emqtt:1883
    networks:
      - external
    restart: unless-stopped
  loraserver:
    # version => 0.16.1
    build: ./loraserver
    links:
      - redis
      - emqtt
      - lora-app-server
    environment:
      - NET_ID=010203
      - REDIS_URL=redis://redis:6379
      - DB_AUTOMIGRATE=true
      - POSTGRES_DSN=${SQL_STRING} ### <- connection string
      - BAND=EU_863_870
    ports:
      - "8000:8000"
    restart: unless-stopped
  lora-app-server:
    build: ./lora-app-server
    # version => 0.8.0
    links:
      - emqtt
      - redis
    volumes:
      - "/opt/lora-app-server/certs:/opt/lora-app-server/certs"
    environment:
      - POSTGRES_DSN=${SQL_STRING} ### <- connection string
      - REDIS_URL=redis://redis:6379
      - NS_SERVER=loraserver:8000
      - MQTT_SERVER=tcp://emqtt:1883
    ports:
      - "8001:8001"
      - "443:8080"
    restart: unless-stopped
  redis:
    image: redis:3.0.7-alpine
    restart: unless-stopped
  emqtt:
    image: erlio/docker-vernemq:latest
    volumes:
      - ./emqttd/usernames/vmq.passwd:/etc/vernemq/vmq.passwd
    ports:
      - "1883:1883"
      - "18083:18083"
    restart: unless-stopped
It seems like they are unable to find the host where the database is running.
All the examples that I see talk about a database inside the docker-compose setup, but I haven't quite grasped how to connect a container to an external service.
From your code I see that you need to connect to an external PostgreSQL server.
Networks
Being able to discover some resource in the network is related to which network is being used.
There is a set of network types that can be used, which simplify the setup, and there is also the option to create your own networks and add containers to them.
You have a number of types to choose from; the one at the top has the most isolation possible:
closed containers = you have only the loopback interface inside the container, with no interaction with the container virtual network nor with the host network
bridged containers = your containers are connected through a default bridge network which is finally connected to the host network
joined containers = your containers share the same network stack, so no isolation is present at that level; they also have a connection to the host network
open containers = full access to the host network
The default type is bridge so you will have all containers using one default bridge network.
In docker-compose.yml you can choose a network type via network_mode.
Because you haven't defined any network and haven't changed the network_mode, you get to use the default - bridge.
This means that your containers will join the default bridge network and every container will have access to each other and to the host network.
Therefore your problem does not reside with the container network. You should instead check whether PostgreSQL accepts remote connections. For example, you can access PostgreSQL from localhost by default, but any other remote connection access rules need to be configured explicitly.
You can configure your PostgreSQL instance by following this answer or this blog post.
Inspect networks
Following are some commands that might be useful in your scenario:
list your available networks with: docker network ls
inspect which containers use the bridge network: docker network inspect --format "{{ json .Containers }}" bridge
inspect container networks: docker inspect --format "{{ json .NetworkSettings.Networks }}" myContainer
Testing connection
In order to test the connection you can create a container that runs psql and tries to connect to your remote PostgreSQL server, thus isolating your case to a minimal test environment.
The Dockerfile can be:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y postgresql-client
ENV PGPASSWORD myPassword
CMD psql --host=10.100.100.123 --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';"
Then you can build the image with: docker build -t test-connection .
And finally you can run the container with: docker run --rm test-connection:latest
If your connection succeeds then SUCCESS !!! will be printed.
Note: connecting with localhost as in CMD psql --host=localhost --port=5432 --username=postgres -c "SELECT 'SUCCESS !!!';" will not work, because localhost from within the container is the container itself, which is different from the main host. Therefore the address needs to be one that is discoverable.
Note: if you were to start your container as a closed container using docker run --rm --net none test-connection:latest, there would be no network interface other than loopback and the connection would fail. This just shows how the choice of network can influence the outcome.