Using Docker Compose I build a set of images. Among them are a Node.js/Express server and an Angular 2 frontend.
Here's the docker-compose file:
version: "3.3"
services:
angular:
image: micheleminno/angular-client:latest
build: ./angular-client
ports:
- "4200:4200"
express:
image: micheleminno/express-server:latest
build: ./express-server
depends_on:
- mysql
- elasticsearch
ports:
- "3000:3000"
networks:
- sql
- nosql
elasticsearch:
build: elasticsearch/
ports:
- "9200:9200"
- "9300:9300"
networks:
- nosql
environment:
- MAX_OPEN_FILES=1048576
cap_add:
- IPC_LOCK
ulimits:
memlock:
soft: -1
hard: -1
command: echo "Elasticsearch disabled"
kibana:
build: kibana/
volumes:
- ./kibana/config/:/usr/share/kibana/config
ports:
- "5601:5601"
networks:
- nosql
depends_on:
- elasticsearch
command: echo "Kibana disabled"
mysql:
image: mysql:5.5
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=real-affinities
- MYSQL_USER=production
- MYSQL_PASSWORD=production
ports:
- "3306:3306"
networks:
- sql
migrations:
image: micheleminno/db-migrations:latest
environment:
- NODE_ENV=production
depends_on:
- mysql
networks:
- sql
networks:
nosql:
driver: bridge
sql:
driver: bridge
When everything starts, loading the page http://localhost:4200/ triggers the following GET request to the Express server:
https://localhost:3000/target
but the request doesn't succeed:
Whereas if I make the same GET request myself, I get the results correctly:
Update
Here's my CORS section in server.js:
const express = require('express');
const cors = require('cors');
const app = express();
app.use(cors()); // enable CORS for all routes
You have to enable CORS in your Express server. The CORS spec requires an OPTIONS call to precede the POST or GET if the POST or GET has any non-simple content or headers in it.
As you can see in your error pic, the target is an XHR OPTIONS request, i.e. a preflight. See also the "Preflighted requests" section of the CORS documentation.
One possible and simple solution is to use the Node.js package cors.
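For example, a minimal sketch of an Express server with CORS enabled (the /target route is taken from the question; the response payload is a placeholder):
// server.js
const express = require('express');
const cors = require('cors');

const app = express();

// cors() adds the Access-Control-Allow-* headers and automatically
// answers preflight OPTIONS requests.
app.use(cors());

app.get('/target', (req, res) => {
  res.json({ ok: true }); // placeholder response
});

app.listen(3000, () => console.log('Express listening on port 3000'));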
I changed https://localhost:3000 to http://localhost:3000 as the URL for the HTTP requests from the Angular client, and now it works (presumably the Express server wasn't serving TLS on port 3000).
Related
I'm new to Docker and containers in general. I'm trying to containerize a simple MERN-based todo list application. Locally on my PC, I can successfully send HTTP POST requests from my React frontend to my Node.js/Express backend and create a new todo item. I use the proxy field in my client folder's package.json file, which points the dev server at localhost, roughly as shown below:
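{
  "proxy": "http://localhost:3001"
}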
React starts up on port 3000, my API server starts up on 3001, and with the proxy field defined, all is good locally.
My issue arises when I containerize the three services (i.e. React, API server, and MongoDB). When I try to make the same fetch POST request, I receive the following console error:
Here is my docker-compose file; perhaps it is useful in finding a solution:
version: '3.7'
services:
  client:
    depends_on:
      - server
    build:
      context: ./client
      dockerfile: Dockerfile
    image: jlcomp03/rajant-client
    container_name: container_client
    command: npm start
    volumes:
      - ./client/src/:/usr/app/src
      - ./client/public:/usr/app/public
      # - /usr/app/node_modules
    ports:
      - "3000:3000"
    networks:
      - frontend
    stdin_open: true
    tty: true
  server:
    depends_on:
      - mongo
    build:
      context: ./server
      dockerfile: Dockerfile
    image: jlcomp03/rajant-server
    container_name: container_server
    # command: /usr/src/app/node_modules/.bin/nodemon server.js
    volumes:
      - ./server/src:/usr/app/src
      # - /usr/src/app/node_modules
    ports:
      - "3001:3001"
    links:
      - mongo
    environment:
      - NODE_ENV=development
      - MONGODB_CONNSTRING='mongodb://container_mongodb:27017/todo_db'
    networks:
      - frontend
      - backend
  mongo:
    image: mongo
    restart: always
    container_name: container_mongodb
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
    networks:
      - backend
volumes:
  mongo-data:
    driver: local
  node_modules:
  web-root:
    driver: local
networks:
  backend:
    driver: bridge
  frontend:
My intuition tells me the issue lies in some configuration parameter I am not addressing in my docker-compose.yml file. Please help!
Your proxy config won't work with containers because of its use of localhost.
The Docker bridge network docs provide some insight why:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I'd suggest creating your own bridge network and communicating via container name or alias.
{
  "proxy": "http://container_server:3001"
}
Another option is to use http://host.docker.internal:3001 (note that host.docker.internal resolves out of the box on Docker Desktop; on a stock Linux engine it needs an extra_hosts entry).
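With the proxy pointing at the server container, the React code keeps using relative paths and the dev server forwards them; a minimal sketch (the /api/todos route is a hypothetical placeholder):
// The CRA dev server applies the package.json "proxy" setting and
// forwards this request to container_server:3001.
fetch('/api/todos', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'buy milk' }),
})
  .then((res) => res.json())
  .then((todo) => console.log('created', todo));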
I have a Docker Compose setup that contains a React app and several Django containers. They are on the same network, so when I make a curl request from the React container to one of the Django services using the service name, it works, but in the web app it doesn't work, and it says:
POST http://backend-account:8000/api/auth/login/ net::ERR_NAME_NOT_RESOLVED
This is my docker-compose file:
version: "3.9"
services:
db-account:
restart: always
container_name: ctr-db-account-service
image: mysql:8
environment:
- MYSQL_DATABASE=dtp_db
- MYSQL_USER=admin
- MYSQL_PASSWORD=ictf
- MYSQL_HOST=db
- MYSQL_PORT=3306
- MYSQL_ROOT_HOST=%
- MYSQL_ROOT_PASSWORD=root
volumes:
- account-data:/var/lib/mysql
networks:
- dtp-network
db-stream:
restart: always
container_name: ctr-db-stream-service
image: mysql:8
environment:
- MYSQL_DATABASE=dtp_db
- MYSQL_USER=admin
- MYSQL_PASSWORD=ictf
- MYSQL_HOST=db
- MYSQL_PORT=3306
- MYSQL_ROOT_HOST=%
- MYSQL_ROOT_PASSWORD=root
volumes:
- stream-data:/var/lib/mysql
networks:
- dtp-network
backend-account:
restart: always
container_name: ctr-account-service
command:
# bash -c "python check_db.py --service-name db --ip db --port 3306 &&
bash -c "sleep 20 &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
env_file:
- ./dtp-account-management-app/account_management/.env
build:
context: ./dtp-account-management-app/account_management/
dockerfile: Dockerfile
expose:
- 8000
ports:
- "8080:8000"
depends_on:
- db-account
links:
- db-account
networks:
- dtp-network
backend-stream:
restart: always
container_name: ctr-stream-service
command:
# bash -c "python check_db.py --service-name db --ip db --port 3306 &&
bash -c "sleep 20 &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:7000"
env_file:
- ./dtp-stream-management-app/stream_management/.env
build:
context: ./dtp-stream-management-app/stream_management/
dockerfile: Dockerfile
expose:
- 7000
ports:
- "7000:7000"
depends_on:
- db-stream
links:
- db-stream
networks:
- dtp-network
frontend:
restart: always
command: npm start
container_name: ctr-frontend-service
build:
context: ./dtp-frontend-app/
dockerfile: Dockerfile
ports:
- "3000:3000"
stdin_open: true
depends_on:
- backend-account
- backend-stream
links:
- backend-stream
- backend-account
networks:
- dtp-network
networks:
dtp-network:
driver: bridge
volumes:
account-data:
driver: local
stream-data:
driver: local
Additionally, when the error occurs I get nothing in the terminal (it's as if there is no communication at all), but running the same curl request inside the React container I get this response:
/react # curl -i -X GET --url http://backend-account:8000/api/auth/login/
HTTP/1.1 200 OK
Date: Mon, 13 Sep 2021 11:25:06 GMT
Server: WSGIServer/0.2 CPython/3.9.6
Content-Type: application/json
Vary: Accept, Origin
Allow: POST, OPTIONS
X-Frame-Options: DENY
Content-Length: 40
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Your web app (React) ultimately runs in the user's browser, not in the container. At that point they are not on the same Docker network, and using the service name won't work like it does when you curl from within the container.
So you need to expose your service on your server so that users can reach it from their own machines at home using their browsers.
Then, in your frontend code, use the server IP address or the domain name you have set up to hit the Django backend.
Since you have already published the ports of your backends (8080:8000 and 7000:7000), you can hit those services on your server IP if your firewall permits.
For example, use one of these in your frontend code:
http://<your-server-ip>:8080
# or
http://<your-server-ip>:7000
That said, I would advise purchasing a domain and setting a DNS record pointing to your server. Then you could also serve a proper SSL certificate, encrypting the traffic.
On a side note, if you only wanted internal communication between services within Docker, you wouldn't need to publish the ports at all; publishing them unnecessarily can lead to security issues. Your databases, for example, don't have published ports, and the backends can still connect. But this is more of a side fact than part of the actual answer; to solve your problem, do what I described above.
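A minimal sketch of that in the React code (REACT_APP_API_URL is a hypothetical build-time variable, and the fallback address, credentials, and route handling are placeholders apart from the login path quoted in the question):
// Base URL for the account backend, injected at build time, e.g.
// REACT_APP_API_URL=http://203.0.113.10:8080
const API_BASE = process.env.REACT_APP_API_URL || 'http://<your-server-ip>:8080';

fetch(`${API_BASE}/api/auth/login/`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ username: 'alice', password: 'secret' }), // placeholder credentials
}).then((res) => console.log(res.status));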
So I'm running a web app which consists of three services with docker-compose:
A MongoDB database container.
A Node.js backend.
An nginx container with a static build folder which serves a React app.
Locally it runs fine and I'm very happy, but when trying to deploy to a VPS I'm facing an issue.
I've set the VPS's nginx to reverse proxy to port 8000, which serves the React app. That part runs as expected, but I cannot send requests to the backend: when I'm logged in to the VPS I can curl it and it responds, yet when the web app sends requests, they hang.
My docker-compose:
version: '3.7'
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: server
    container_name: node-server
    command: /usr/src/app/node_modules/.bin/nodemon server.js
    depends_on:
      - mongo
    env_file: ./server/.env
    ports:
      - '8080:4000'
    environment:
      - NODE_ENV=production
    networks:
      - app-network
  mongo:
    image: mongo:4.2.7-bionic
    container_name: database
    hostname: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=...
      - MONGO_INITDB_ROOT_PASSWORD=...
      - MONGO_INITDB_DATABASE=admin
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    networks:
      - app-network
  client:
    build:
      context: ./client
      dockerfile: prod.Dockerfile
    image: client-build
    container_name: react-client-build
    env_file: ./client/.env
    depends_on:
      - server
    ports:
      - '8000:80'
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  data-volume:
  node_modules:
  web-root:
    driver: local
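This looks like the same browser-versus-Docker-network situation described in the answer above: the React app runs in the visitor's browser, so it has to target the VPS's public address and the published backend port (8080, mapped to container port 4000 here) rather than localhost. A hypothetical client-side sketch:
// Point the client at the VPS's public address and the published
// backend port; the address and endpoint below are placeholders.
const API_BASE = process.env.REACT_APP_API_URL || 'http://<your-vps-ip>:8080';

fetch(`${API_BASE}/api/health`) // hypothetical endpoint
  .then((res) => console.log(res.status));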
I've developed an Angular app that communicates with a uWSGI Flask API through nginx. Currently I have three containers: Angular (web_admin), API (api_admin), and nginx (nginx).
When I'm running it on my development machine, the communication works fine. The Angular requests go through the URL http://localhost:5000, the API responds well, and everything works.
But when I deployed it to my production server, I noticed that the application isn't working, because port 5000 is not open in my firewall.
My question is kind of simple: how do I make the Angular container call the API container through the internal network, instead of calling it externally?
version: '2'
services:
  data:
    build: data
  neo4j:
    image: neo4j:3.0
    networks:
      - back
    volumes_from:
      - data
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    volumes:
      - /var/diariooficial/neo4j/data:/data
  web_admin:
    build: frontend/web
    networks:
      - front
      - back
    ports:
      - "8001:8001"
    depends_on:
      - api_admin
    links:
      - "api_admin:api_admin"
    volumes:
      - /var/diariooficial/upload/diario_oficial/:/var/diariooficial/upload/diario_oficial/
  api_admin:
    build: backend/api
    volumes_from:
      - data
    networks:
      - back
    ports:
      - "5000:5000"
    depends_on:
      - neo4j
      - neo4jtest
    volumes:
      - /var/diariooficial/upload/diario_oficial/:/var/diariooficial/upload/diario_oficial/
  nginx:
    build: nginx
    volumes_from:
      - data
    networks:
      - back
      - front
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/diariooficial/log/nginx:/var/log/nginx
    depends_on:
      - api_admin
      - web_admin
networks:
  front:
  back:
Links create DNS names on the network for the services. You should have the web_admin service talk to api_admin:5000 instead of localhost:5000. The api_admin DNS name will resolve to the IP address of one of the api_admin service's containers.
See https://docs.docker.com/compose/networking/ for an explanation, specifically:
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
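Following that pattern, a hypothetical sketch of a request made from inside the Docker network to the API by service name (the /status endpoint is a placeholder; a purely browser-side app would instead need the public-address approach discussed in an answer above):
// From inside the Docker network, the Flask API is reachable by its
// service name on its container port.
const http = require('http');

http.get('http://api_admin:5000/status', (res) => {
  console.log('api_admin responded with status', res.statusCode);
});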
My application has three containers: db, frontend-web (React), and backend-api.
How can I get my backend-api address in frontend-web?
Here is my compose file:
version: '2'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
  web:
    build: .
    stdin_open: true
    volumes:
      - .:/usr/src/app
    ports:
      - "3000:3000"
    environment:
      - API_URL=http://api:8080/
    links:
      - api
    depends_on:
      - api
  api:
    build: ./api
    stdin_open: true
    volumes:
      - ./api:/usr/src/app
    ports:
      - "8080:3000"
    links:
      - db
    depends_on:
      - db
I can't reach the API either via the api hostname or via process.env.API_URL.
Add the container name to the service description as follows:
api:
  build: ./api
  container_name: api
  stdin_open: true
  volumes:
    - ./api:/usr/src/app
  ports:
    - "8080:3000"
  links:
    - db
  depends_on:
    - db
You can then use the container name as a host name to connect to. Note that container-to-container traffic uses the container's internal port (3000 here, given the "8080:3000" mapping), not the published host port. See https://docs.docker.com/compose/compose-file/#/containername
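A hypothetical usage sketch, assuming the call is made from the web container inside the Docker network (the /items path is a placeholder):
// Resolve the api container by name and use its internal port (3000);
// the "8080:3000" mapping only applies on the host.
fetch('http://api:3000/items')
  .then((res) => res.json())
  .then((items) => console.log(items));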
I am assuming that the server in the web container just serves the static HTML and does not act as a proxy for the api container's server.
So, since you mapped the ports to the host machine, you can use the host machine's name/IP to reach the api server.
If your host machine's name is app.myserver.dev, you can use the config below for your API_URL env var and Docker will do the work for you:
web:
  build: .
  stdin_open: true
  volumes:
    - .:/usr/src/app
  ports:
    - "3000:3000"
  environment:
    - API_URL=http://app.myserver.dev:8080/
  links:
    - api
  depends_on:
    - api
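The React code would then read the variable roughly like this (a sketch; note that if the client is Create React App, only variables prefixed with REACT_APP_ are exposed to browser code, which may be why process.env.API_URL comes back undefined):
// API_URL is injected via the compose "environment" section above.
const apiUrl = process.env.API_URL; // e.g. 'http://app.myserver.dev:8080/'

fetch(`${apiUrl}items`) // hypothetical endpoint
  .then((res) => res.json())
  .then((data) => console.log(data));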