Why can I not POST from frontend to backend after containerizing my MERN (React+Node/Express+MongoDB) application?

I'm new to Docker and containers in general. I'm trying to containerize a simple MERN-based todo list application. Locally on my PC, I can successfully send HTTP POST requests from my React frontend to my Node.js/Express backend and create a new todo item. I use the 'proxy' field in my client folder's package.json file, as shown below:
{
  "proxy": "http://localhost:3001"
}
React starts up on port 3000, my API server starts up on 3001, and with the proxy field defined, all is good locally.
My issue arises when I containerize the three services (React, the API server, and MongoDB). When I try to make the same fetch POST request, it fails with a console error. I'll provide my docker-compose file; perhaps it is useful for finding a solution:
version: '3.7'
services:
  client:
    depends_on:
      - server
    build:
      context: ./client
      dockerfile: Dockerfile
    image: jlcomp03/rajant-client
    container_name: container_client
    command: npm start
    volumes:
      - ./client/src/:/usr/app/src
      - ./client/public:/usr/app/public
      # - /usr/app/node_modules
    ports:
      - "3000:3000"
    networks:
      - frontend
    stdin_open: true
    tty: true
  server:
    depends_on:
      - mongo
    build:
      context: ./server
      dockerfile: Dockerfile
    image: jlcomp03/rajant-server
    container_name: container_server
    # command: /usr/src/app/node_modules/.bin/nodemon server.js
    volumes:
      - ./server/src:/usr/app/src
      # - /usr/src/app/node_modules
    ports:
      - "3001:3001"
    links:
      - mongo
    environment:
      - NODE_ENV=development
      - MONGODB_CONNSTRING='mongodb://container_mongodb:27017/todo_db'
    networks:
      - frontend
      - backend
  mongo:
    image: mongo
    restart: always
    container_name: container_mongodb
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
    networks:
      - backend
volumes:
  mongo-data:
    driver: local
  node_modules:
  web-root:
    driver: local
networks:
  backend:
    driver: bridge
  frontend:
My intuition tells me the issue lies in some configuration parameter I'm not addressing in my docker-compose.yml file. Please help!

Your proxy config won't work with containers because of its use of localhost.
The Docker bridge network docs provide some insight why:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I'd suggest creating your own bridge network and communicating via container name or alias.
{
  "proxy": "http://container_server:3001"
}
Another option is to use http://host.docker.internal:3001.
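Note that on Linux, host.docker.internal is not defined by default; with Docker 20.10+ you can map it yourself via extra_hosts. A minimal sketch against the client service from the compose file above:
client:
  extra_hosts:
    - "host.docker.internal:host-gateway"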

Related

Docker React Issue / Bug? (Docker building React images with stale code)

Basically I've set up a web app with this stack: db: MySQL, frontend: React.js, backend: FastAPI (Python).
It's served over SSL (the domain is proxied through Cloudflare).
NGINX is used for the service endpoints: api.domain.com for the API and domain.com for the frontend, plus some SSL key handling.
Even though this is, quote-unquote, "production", I'm still running a React development server for prototyping.
PROBLEM:
I'm running this project on a VPS. When I update the frontend on the VPS, I delete all Docker containers and images via the commands:
docker rm -vf $(docker ps -aq)    # delete all containers and volumes
docker rmi -f $(docker images -aq)  # delete all images
The changes still come out old. Caching in my browser is disabled as well, and I've tried multiple methods of clearing the cache; it's not that.
The webpack bundle.js file still has its old contents when served by React. Specifically, I edited some endpoint constants for production, and that constant is stale in bundle.js! Changes to the backend (Python files) work just fine and are picked up on docker-compose up -d or docker-compose up, but React is acting up.
I've tried:
docker-compose pull
docker-compose build --no-cache
docker-compose
I made sure all my frontend files were in fact saved.
No luck...
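For what it's worth, the one cache those commands don't clear is the BuildKit layer cache; a full clean rebuild would look something like this (standard Docker CLI commands, not from the original post):
docker-compose down -v --rmi all  # stop the stack, remove its volumes and images
docker builder prune -af          # clear the BuildKit layer cache
docker-compose build --no-cache
docker-compose up -d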
The API is running fine over HTTPS and React.js is also running fine, but the changes are just stale; it's very weird.
docker-compose.yml
version: "3.9"
services:
db:
image: mysql:${MYSQL_VERSION}
restart: always
environment:
- MYSQL_DATABASE=${MYSQL_DB}
- MYSQL_USER=${MYSQL_USERNAME}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
ports:
- "${MYSQL_PORT}:${MYSQL_PORT}"
expose:
- "${MYSQL_PORT}"
volumes:
- db:/var/lib/mysql
networks:
- mysql_network
backend:
container_name: fastapi-backend
build: ./backend/app
volumes:
- ./backend:/code
ports:
- "${FASTAPI_PORT}:${FASTAPI_PORT}"
env_file:
- .env
depends_on:
- db
networks:
- mysql_network
- backend
restart: always
frontend:
container_name: react-frontend
build: ./frontend/client
ports:
- "${REACT_PORT}:${REACT_PORT}"
depends_on:
- backend
networks:
- backend
restart: always
volumes:
db:
driver: local
networks:
backend:
driver: bridge
mysql_network:
driver: bridge
This was working for my last release; for some reason it just stopped updating. Docker is creating the images fine, just with stale code...
This is being run on Ubuntu 22.04 (a Debian-like Linux distribution).

Front and backend - api call inside docker compose

I have a frontend in React and a backend in Java. I would like to call the backend API from the frontend container, with the host indicated properly in docker compose.
docker-compose.yml
services:
  app:
    image: 'image1'
    container_name: app
    ports:
      - "8080:8080"
    volumes:
      - app:/var/lib/app/data
  client:
    image: 'image2'
    build:
      context: .
      args:
        REACT_APP_API_BASE_URL: http://app:8080
    container_name: client
    depends_on:
      - app
    ports:
      - "80:80"
    volumes:
      - client:/var/lib/client/data
The network is prepared. What should I put in REACT_APP_API_BASE_URL? http://app:8080 is not working. If I put http://localhost:8080 everything works fine, but on the prod environment localhost isn't correct.
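For context: REACT_APP_* values passed as build args are baked into the JavaScript bundle at build time, and the URL is then resolved by the user's browser, not by the client container, so the Docker service name app will never resolve. A minimal sketch of the idea (the localhost value assumes the API's port 8080 stays published on the host; in production it would be the site's public hostname):
client:
  build:
    context: .
    args:
      # resolved in the browser, so it must be reachable from outside the Docker network
      REACT_APP_API_BASE_URL: http://localhost:8080  # e.g. https://api.example.com in prod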

Connection between two docker containers in nextjs in getStaticProps function

I have two separate docker-compose files: one for running Next.js behind an nginx web server, and another for running Laravel behind another nginx:
services:
  frontendx:
    container_name: next_appx
    build:
      context: ./frontend
      dockerfile: Dockerfile
    restart: unless-stopped
    volumes:
      - ./frontend:/var/www/html/frontend
    networks:
      - app
  nginxy:
    container_name: nginxy
    image: nginx:1.19-alpine
    restart: unless-stopped
    ports:
      - '8080:80'
    volumes:
      - ./frontend:/var/www/html/frontend
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - frontendx
    networks:
      - app
and:
services:
  backendx:
    container_name: laravelx
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - '8000:8000'
    volumes:
      - ./:/var/www
      - enlive-vendor:/var/www/vendor
      - .//docker-xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
    depends_on:
      - dbx
    networks:
      - appx
  webserverx:
    image: nginx:alpine
    container_name: webserverx
    restart: unless-stopped
    tty: true
    ports:
      - "8090:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - appx
I can connect to the backend container through axios with an address like http://localhost:8090/api/my/api/address,
but when I try to get data through getStaticProps I get an ECONNREFUSED connection error:
const res = await fetch(`http://localhost:8090/api/address`)
I tried replacing localhost with the container IP address, e.g. 172.20.0.2,
but I got a 504 Gateway error.
That's expected.
With axios, you're calling from the browser, which makes the request from your host machine's network.
But getStaticProps, being an SSR function, runs inside your Next.js container. Therefore it must be able to find your backend on your app network.
As things stand, the frontend and backend apps are in different isolated networks, so you can't connect them like this. But if you put all your services in the same network, let's say your app network (instead of appx), you can easily use Docker DNS:
const res = await fetch(`http://webserverx/api/address`)
Docker knows how to resolve webserverx to your webserverx container.
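When the two stacks live in separate compose files, as here, one way to get them onto the same network is to declare it as external in both files and create it once by hand. A minimal sketch (the network name app is taken from the first compose file above):
# run once on the host: docker network create app
services:
  webserverx:
    networks:
      - app  # attach the Laravel-side nginx to the shared network
networks:
  app:
    external: true  # use the pre-created network instead of a project-scoped one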

Docker + React app: how can the frontend use files saved in the API (server) side folder?

So, I'm stuck on an issue where files are stored on the server but I can't display them in the frontend.
My project is:
React + Redux using Docker
The React app is complete, i.e. there's an API folder for the backend (react/redux), a CLIENT folder for the frontend (react/libraries), and MongoDB as the DB.
Docker Compose builds these 3 parts, API, CLIENT and MONGO, as a single stack.
So, in the frontend, the user is able to select an image as an avatar; this image is then sent through the layers and saved in a specific folder (NOT build/public etc.) inside the API docker image. It's possible to remove/delete and re-select it. Everything works fine!
The issue is displaying this image in the frontend. The avatar component uses an image src to display it, but I can't find a valid URL that lets the frontend see that image file saved on the API/server side.
Since it's inside a container, I tried every possibility I could find in the Docker documentation... I think the solution lies in the networks docker-compose option, but even so I couldn't make it work.
Docker Compose File:
version: '3.8'
services:
  client:
    build: ./client
    stdin_open: true
    image: my-client
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - ./client:/client
      - /client/node_modules
    depends_on:
      - api
    networks:
      mynetwork:
        ipv4_address: 172.19.0.9
  api:
    build: ./api
    image: my-api
    restart: always
    ports:
      - "3003:3003"
    volumes:
      - ./api:/api
      - logs:/api/logs
      - /api/node_modules
    depends_on:
      - mongo
    networks:
      mynetwork:
        ipv4_address: 172.19.0.10
  mongo:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
    networks:
      - mynetwork
volumes:
  mongo_data:
  logs:
networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: "172.19.0.0/24"
To summarize: there's a folder on the API side with images/files, and I want to reference them with something like
<img src="mynetwork:3003/imagefolder/imagefile.png">
I can't believe I have to use this other solution...Another Stackoverflow Reply
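As with the proxy questions above, the img src is resolved by the browser on the host machine, not inside mynetwork, so it needs a host-reachable address. Since the compose file publishes port 3003, something like this should work in development (a sketch, assuming the API serves that folder statically, e.g. via Express's express.static):
<img src="http://localhost:3003/imagefolder/imagefile.png" />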

Multiple Docker container access to host database in docker compose

I have been researching how to connect multiple docker containers in the same compose file to a database (MySQL/MariaDB) on the local host. Currently, the database is containerized for development but production requires a separate database. Eventually, the database will be deployed to AWS or Azure.
There are lots of similar questions on SO, but none that seem to address this particular situation.
Given the existing docker-compose.yml
version: '3.1'
services:
  db:
    build:
    image: mariadb:10.3
    volumes:
      - "~/data/lib/mysql:/var/lib/mysql:Z"
  api:
    image: t-api:latest
    depends_on:
      - db
  web:
    image: t-web:latest
  scan:
    image: t-scan:latest
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: t-proxy
    depends_on:
      - web
    ports:
      - 80:80
All these services are reverse proxied behind nginx, with both the api and scan services requiring access to the database. There are other services requiring database access, not shown for simplicity.
The production compose file would be:
version: '3.1'
api:
  image: t-api:latest
  depends_on:
    - db
web:
  image: t-web:latest
scan:
  image: t-scan:latest
proxy:
  build:
    context: .
    dockerfile: nginx.Dockerfile
  image: t-proxy
  depends_on:
    - web
  ports:
    - 80:80
If there were a single container requiring database access, I could just open up port 3306:3306, but that won't work for multiple containers.
Splitting up the containers breaks the reverse proxy and adds complexity to deployment and management. I've tried extra_hosts:
extra_hosts:
  - myhost: xx.xx.xx.xx
but this generates EAI_AGAIN DNS errors, which is strange because you can ping the host from inside the containers. I realize this may not be possible.
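One option that avoids hard-coding the host IP (Docker 20.10 and later): map host.docker.internal to the host's gateway in each service that needs the database. A sketch using the service names from the compose file above:
services:
  api:
    extra_hosts:
      - "host.docker.internal:host-gateway"
  scan:
    extra_hosts:
      - "host.docker.internal:host-gateway"
# the containers can then reach the host's MariaDB at host.docker.internal:3306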
