Proxying API Requests in a Docker Container Running a React App

I'm running a simple React app in a Docker container. During development I'm using the proxy key in package.json to specify my backend API URL: "proxy": "http://localhost:5000"
Everything works fine when I run npm start locally. However, when I run npm start inside a Docker container, it points to "http://localhost:3000". I tried setting the proxy manually as well, as demonstrated by my Dockerfile below, but nothing seems to work:
FROM node:13-alpine
WORKDIR /app
# install dependencies
COPY package*.json ./
RUN npm install --silent
# copy source code
COPY src/ ./src/
COPY public/ ./public/
RUN npm config set proxy http://localhost:5000 # set manually
CMD ["npm", "start"]
Am I doing something wrong or is this not possible?

You need to point the proxy at your backend service instead of localhost when running the app in Docker. Take the following docker-compose file and its services as an example. We have the frontend running on port 3000 and the backend running on port 5000. So, replace localhost with the service name: "proxy": "http://backend:5000"
version: '3'
services:
  backend:
    build: ./backend
    ports:
      - 5000:5000
  frontend:
    build: ./frontend
    ports:
      - 3000:3000
    links:
      - backend
    command: npm start
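The corresponding entry in the frontend's package.json would then be (backend is the service name from the compose file above):
// in the frontend's package.json
"proxy": "http://backend:5000"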

"proxy": "http://localhost:5000" works perfectly fine in development, because the webpack DevServer handles the proxying by itself. Once you deploy your React application, it stops working. I experienced the same problem when trying to make my containerized React application talk to the containerized API. I was using Nginx as a web server to serve the React application, and I followed this guide to integrate Nginx with a Docker container. This is what the nginx.conf initially looked like:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
but after a few tweaks here and there, I came up with this configuration (I will explain what api stands for in a bit):
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /api {
        resolver 127.0.0.11;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://api:8000;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
What has changed? I added a root location for the API endpoints, since all of them share the common prefix /api. The proxy_pass directive lets us forward every request arriving at /api to our backend, which is exposed on port 8000. api is just the name of the container as defined in the docker-compose.yaml.
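With this in place, the client simply calls the API with relative paths and lets Nginx do the routing. A minimal sketch (the /api/users endpoint is an invented example, not from the original post):
// somewhere in the React client: the /api prefix matches the Nginx location above,
// "/api/users" itself is a hypothetical endpoint for illustration
fetch("/api/users")
  .then((res) => res.json())
  .then((users) => console.log(users));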
For reference, this is my React app's Dockerfile:
# build environment
FROM node:15.2.1 as build
WORKDIR /app
COPY ./client ./
RUN yarn
RUN yarn build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
and the most important file (docker-compose.yaml):
version: "3.8"
services:
client:
build:
context: .
dockerfile: client/Dockerfile
container_name: $CLIENT_CONTAINER_NAME
restart: unless-stopped
env_file: .env
ports:
- "1337:80"
networks:
- my-network
links:
- api
api:
build:
context: .
dockerfile: server/Dockerfile
container_name: $API_CONTAINER_NAME
restart: unless-stopped
env_file: .env
ports:
- "8000:8000"
networks:
- my-network
links:
- mongo
mongo:
image: mongo
container_name: $DB_CONTAINER_NAME
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_INITDB_ROOT_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_INITDB_ROOT_PASSWORD
- DB_NAME=$DB_NAME
- MONGO_HOSTNAME=$MONGO_HOSTNAME
volumes:
- ~/data/db:/data/db
ports:
- 27017:27017
networks:
- my-network
networks:
my-network:
driver: bridge

If you are using Docker, then in your client's package.json, instead of "proxy": "http://localhost:5000", you need to change it to "proxy": "http://<container_name>:5000"
For example, since I named my Express container "express-server", I add this:
// in client-side's package.json
"proxy": "http://express-server:5000"

You're now running your React app inside a Docker container, so "localhost" is no longer your local machine but the container itself. You need to proxy to your backend's address instead. Are you running the API in another container?

Related

docker-compose - CORS policy error when connecting to back-end from browser

I'm trying to set up a few Docker containers to deploy a web application. I have a PostgreSQL container, a Spring back-end container and a React front-end container. The interaction between the back-end and database works fine, but I'm struggling to set up the interaction between the front-end and back-end. I want to use docker-compose and after setting it up, I want to test this set-up using the browser on my computer (the host).
I followed some tutorials and looked online, but after all the changes I made, I still get a CORS policy error when using my browser on localhost:80.
My docker-compose.yml looks like this
version: '3'
services:
  backend:
    container_name: backend
    image: "backend:latest"
    build:
      context: ./backend
    ports:
      - 5002:8080
  frontend:
    container_name: frontend
    image: "frontend:latest"
    build:
      context: ./frontend
    ports:
      - 80:80
networks:
  default:
    external:
      name: my-network
The Dockerfile in the front-end directory looks like this.
# build environment
FROM node:13.12.0-alpine as build
WORKDIR .
ENV PATH node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
COPY . .
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build ./build /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
After some research, I added a CORS configuration to the nginx.conf, but this doesn't seem to solve the problem.
server {
    listen 80;
    server_name frontend;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    #
    # CORS config for nginx
    #
    location /api {
        #
        # requests made to localhost/api are allowed to use CORS
        #
        add_header 'Access-Control-Allow-Origin' '*';
        #
        # requests made to localhost/api are forwarded to the backend:8080 service
        #
        proxy_pass http://backend:8080;
    }
}
I don't really know where to look for the problem. To make API calls, I use axios and it is set up like this.
const instance = axios.create({
  baseURL: "http://localhost:5002/api"
});
Any pointers would be greatly appreciated!

How to get NGINX proxy_pass to multiple ReactJS apps created using create-react-app?

I am using create-react-app to build a bunch of React apps for various modules in my application, for example: Users, Leads, Campaigns, etc.
I have containerized all the apps using Docker and NGINX.
I would like to have one nginx gateway which will redirect to my apps based on the path.
For example:
https://www.example.com/users --> users app
https://www.example.com/products --> products app
I tried setting this up based on various articles I came across on the internet, but with no success. I basically have two problems:
1. Referencing static files - create-react-app usually injects a script tag such as <script src="/static/....."></script> into the index.html file, which prevents the HTML from loading the scripts (as it looks for them in the gateway's root directory). I was able to fix that by setting the build script to use a PUBLIC_URL variable set to /users or /products as required (a sketch of this follows below).
2. After I set PUBLIC_URL and built the container, NGINX for some reason gives me a 301 Moved Permanently response and doesn't proxy properly.
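The PUBLIC_URL fix from the first point was wired up roughly like this (a sketch of the approach rather than the actual file; it assumes react-scripts and a POSIX shell):
// in the users app's package.json
"scripts": {
  "build": "PUBLIC_URL=/users react-scripts build"
}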
I am not sure what I am missing. I'm sure this is a very common use case.
For each of my react apps, I have created a Dockerfile as follows
FROM node:16-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY . ./
RUN npm install
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And an nginx.conf as follows:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        add_header Access-Control-Allow-Origin *;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    client_max_body_size 100M;
}
Here is the nginx.conf for my gateway:
server {
    listen 80;

    location /products/ {
        proxy_pass http://products/;
    }

    location /users/ {
        proxy_pass http://users/;
    }

    location /sales/ {
        proxy_pass http://sales/;
    }
}
And finally, my docker-compose.yml
version: "3.9"
networks:
ylib:
services:
nginx:
image: nginx
container_name: nginx
ports:
- 8888:80
networks:
- ylib
products:
image: products
container_name: products
ports:
- 8881:80
networks:
- ylib
sales:
image: sales
container_name: sales
ports:
- 8882:80
networks:
- ylib
orders:
image: orders
container_name: orders
ports:
- 8883:80
networks:
- ylib
Please help me out.
Thanks

ReactJS in Docker - building React and serving it with nginx

I have a problem: I am building my React app in a build step in Docker and then using nginx to serve it. However, I cannot get it to connect to my API running in another container.
The relevant parts of my docker-compose.yml are here:
api:
  build:
    dockerfile: Dockerfile
    context: "./api"
  depends_on:
    - mysql_db
  volumes:
    - /app/node_modules
    - ./api:/app
  environment:
    <<: *common-variables
    MYSQL_HOST_IP: mysql_db
  networks:
    - my-network
frontend:
  depends_on:
    - api
  stdin_open: true
  environment:
    API_URL: http://api:3030/v1
  build:
    dockerfile: Dockerfile
    context: ./frontend
  ports:
    - "8090:80"
  volumes:
    - /app/node_modules
    - ./frontend:/app
  networks:
    - my-network
networks:
  my-network:
    name: my-network
The relevant bit from my Dockerfile is
COPY package.json ./
COPY ./ ./
RUN npm i && npm run build
# production environment
FROM nginx:stable-alpine
WORKDIR /app
COPY --from=build /app/build /usr/share/nginx/html
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And finally, in my default.conf for nginx I have
upstream api {
    server api:3030;
}

server {
    listen 80;
    listen [::]:80;

    root /usr/share/nginx/html;
    index index.html index.htm index.nginx-debian.html;
    server_name xxxx.com www.xxxx.com;

    location / {
        try_files $uri $uri/ =404;
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
The problem I am having is that it won't resolve to the api.
Please can anyone help me get it to resolve to the IP/URL of the api running in its container?
Thanks in advance
The React application runs in a browser, and it can't use any of the Docker networking features. In particular, since it runs in a browser, it can't resolve the api host name, which only exists in containers running on the same Compose network, and your browser isn't in a container.
You already have Nginx set up to proxy /api to the back-end application, and I'd just use that path. You can probably set something like API_URL: /api/v1 with no host part. Your browser will interpret this as a path-only relative URL and use the host and port the application is otherwise running on.
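In client-side code that could look roughly like this (a sketch on my part, assuming axios and hard-coding the relative prefix rather than reading API_URL from the environment):
import axios from "axios";

// no scheme or host: the browser resolves this against the page's own origin,
// and the Nginx /api location forwards it to the api container
const client = axios.create({
  baseURL: "/api/v1"
});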

Docker - served react app, asset-manifest.json with incorrect filenames

I'm new to web development, and I ran into a strange error.
I have a React/Django app which I'm trying to productionize with nginx and docker.
Django runs on a postgres db, and nginx just reroutes port 80 to my react and django ports.
When I locally deploy the application using
npm run build
serve -s build
everything works as desired.
However, doing the same through Docker doesn't.
I have a Dockerfile building the react application:
FROM node:12.18.3-alpine3.9 as builder
WORKDIR /usr/src/app
COPY ./react-app/package.json .
RUN apk add --no-cache --virtual .gyp \
python \
make \
g++ \
&& npm install \
&& apk del .gyp
COPY ./react-app .
RUN npm run build
FROM node:12.18.3-alpine3.9
WORKDIR /usr/src/app
RUN npm install -g serve
COPY --from=builder /usr/src/app/build ./build
Now when I use
docker-compose build
docker-compose up
I see that my Django, React, Postgres and nginx containers are all running, with nginx visible at port 80. When I open localhost in my browser, I see that nginx looks for the static React files in the right directory. However, the files it is looking for have a different hash in their names than the static files that are actually there (the static files in the nginx and React containers are identical). So somehow my asset-manifest.json contains the wrong filenames. Any idea what causes this or how I can solve it?
Edit: Added docker-compose.yml:
version: "3.7"
services:
django:
build:
context: ./backend
dockerfile: Dockerfile
volumes:
- django_static_volume:/usr/src/app/static
expose:
- 8000
env_file:
- ./backend/.env
command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
depends_on:
- db
db:
image: postgres:12.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
env_file:
- ./postgres/.env
react:
build:
context: ./frontend
dockerfile: Dockerfile
volumes:
- react_static_volume:/usr/src/app/build/static
expose:
- 5000
env_file:
- .env
command: serve -s build
depends_on:
- django
nginx:
restart: always
build: ./nginx
volumes:
- django_static_volume:/usr/src/app/django_files/static
- react_static_volume:/usr/src/app/react_files/static
ports:
- 80:80
depends_on:
- react
volumes:
postgres_data:
django_static_volume:
react_static_volume:
Do you need to run React in a separate container? Is there any reason for doing this? (There might be.)
In my approach, I build the React static files in the nginx Dockerfile and copy them to /usr/share/nginx/html. Then nginx serves them at location /.
nginx Dockerfile
# The first stage
# Build React static files
FROM node:13.12.0-alpine as build
WORKDIR /app/frontend
COPY ./frontend/package.json ./
COPY ./frontend/package-lock.json ./
RUN npm ci --silent
COPY ./frontend/ ./
RUN npm run build
# The second stage
# Copy React static files and start nginx
FROM nginx:stable-alpine
COPY --from=build /app/frontend/build /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
nginx configuration file
server {
    listen 80;
    server_name _;
    server_tokens off;
    client_max_body_size 20M;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /api {
        try_files $uri @proxy_api;
    }

    location /admin {
        try_files $uri @proxy_api;
    }

    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://backend:8000;
    }

    location /django_static/ {
        autoindex on;
        alias /app/backend/server/django_static/;
    }
}
Docker-compose
version: '2'
services:
  nginx:
    restart: unless-stopped
    build:
      context: .
      dockerfile: ./docker/nginx/Dockerfile
    ports:
      - 80:80
    volumes:
      - static_volume:/app/backend/server/django_static
      - ./docker/nginx/development:/etc/nginx/conf.d
    depends_on:
      - backend
  backend:
    restart: unless-stopped
    build:
      context: .
      dockerfile: ./docker/backend/Dockerfile
    entrypoint: /app/docker/backend/wsgi-entrypoint.sh
    volumes:
      - static_volume:/app/backend/server/django_static
    expose:
      - 8000
volumes:
  static_volume: {}
Please check my article Docker-Compose for Django and React with Nginx reverse-proxy and Let's Encrypt certificate for more details. There is also an example of how to issue a Let's Encrypt certificate and renew it in docker-compose. If you need more help, please let me know.

Configure Nginx for React and Flask with Docker-Compose

I am trying to configure multiple Docker containers for a website with Nginx.
I have docker-compose working to spin up each container, but I'm having trouble getting the React app to hit the Gunicorn WSGI server for the Flask APIs when Nginx is in place.
Any idea why this might happen? It works fine without Nginx in the picture. Do I need an Nginx conf for the Flask app as well, or is it a case of routing requests from Nginx to the Gunicorn WSGI server?
React frontend (container)
# build environment
FROM node:12.2.0-alpine as build
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
COPY . /usr/src/app
RUN npm run build
# production environment
FROM nginx:1.16.0-alpine
COPY --from=build /usr/src/app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Nginx.conf
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Docker-compose
version: '3.7'
services:
  middleware:
    build:
      context: ./middleware
      dockerfile: Dockerfile.prod
    command: gunicorn --bind 0.0.0.0:5000 main:app
    ports:
      - 5000:5000
    env_file:
      - ./.env.prod
  frontend:
    container_name: frontend
    build:
      context: ./frontend/app
      dockerfile: Dockerfile.prod
    ports:
      - '80:80'
Yes, you need to proxy the traffic in Nginx to the WSGI app, something like
server {
    listen 80;
    server_name your_domain www.your_domain;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/sammy/myproject/myproject.sock;
    }
}
read more here
Update
In this particular case, you will need to proxy to Gunicorn, which runs in a separate container visible under the name middleware.
Because of that, the uwsgi_pass directive should refer to that container:
server {
    listen 80;
    server_name your_domain www.your_domain;

    location / {
        include uwsgi_params;
        uwsgi_pass http://middleware:5000;
    }
}
Please mind that if you're using Gunicorn, it recommends using proxy_pass, not uwsgi_pass.
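A proxy_pass variant of the same location might look roughly like this (my sketch, reusing the middleware service name and port from the compose file above, not something from the original answer):
server {
    listen 80;
    server_name your_domain www.your_domain;

    location / {
        # Gunicorn speaks plain HTTP, so forward with proxy_pass instead of uwsgi_pass
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://middleware:5000;
    }
}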
