Can't set up nginx proxy to docker container with react - reactjs

I'm trying to build a web application based on Docker containers, using Symfony and React. The problem is that my nginx container does not proxy to my React container running in development mode. Requests to the backend via /api/... work fine, but when I try to access the frontend at domain.com, for example, I get a 502 error.
My nginx configuration:
upstream frontend {
    server frontend:8080;
}

server {
    set $APP_ENV "dev";
    set $APP_DEBUG "1";

    listen 80;
    listen [::]:80 default_server;
    server_name store.com;
    root /var/www/store/public;

    location /api {
        try_files $uri /index.php$is_args$args;
    }

    location /oauth {
        try_files $uri /index.php$is_args$args;
    }

    location /_wdt {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location /_profiler {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    # DEV
    # This rule should only be placed on your development environment
    # In production, don't include this and don't deploy app_dev.php or config.php
    location ~ ^/(index)\.php(/|$) {
        fastcgi_pass php:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # When you are using symlinks to link the document root to the
        # current version of your application, you should pass the real
        # application path instead of the path to the symlink to PHP-FPM.
        # Otherwise, PHP's OPcache may not properly detect changes to
        # your PHP files (see https://github.com/zendtech/ZendOptimizerPlus/issues/126
        # for more information).
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # return 404 for all other php files not matching the front controller
    # this prevents access to other php files you don't want to be accessible.
    location ~ \.php$ {
        return 404;
    }

    location / {
        proxy_pass http://frontend/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    error_log /var/log/nginx/project_error.log;
    access_log /var/log/nginx/project_access.log;
}
docker-compose:
version: '3'
services:
  php:
    build: php
    working_dir: /var/www/store
    links:
      - mysql
    volumes:
      - ../backend:/var/www/store
      - ./php/php.ini:/usr/local/etc/php/php.ini:ro
    networks:
      - backend
      - frontend
    environment:
      XDEBUG_CONFIG: remote_host=192.168.31.32
  nginx:
    image: nginx
    links:
      - php
      - frontend
    ports:
      - "80:80"
      - "443:443"
    networks:
      - backend
      - frontend
    volumes:
      - ../backend:/var/www/store
      - ../frontend:/var/www/app
      - ./nginx/vhosts/dev/default.conf:/etc/nginx/conf.d/default.conf:ro
  mysql:
    restart: always
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    networks:
      - backend
    volumes:
      - mysql-data:/var/lib/mysql
    ports:
      - "3306:3306"
  frontend:
    image: node:latest
    user: node
    command: bash -c "npm install && npm start"
    working_dir: /home/node/app
    networks:
      - frontend
    volumes:
      - ../frontend:/home/node/app

networks:
  frontend:
  backend:

volumes:
  mysql-data:

The problem was not in the nginx or Docker configuration; it was in the configuration of webpack dev server. Resolved by starting the dev server with "start": "webpack-dev-server --host 0.0.0.0 --inline --content-base", plus some additional config:
devServer: {
    disableHostCheck: true,
    historyApiFallback: true
}
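Note for anyone on webpack-dev-server 4 or newer: disableHostCheck was removed there. As far as I know the equivalent settings are the following (a sketch; verify against the version you actually have installed):

```javascript
// webpack.config.js — webpack-dev-server v4+ equivalent (sketch, not from the original answer)
devServer: {
    host: '0.0.0.0',        // bind on all interfaces so the nginx container can reach it
    allowedHosts: 'all',    // replaces disableHostCheck: true
    historyApiFallback: true
}
```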

Related

React project in a docker container returns net::ERR_CONNECTION_TIMED_OUT

I have a React project which is Dockerized along with an ASP.NET Core 6.0 app and a SQL Server database. The production YAML file is as below:
version: '3.9'
services:
  # UI container spec. Note that 'ui' is the name of the container internally (also 'container_name')
  ui:
    container_name: myapp-ui-prod
    image: myapp-ui-prod
    env_file: ./UI/.env
    build:
      context: ./UI
      dockerfile: DockerFile_UI.prod
    ports:
      - 1337:80
    networks:
      - psnetwork
    links:
      - api
  # Database container spec.
  sql:
    container_name: myapp-sql
    image: myapp-sql
    environment:
      ACCEPT_EULA: 'Y'
      SA_PASSWORD: 'Pa55w0rd'
    build:
      context: ./DockerDB
      dockerfile: DockerFile_SQL
    ports:
      - 1633:1433 # Map 1433 inside the container to 1633 on the host to avoid a port conflict with a local install
    networks:
      - psnetwork
  # API container spec.
  api:
    container_name: myapp-api
    image: myapp-api
    build:
      context: ./Api
      dockerfile: DockerFile_API
    environment:
      ASPNETCORE_ENVIRONMENT: Development
      ASPNETCORE_URLS: http://+:5555
    ports:
      - "5555:5555"
    networks:
      - psnetwork
    links:
      - sql

networks:
  psnetwork:
    driver: bridge
The React project can ping the api container successfully, but when I call the API's GraphQL endpoint over HTTP it gives a net::ERR_CONNECTION_TIMED_OUT error. The nginx configuration file is as below:
upstream api_backend {
    server api:5555;
    keepalive 8;
}

server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # server_name example.com;
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://api/api;
        proxy_redirect off;
    }
}
# server {
# }
And the ApolloClient configuration in React app is:
const client = new ApolloClient({
    uri: process.env.REACT_APP_BASE_URL + '/api/graphql',
    cache: new InMemoryCache()
});
The .env file contents are:
REACT_APP_BASE_URL="http://api:5555"
The project works when I use the localhost:5555 address.
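A likely explanation (my reading, not stated in the question): the GraphQL request is made by the browser, and the browser cannot resolve the docker-internal hostname api; only other containers on psnetwork can, which is exactly why localhost:5555 works from the host. Using a relative URI makes the browser call the page's own origin, so the nginx /api location can do the proxying. A minimal sketch, where apiUri is a hypothetical helper (not from the post):

```javascript
// Sketch: build the GraphQL endpoint relative to the page's origin instead of
// a docker-internal hostname. apiUri is a hypothetical helper for illustration.
function apiUri(base) {
  // An empty/unset base yields a relative URI, so the browser hits nginx,
  // which proxies /api to the api container.
  return (base || '') + '/api/graphql';
}

// With REACT_APP_BASE_URL unset, the browser requests /api/graphql on the same origin.
console.log(apiUri(process.env.REACT_APP_BASE_URL));
```

The same idea applies directly in the ApolloClient constructor: leave REACT_APP_BASE_URL empty in production so the uri becomes '/api/graphql'.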

How to add Reactjs code to django app on docker-compose with nginx-proxy and acme-companion

I am trying to set up a complete Django/React web app via docker-compose on AWS. I went through a tutorial to create a Django backend with a database and SSL via nginx-proxy and the letsencrypt acme-companion.
Everything works so far, but I am struggling to add the React code as the frontend. I created a frontend folder with the React code and a Dockerfile to create the static files:
# Dockerfile frontend
FROM node:15.13-alpine as build
WORKDIR /frontend
# add `/frontend/node_modules/.bin` to $PATH
ENV PATH /frontend/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm ci --silent
COPY . ./
RUN npm run build
# The second stage
# Copy React static files
FROM nginx:stable-alpine
COPY --from=build /frontend/build /usr/share/nginx/html
I tried to change the default file in nginx/vhost.d/default to serve the static frontend files by default and the Django backend app via /api:
# nginx/vhost.d/default
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /api {
        try_files $uri @proxy_api;
    }

    location /admin {
        try_files $uri @proxy_api;
    }

    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://backend:8000;
    }

    location /django_static/ {
        autoindex on;
        alias /app/backend/server/django_static/;
    }
}
Here is the docker-compose file:
# docker-compose.yml
version: '3.8'
services:
  backend:
    platform: linux/amd64
    build:
      context: ./django
      dockerfile: Dockerfile.prod
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "eu-central-1"
        awslogs-group: "acquirepad_nginx_proxy"
        awslogs-stream: "web"
    image: "${BACKEND_IMAGE}"
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000 --log-level=debug
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    env_file:
      - ./.env
  frontend:
    build:
      context: ./frontend
    volumes:
      - react_build:/frontend/build
  nginx-proxy:
    container_name: nginx-proxy
    build: nginx
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "eu-central-1"
        awslogs-group: "acquirepad_nginx_proxy"
        awslogs-stream: "nginx-proxy"
    image: "${NGINX_IMAGE}"
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - react_build:/var/www/frontend
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - frontend
      - backend
  nginx-proxy-letsencrypt:
    platform: linux/amd64
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "eu-central-1"
        awslogs-group: "acquirepad_nginx_proxy"
        awslogs-stream: "nginx-proxy-letsencrypt"
    image: nginxproxy/acme-companion
    env_file:
      - ./.env.staging.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - acme:/etc/acme.sh
    depends_on:
      - nginx-proxy

volumes:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
  acme:
  react_build:
When I run docker-compose on the AWS EC2 instance, the Django backend is still served by default on the website and I cannot get to the frontend. I have the feeling that the file nginx/vhost.d/default has no influence on the web app at all. Help is much appreciated.
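Two things worth double-checking (my reading of the nginx-proxy docs, not something stated in the question): nginx-proxy includes vhost.d files inside the server block it generates, so nginx/vhost.d/default should contain only location-level directives, with no surrounding server { ... }; and the compose file mounts the React build at /var/www/frontend, not /usr/share/nginx/html. A sketch of the file under those assumptions:

```
# nginx/vhost.d/default — included INSIDE the generated server block,
# so no wrapping "server { }" (assumption based on nginx-proxy's docs)
location / {
    root /var/www/frontend;   # where the react_build volume is mounted
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
}

location /api {
    try_files $uri @proxy_api;
}

location @proxy_api {
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_pass http://backend:8000;
}
```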

Getting a blank page in react app while deploying using nginx as reverse proxy

I have deployed a FastAPI application with React as the frontend, using docker-compose and nginx as a reverse proxy.
When I try to visit the website I get a blank page, but the other services (backend) work properly, and the favicon and website name in the nav bar do load.
I looked into the console, and it seems React is unable to locate the other static files.
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2;
    server_name pbl.asia www.pbl.asia;
    server_tokens off;

    location = /favicon.ico { root /usr/share/nginx/html; }

    root /usr/share/nginx/html;
    index index.html index.htm;

    location = / {
        try_files $uri /index.html;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass "http://backend:8000";
    }

    ssl_certificate /etc/letsencrypt/live/pbl.asia/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pbl.asia/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
This is my nginx config file.
# Frontend
frontend:
  build:
    context: frontend
  container_name: frontend
  depends_on:
    - backend
  volumes:
    - react_build:/frontend/build

# Nginx service
nginx:
  image: nginx:1.21-alpine
  ports:
    - 80:80
    - 443:443
  volumes:
    - ./nginx:/etc/nginx/conf.d
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
    - react_build:/usr/share/nginx/html
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  depends_on:
    - backend
    - frontend
  restart: always
docker-compose.yaml
FROM node:16.8.0-slim
WORKDIR /frontend
COPY package.json ./
RUN npm install
COPY . ./
RUN npm run build
This is my Dockerfile
Specifying the index inside the location block solved the issue for me:
root /usr/share/nginx/html;

location = /home {
    index index.html index.htm;
    try_files $uri /index.html;
}

location ~ "^\/([0-9a-zA-Z+=-\?\/-_]{7,})$" {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass 'http://backend:8000';
}
I also specified a separate regex for the backend part; otherwise nginx would route all requests to the backend, resulting in an internal server error.
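An alternative that avoids the regex entirely (my suggestion, not part of the accepted fix) is to serve the backend under a dedicated path prefix, assuming the FastAPI routes can live under /api/:

```
# Sketch: prefix routing instead of a regex match (hypothetical layout)
location /api/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://backend:8000/;   # trailing slash strips the /api prefix before proxying
}

location / {
    try_files $uri $uri/ /index.html;  # everything else falls through to the React build
}
```

With a prefix match there is no ambiguity about which requests reach the SPA, so a hashed static file like /static/js/main.abc123.js can never be swallowed by the backend.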

Why my domain gives me the old website while my IP address gives me the new one I deployed?

I recently bought a domain at AWS Route 53 and routed it to my EC2 instance with its Elastic IP, record type A.
I made a kind of reverse proxy for manual blue-green deployment (I don't yet know how to do it automatically). My nginx version is nginx/1.18.0 (Ubuntu), and I'm using Django REST Framework as the backend and React as the frontend. Below are the pieces of code that seem related to this issue.
/etc/nginx/nginx.conf
...Same as default...
http {
    ...Same as default...
    sendfile off;
    ...Same as default...
    include /etc/nginx/conf.d/*.conf;
    # include /etc/nginx/sites-enabled/*;
}
...Same as default...
/etc/nginx/conf.d/default.conf
upstream main {
    server 192.168.0.1:8080; # This IP address is the gateway of the docker network
}

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://main;
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        proxy_no_cache 1;
        proxy_cache_bypass 1;
        proxy_cache off;
    }
}
PROJECT_FOLDER/docker-compose.yml
version: '3.3'
services:
  backend-blue:
    build:
      context: ./backend
      args:
        DJANGO_ALLOWED_HOSTS: stuff
        DJANGO_SECRET_KEY: stuff
        DJANGO_CORS_ORIGIN_WHITELIST: stuff
        BACKEND_ADMIN: stuff
        RDS_HOSTNAME: stuff
        RDS_PORT: stuff
        RDS_DB_NAME: stuff
        RDS_USERNAME: stuff
        RDS_PASSWORD: stuff
        S3_ACCESS_KEY_ID: stuff
        S3_SECRET_ACCESS_KEY: stuff
        S3_BUCKET_NAME: stuff
        DEBUG: stuff
        EMAIL_HOST_USER: stuff
        EMAIL_HOST_PASSWORD: stuff
    environment:
      CHOKIDAR_USEPOLLING: "true"
    command: gunicorn backend.wsgi --bind 0.0.0.0:8000
    ports:
      - "8000:8000"
  frontend-blue:
    build:
      context: ./frontend
    environment:
      CHOKIDAR_USEPOLLING: "true"
    volumes:
      - build_folder:/frontend/build
  nginx-blue:
    image: nginx:latest
    ports:
      - "8080:8080"
      - "8081:8081"
    volumes:
      - ./webserver/nginx-proxy.conf:/etc/nginx/conf.d/default.conf
      - build_folder:/var/www/frontend
    depends_on:
      - backend-blue
      - frontend-blue

volumes:
  build_folder:
PROJECT_FOLDER/webserver/nginx-proxy.conf
upstream api {
    server backend-blue:8000;
}

server {
    listen 8080;
    listen 8081;

    location /api/ {
        proxy_pass http://api$request_uri;
    }

    # ignore cache frontend
    location ~* (service-worker\.js)$ {
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }

    location / {
        root /var/www/frontend;
        try_files $uri $uri/ /index.html;
    }
}
Let's say the domain I bought is example.com. The problem is this: I deployed the blue deployment onto my EC2 instance with docker-compose up --build -d. Then I changed the gateway IP in /etc/nginx/conf.d/default.conf to the docker gateway created by the new deployment (from docker network inspect myproject_network) and ran sudo nginx -s reload.
But when I access http://example.com, it shows the old version of the website, while http://<my-EC2-elastic-IP> shows the newly deployed one.
Also, I was not able to use HTTPS even though mydomain.net has a certificate.
I'm having a pretty hard time with this issue 😂
Any help will be hugely appreciated 😁
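One guess worth testing (mine, not confirmed in the thread): service workers and their caches are scoped per origin, so a service worker registered earlier for http://example.com can keep serving the old build from its cache, while the bare-IP origin has no registered worker and always fetches the new files — which would explain exactly the domain-old/IP-new split. Extending the existing no-cache rule to index.html may help; a sketch:

```
# Sketch (hypothetical addition to nginx-proxy.conf): also keep index.html
# out of caches so a fresh deployment is picked up on the next page load
location ~* (service-worker\.js|index\.html)$ {
    root /var/www/frontend;
    add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    expires off;
}
```

If a stale worker is already installed, it may also need to be unregistered once in the browser (DevTools → Application → Service Workers) before the new build shows up on the domain.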

Cannot access Laravel over HTTPS in Docker

We have two Docker containers: one running our Angular app and one running our Laravel API. Each has its own docker-compose file.
On localhost there was no issue making API calls from Angular to Laravel over 127.0.0.1:3000.
Then I took these containers and started them on my Ubuntu server. Still no problem making calls over 195.xxx.xxx.xx:3000.
I then added an SSL certificate to the domain, and all of a sudden I cannot make calls to the API on port 3000.
Can anyone tell me where I am going wrong? I have tried different ports. If I remove the certbot stuff and call over HTTP, it all works fine again. Please help.
For my SSL setup I followed this article and got it all set up without any real issues.
Here is the Docker setup for Laravel.
Dockerfile:
FROM php:7.3-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
mariadb-client \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl \
libzip-dev
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 3000 and start php-fpm server
EXPOSE 3000
CMD php-fpm
docker-compose.yml
version: "3"
services:
#PHP Service
api:
build:
context: .
dockerfile: Dockerfile
image: laravel360
container_name: app
restart: unless-stopped
tty: true
environment:
SERVICE_NAME: app
SERVICE_TAGS: dev
working_dir: /var/www
volumes:
- ./:/var/www
- ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
networks:
- app-network
#Nginx Service
webserver:
image: nginx:alpine
container_name: webserver
restart: unless-stopped
tty: true
ports:
- "3000:80"
- "3001:443"
volumes:
- ./:/var/www
- ./nginx/conf.d/:/etc/nginx/conf.d/
networks:
- app-network
#MySQL Service
db:
image: mysql:5.7.22
container_name: db
restart: unless-stopped
tty: true
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: name
MYSQL_ROOT_PASSWORD: password
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- dbdata:/var/lib/mysql/
- ./mysql/my.cnf:/etc/mysql/my.cnf
networks:
- app-network
#Docker Networks
networks:
app-network:
driver: bridge
#Volumes
volumes:
dbdata:
driver: local
And finally the config file:
server {
    listen 80;
    client_max_body_size 100M;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}

server {
    listen 443 ssl;
    client_max_body_size 100M;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
Angular Docker
#############
### build ###
#############
# base image
FROM node:alpine as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install
RUN npm install -g @angular/cli@~9.1.0
# add app
COPY . /app
# run tests
# RUN ng test --watch=false
# RUN ng e2e --port 4202
# generate build
RUN ng build --output-path=dist
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD [ "nginx", "-g", "daemon off;" ]
Docker Compose
version: '3'
services:
angular:
container_name: angular
build:
context: .
dockerfile: Dockerfile-prod
ports:
- "80:80"
- "443:443"
volumes:
- ./data/nginx:/etc/nginx/conf.d
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
certbot:
image: certbot/certbot
volumes:
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
And then finally my nginx conf for the Angular side:
server {
    listen 80;
    server_name mydomaindotcom;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri /index.html;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    listen 443 ssl;
    server_name mydomaindotcom;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri /index.html;
        proxy_pass http://mydomaindotcom; # for demo purposes
        proxy_set_header Host http://mydomaindotcom;
    }

    ssl_certificate /etc/letsencrypt/live/mydomaindotcom/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomaindotcom/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
Did you run ./init-letsencrypt.sh, and did you change the service names in it accordingly (e.g. nginx -> angular)?
echo "### Starting nginx ..."
docker-compose up --force-recreate -d angular
echo
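Another likely cause (my reading, not stated in the thread): once the site is served over HTTPS, browsers block mixed-content requests from the page to http://195.xxx.xxx.xx:3000, so the API calls fail even though the API itself is still up, and they work again as soon as everything is plain HTTP. One way out, sketched under the assumption that both stacks run on the same host, is to proxy the API through the Angular nginx 443 server block and call it from Angular with a relative /api path:

```
# Hypothetical addition to the 443 server block of the Angular nginx config:
# forward /api/ to the Laravel webserver published on the host at port 3000.
location /api/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    # 172.17.0.1 is the host's default docker bridge gateway on Linux —
    # an assumption; adjust to however the Laravel stack is reachable from this container.
    proxy_pass http://172.17.0.1:3000/;
}
```

That way the browser only ever talks HTTPS to one origin, and the hop to Laravel happens server-side where mixed-content rules don't apply.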
