I'm trying to set up a microservices architecture for a personal blog.
The idea is to have an NGINX container serving a static Gatsby site and proxying to other services. For example, I'd like to have a React app at /todos, and an API for that todo app at /todos_api.
My current folder structure is like this:
docker-compose.yml
gatsby_blog/            (contains a build folder)
nginx/
    default.conf        (this is my main nginx entry)
portfolio/
    todos/
        todo_client/
            nginx/
                default.conf    (this is just for serving the React app)
        todo_api/
My docker-compose file looks like this:
version: "3"
services:
gatsby:
restart: always
build:
dockerfile: Dockerfile
context: ./gatsby_blog
ports:
- "80:80"
todoclient:
build:
dockerfile: Dockerfile
context: ./portfolio/todos/todo_client
My main Gatsby nginx file looks like this:
upstream todoclient {
    server todoclient:3000;
}

server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /todos {
        rewrite /todos/(.*) /$1 break;
        proxy_pass http://todoclient;
    }
}
and my React nginx config is like this:
server {
    listen 3000;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
The issue, I'm pretty certain, is with my nginx configs. When I go to localhost I'm met with the Gatsby app, but if I go to /todos I get an nginx error. I can see that the request is passed on to the todoclient container correctly, but the error returned is:
open() "/usr/share/nginx/html/todos" failed (2: No such file or directory)
If anyone can see where I'm going wrong with the nginx configs I'd really appreciate it. I can post my Dockerfiles too if needed.
Thanks
EDIT
I've managed to get the proxy working now, but the issue is that the todos app can't find its static files. They're in the correct place in the container, and the container works in isolation, so the issue has to do with docker-compose and the nginx proxying.
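(One common way to handle missing static assets behind a path prefix, assuming the todo client is a Create React App build and the proxy strips /todos before forwarding: have the build emit asset URLs under /todos, e.g. via the homepage field in package.json or PUBLIC_URL at build time. A sketch:)

# sketch: make the build prefix its asset URLs so the browser requests /todos/static/...,
# which the gatsby proxy then rewrites and forwards to the todoclient container
PUBLIC_URL=/todos npm run build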
I've done this for Angular. The nginx config below should replace the file at /etc/nginx/nginx.conf, and the todos folder (assuming that has the static pages) should go in /usr/share/nginx/html. You can try one service at a time.
events {}    # required when this replaces the whole /etc/nginx/nginx.conf

http {
    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;
        include /etc/nginx/mime.types;

        location /todos {
            alias /usr/share/nginx/html/todos/;
            absolute_redirect off;
            rewrite ^(.+)/todos/+$ $1 permanent;
            rewrite ^(.+)/todos/index.html$ $1 permanent;
            try_files $uri$args $uri$args/ $uri/ /todos/index.html;
        }
    }
}
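(To sanity-check a setup like this, it helps to validate the config and confirm the files are where nginx expects them inside the running container; a sketch, using the service name from the compose file above:)

docker-compose exec gatsby nginx -t                          # config syntax check
docker-compose exec gatsby ls /usr/share/nginx/html/todos    # are the static files present?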
Related
I have a Docker container running a production React app alongside Nginx for hosting its static files.
Dockerfile
FROM node:16.5.0-alpine AS builder
WORKDIR /app
COPY . .
ENV PUBLIC_URL /trade-journal
RUN npm ci --production
RUN npm run build
FROM nginx:1.23.1-alpine AS production
ENV NODE_ENV production
COPY --from=builder /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
nginx.conf
server {
    listen 80;

    location / {
        # static file hosting location
        root /usr/share/nginx/html/;
        include /etc/nginx/mime.types;
        try_files $uri $uri/ /index.html;
    }
}
Above, you can see that I'm setting a PUBLIC_URL environment variable. This is because I'd like the app to be hosted on a subpath, like this: domain.com/trade-journal. I'm using Traefik to route to this subpath because in the future, I'd like to add more subpaths for other apps.
docker-compose.yml
reverse-proxy:
  image: traefik:v2.8
  command: --api.insecure=true --providers.docker
  ports:
    - "80:80"
    - "8080:8080"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  labels:
    - traefik.enable=false
frontend:
  build:
    context: ./trade-journal/client
    dockerfile: ./Dockerfile
  image: "trade-journal-client"
  labels:
    - traefik.http.routers.frontend.rule=Host(`[MY-PUBLIC-IP-HERE]`) && PathPrefix(`/trade-journal`)
    - traefik.http.services.frontend.loadbalancer.server.port=80
  links:
    - "backend:be"
I understand that React is designed to run on root by default, so I added:
<base href="%PUBLIC_URL%/"> in public/index.html, and
basename={process.env.PUBLIC_URL} to BrowserRouter
I've also alternatively tried setting the subpath with homepage in package.json, and writing the basename as just /trade-journal.
Either way, I get the same result in my browser: a blank page.
For whatever reason, it can't load http://domain/trade-journal/static/js/main.93967f03.js. In fact, I can't even view the code in that file, or any other JS or CSS file in that directory, through the browser (even though I can when I bin/sh into the running container); I only ever get a blank index file.
I assume that the problem has to do with routing, because when I remove anything having to do with the subpath and host on /, the React app loads correctly.
I'm very new to hosting and deploying. How can I solve this problem?
It seems that I fixed it. I needed to use alias, and include the subpath in nginx as well. Now, I'm going to see if I can make this solution any more DRY (so I don't have to repeat the subpath so many times). If anyone has any suggestions, let me know.
location /trade-journal {
    # static file hosting location
    alias /usr/share/nginx/html/;
    include /etc/nginx/mime.types;
    try_files $uri $uri/ /trade-journal/index.html;
}
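(One way to avoid repeating the subpath, sketched as an idea rather than a tested setup: keep the location block as a template and render it with envsubst when the container starts, so the prefix lives in a single APP_BASE environment variable.)

# default.conf.template - ${APP_BASE} is substituted at container start
location ${APP_BASE} {
    # static file hosting location
    alias /usr/share/nginx/html/;
    include /etc/nginx/mime.types;
    try_files $uri $uri/ ${APP_BASE}/index.html;
}

Rendering it with envsubst '$APP_BASE' < default.conf.template > /etc/nginx/conf.d/default.conf before starting nginx leaves nginx's own variables such as $uri untouched, because only $APP_BASE is in the substitution list.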
I am using create-react-app to build a bunch of React apps for various modules in my application, for example: Users, Leads, Campaigns, etc.
I have containerized all the apps using Docker and NGINX.
I would like to have one nginx gateway which will redirect to my apps based on the path.
For example:
https://www.example.com/users --> users app
https://www.example.com/products --> products app
I tried setting this up based on various articles I came across on the internet, but with no success. I basically have two problems:
1. Referencing static files - Create React App usually injects a script tag such as <script src="/static/....."></script> into the index.html file, which prevents the HTML from loading the scripts (as it looks for them in the gateway's root directory). I was able to fix that by setting the build script to use a PUBLIC_URL variable set to /users or /products as required.
2. After I set PUBLIC_URL and built the container, NGINX now gives me a 301 Moved Permanently response for some reason and doesn't proxy properly.
I am not sure what I am missing. I'm sure this is a very common use case.
For each of my React apps, I have created a Dockerfile as follows:
FROM node:16-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY . ./
RUN npm install
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And an nginx.conf as follows:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        add_header Access-Control-Allow-Origin *;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    client_max_body_size 100M;
}
Here is the nginx.conf for my gateway:
server {
    listen 80;

    location /products/ {
        proxy_pass http://products/;
    }

    location /users/ {
        proxy_pass http://users/;
    }

    location /sales/ {
        proxy_pass http://sales/;
    }
}
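(For what it's worth, a request for /products with no trailing slash does not match location /products/ at all, which is one common source of unexpected 301s or misses in this kind of gateway. A hedged sketch of one way to normalize that, shown for one app only, not a verified fix for this exact setup:)

# sketch: send the bare prefix to its slash form, then proxy with the prefix stripped
location = /products {
    return 301 /products/;
}

location /products/ {
    proxy_pass http://products/;
}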
And finally, my docker-compose.yml
version: "3.9"
networks:
ylib:
services:
nginx:
image: nginx
container_name: nginx
ports:
- 8888:80
networks:
- ylib
products:
image: products
container_name: products
ports:
- 8881:80
networks:
- ylib
sales:
image: sales
container_name: sales
ports:
- 8882:80
networks:
- ylib
orders:
image: orders
container_name: orders
ports:
- 8883:80
networks:
- ylib
Please help me out.
Thanks
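(When debugging a gateway like this, it can help to compare what an app container returns on its published port with what the gateway returns for the same app; a sketch using the ports from the compose file above:)

curl -I http://localhost:8881/            # products app directly
curl -I http://localhost:8888/products/   # products app via the nginx gateway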
After coding my first website in React, I want to host it on a Raspberry Pi using nginx.
I have checked that the website works fine with npm start.
I then built it with npm run build, which created the following static files in ~/Documents/myWebsite1/build:
asset-manifest.json css favicon.ico files images index.html js manifest.json resumeData.json robots.txt static
After this I installed nginx, deleted default in both /etc/nginx/sites-available and /etc/nginx/sites-enabled then added the following file in /etc/nginx/sites-available:
server {
    listen 80;
    server_name localhost;

    root ~/Documents/myWebsite1/build;
    index index.html index.htm;

    location / {
        try_files $uri /index.html =404;
    }
}
nginx -t confirms syntax is ok.
nginx -T confirms only this server block is running.
When I go to my IP address, the page just reads 404 Not Found.
I have checked the logs using sudo tail -n 20 /var/log/nginx/error.log, which returns:
2022/01/10 18:01:25 [notice] 15929#15929: signal process started
Any ideas as to what I might be doing wrong?
Cheers,
Will
Nginx was trying to access the directory ~/Documents/myWebsite1/build but ~ is only a shell shortcut. Changing to an absolute path fixed it right away :)
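(Concretely, that means spelling out the full path in the server block; a sketch, assuming the default pi user so the home directory is /home/pi:)

server {
    listen 80;
    server_name localhost;

    # absolute path instead of ~, which nginx does not expand
    root /home/pi/Documents/myWebsite1/build;
    index index.html index.htm;

    location / {
        try_files $uri /index.html =404;
    }
}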
I've got a React app.
It works well on my local machine (app + prerender-spa-plugin); I run it with http-server from the ./build folder.
However, things go wrong on the server - it acts as if I had launched it with serve -s.
There is a docker container with the nginx image on the server.
I tried to reconfigure nginx so that it uses a different index.html for different URLs, but failed again.
Is the problem with routing to the directories that keep the static images?
How can it be resolved, or where can I find information about it?
You have to create a virtual host on the nginx server and point it to the build folder of the app. Don't forget to run npm run build.
A simple nginx config:
server {
    listen 80;
    listen [::]:80;

    root /var/www/reactjsapp/build;
    index index.html index.htm;
    server_name reactjsapp.com;

    location / {
        try_files $uri /index.html;
    }

    location ~ /\.ht {
        deny all;
    }
}
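(If this lives under sites-available on a host install of nginx, it still needs to be enabled and reloaded; a sketch, assuming the file is saved as reactjsapp:)

sudo ln -s /etc/nginx/sites-available/reactjsapp /etc/nginx/sites-enabled/reactjsapp
sudo nginx -t && sudo systemctl reload nginx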
I solved it.
My config:
location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
}
From the official documentation: "It is possible to check directory's existence by specifying a slash at the end of a name, e.g. "$uri/"."
(Screenshot: https://i.stack.imgur.com/PusBE.png)
I have a REST back-end service located on some server and a front-end application made in Angular.
I'm using the Angular CLI to build the application. The location of my back-end server is kept in the environment file.
The requirement for my app is that I provide two docker images: one with my back-end server (a Java Spring Boot app), and a second one with the static HTML built by the ng build myApp command. I then copy the contents of the dist directory to the proper directory in the docker image, as shown here: Nginx docker image.
The problem is that the back-end and front-end may run on different servers. Is there any way I can configure my front-end app so that I can change the back-end server location when the container starts?
I know this is an old question, but I faced the exact same problem and it took me a while to solve. May this be of help to those coming from search engines.
I found 2 solutions (I ended up choosing the second one). Both allow you to use environment variables in docker to configure your API URL.
Solution 1 ("client"-side): env.js asset + sed
The idea is to have your Angular client load an env.js file from your HTTP server. This env.js will contain the API URL, and will be modifiable by your container when it starts. This is what you discussed in the question comments.
Add an env.js in your angular app assets folder (src/assets for me with angular-cli):
var MY_APP_ENV = {
    apiUrl: 'http://localhost:9400',
}
In your index.html, you will load your env:
<head>
    <meta charset="utf-8">
    <base href="/">
    <script src="env.js"></script>
</head>
In your environment.ts, you can use the variable:
declare var MY_APP_ENV: any;

export const environment = {
    production: false,
    apiUrl: MY_APP_ENV.apiUrl
};
In your NGINX Dockerfile do:
FROM nginx:1.11-alpine
COPY tmp/dist /usr/share/nginx/html
COPY run.sh /run.sh
CMD ["sh", "/run.sh"]
The run.sh script is where the sed magic happens:
#!/bin/sh
# API
/bin/sed -i "s|http://localhost:9400|${MY_API_URL}|" /usr/share/nginx/html/env.js
nginx -g 'daemon off;'
In your angular services, use environment.apiUrl to connect to the API (you need to import environment, see Angular 2 docs).
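(For illustration only, a service using it might look like the sketch below; the service name and the /todos endpoint are made up for the example.)

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { environment } from '../environments/environment';

@Injectable({ providedIn: 'root' })
export class ApiService {
  constructor(private http: HttpClient) {}

  // every request is built from the runtime-configurable API URL
  getTodos() {
    return this.http.get(`${environment.apiUrl}/todos`);
  }
}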
Solution 2 (purely server side): nginx proxy config + envsubst
I wasn't happy with the previous solution because the API URL needed to be valid from the host's point of view; it couldn't use another container's hostname in my docker-compose setup.
So I thought: many people use NGINX as a proxy server, so why not proxy /api to my other container this way?
Dockerfile:
FROM nginx:1.11-alpine
COPY tmp/dist /usr/share/nginx/html
COPY frontend.conf.template /etc/nginx/conf.d/frontend.conf.template
COPY run.sh /run.sh
CMD ["/bin/sh", "/run.sh"]
frontend.conf.template:
server {
    listen 80;
    server_name myserver;

    # API Server
    location /api/ {
        proxy_pass ${MY_API_URL}/;
    }

    # Main
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri$args $uri$args/ /index.html;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
run.sh:
#!/bin/sh
# Substitute env vars
envsubst '$MY_API_URL' \
< /etc/nginx/conf.d/frontend.conf.template \
> /etc/nginx/conf.d/default.conf
# Start server
nginx -g 'daemon off;'
envsubst allows you to substitute environment variables in a string with a shell-like syntax.
Then use /api/xyz to connect to the API from the Angular app.
I think the second solution is much cleaner. The API URL can be the API docker container name in a docker-compose setup, which is nice. The client is not involved; it is transparent. However, it depends on NGINX.
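(In a docker-compose setup that might look roughly like the sketch below; the service and image names are assumptions, not taken from the question.)

# sketch: point MY_API_URL at the backend container name on the compose network
services:
  frontend:
    image: my-frontend          # the nginx image built above (name assumed)
    ports:
      - "80:80"
    environment:
      - MY_API_URL=http://backend:8080
  backend:
    image: my-backend           # the Spring Boot API (name assumed)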