React installation in nginx vs K8s Ingress vs Istio gateway

I have a React application served by nginx, and an Express.js server for the backend connected to a MySQL database. When a client makes a request to x.com/, nginx's default.conf serves the files from the local /var/www/build folder; when the path is x.com/api, nginx proxies the call to the Express.js server.
upstream client {
    server client:3000;
}
upstream api {
    server api:3001;
}
server {
    listen 80;

    # location / {
    #     proxy_pass http://client;
    # }

    location / {
        root /var/www/build;
        try_files $uri /index.html;
    }

    # location /sockjs-node {
    #     proxy_pass http://client;
    #     proxy_http_version 1.1;
    #     proxy_set_header Upgrade $http_upgrade;
    #     proxy_set_header Connection "Upgrade";
    # }

    location /sockjs-node {
        root /var/www/build;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
My question is that, now that I have put everything into containers in a K8s cluster, I have used an Istio gateway. But my configuration just passes all traffic from the gateway to the nginx container.
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: rproxygw
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rproxy
spec:
  hosts:
  - "*"
  gateways:
  - rproxygw
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: rproxy
        port:
          number: 80
What would be better now that everything is on a K8s cluster with Istio? To route x.com/api directly from the gateway?
Is there any way to install the React static files into the Istio gateway and get rid of the nginx proxy?
How about getting rid of nginx as a reverse proxy, using just the Istio gateway, and serving the React app from another Express server, or reusing the Express server that runs the backend to also serve the React static files?
Which option would perform best in terms of latency?

Is there any way to install the React static files into the Istio gateway
No. It only forwards requests to Kubernetes Services.
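It can, however, route by path, so x.com/api can go straight to the backend Service while everything else keeps going to whatever serves the static files. A hedged sketch, assuming the backend is exposed as a Service named api on port 3001 to match the question's upstreams:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rproxy
spec:
  hosts:
  - "*"
  gateways:
  - rproxygw
  http:
  - match:
    - uri:
        prefix: /api/
    rewrite:
      uri: / # strips the /api prefix, like the nginx rewrite did
    route:
    - destination:
        host: api
        port:
          number: 3001
  - route: # default route: everything else to the nginx Service
    - destination:
        host: rproxy
        port:
          number: 80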
Any of the various approaches you describe will work fine. Nginx is fairly efficient; all else being equal, fewer hops are better. If it turns out your application is easier to manage keeping the nginx reverse proxy, there's nothing wrong with keeping it. If your front- and back-end code are in the same repository and it's straightforward to build them into the same container image, then similarly there's nothing wrong with having a single process serving both parts.
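For the single-process option, the usual Express pattern is to serve the build output next to the API routes. A minimal sketch, where the paths and sample route are assumptions rather than the asker's actual code:
// One Express process serving both the API and the React build.
const express = require('express');
const path = require('path');
const app = express();

// API routes go first so the static handler doesn't shadow them.
app.get('/api/health', (req, res) => res.json({ ok: true })); // sample route

// Serve the React static files from the build output.
app.use(express.static(path.join(__dirname, 'build')));

// SPA fallback: any other path gets index.html for client-side routing.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

app.listen(3001);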

Related

WebSocketClient.js:16 WebSocket connection to 'ws://localhost:3000/ws' failed: React, Docker, NGINX

Here's the issue: when I start a React app locally with npm start, I don't get a failed WebSocket connection. If I start the nginx and React servers within Docker containers, I constantly get:
WebSocketClient.js:16 WebSocket connection to 'ws://localhost:3000/ws' failed:
default.conf
upstream client {
    server client:3000;
}
upstream api {
    server api:5000;
}
server {
    listen 80;

    location / {
        proxy_pass http://client;
    }

    location /ws {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
Add this to .env:
WDS_SOCKET_PORT=0
See this issue for more explanation and information: https://github.com/facebook/create-react-app/issues/11897
I faced the same issue. One simple fix is to map the nginx instance to port 3000 on your local machine: wherever you do the port mapping for nginx, change it to 3000:80.
Now requests made to 'ws://localhost:3000/ws' by the React app will be appropriately routed.
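As a compose sketch (the service name nginx is an assumption; use whatever name your proxy service has):
services:
  nginx:
    image: nginx
    ports:
      - "3000:80" # publish nginx on host port 3000 so ws://localhost:3000/ws lines up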
You could be running HTTPS without Docker and HTTP with Docker, so you should use wss and ws accordingly.
This was my issue.
For me, at first, adding this line to .env (as #sarcouilleizi94 mentioned) solved the problem:
WDS_SOCKET_PORT=0
then (in the same project) unexpectedly it stopped working and I had to change it to:
WDS_SOCKET_PORT=3000
I hope this can help.

How to use another docker container as a subdirectory of main website?

I am new to Docker and container concepts. I want to host a React website xyz.com with a container on port 3000, and I want to add an Admin subdirectory, like xyz.com/Admin, where the main website (xyz.com) is one container and /Admin is another (two containers in total). Please help me figure this out (changes in the Dockerfile, code, or docker-compose).
What you need for this is a reverse proxy. The reverse proxy will stand in front of the two web applications, and map the appropriate paths to the appropriate containers.
I have a simple example here for you, using docker-compose and nginx. It starts three nginx containers: one acting as the reverse proxy, and the other two acting as your web applications (root and admin).
Project structure:
/simple-proxy-two-websites
├── docker-compose.yml
└── default.conf
In the docker-compose definition, we map the config into the reverse-proxy and map the listening port (80) to the host machine port 80. And then we just set up two default nginx containers to act as the web applications that we want to serve.
docker-compose.yml
version: "3"
services:
reverse-proxy:
image: nginx
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf:ro
depends_on:
- "web-admin"
- "web-root"
ports:
- 80:80
web-root:
image: nginx
web-admin:
image: nginx
In the nginx reverse-proxy server configuration (taken from the default config shipped with the Docker image, with all commented lines removed) we then add location /admin and change location / as seen below. Notice that the proxy_pass parameter is a URL that uses the service name defined in the docker-compose definition above. Docker makes this easy when the containers are on the same network (in this case the default bridge network) by allowing us to use the service names.
default.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web-root:80/;
    }

    location /admin {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web-admin:80/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
With this configuration the nginx reverse proxy will do an internal network forward of the request to the proxy_pass destination defined in the location block; the destination does not have to be reachable from the outside.
You can take this example and update it with your service names and specific ports; it should get you going.
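A quick sanity check once the stack is up (host port 80, as mapped in the compose file):
docker-compose up -d
curl -s http://localhost/        # proxied to web-root
curl -s http://localhost/admin/  # proxied to web-admin
Note that both containers serve the identical default nginx page until you override their index.html files, as mentioned below.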
I also have a complete example, where I override the default index.html pages of the web-admin and web-root containers, to verify that the correct destination has been reached. Let me know if you want that, then I will make it available in a repository on GitHub.

Google App Engine flex CORS configuration

I have a site (A) hosted in App Engine that needs to be accessed by proxy_pass by another site (B) hosted somewhere else.
Previously this site (A) was hosted in Kubernetes and the ingress configuration looked like this, and it worked perfectly:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my-webapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp
          servicePort: 80
  - host: www.remote-server.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp
          servicePort: 80
This way, by adding www.remote-server.com to the list of hosts, www.remote-server.com was allowed to render my-webapp.com through an nginx proxy_pass.
Now my question is: how do we configure the same thing in the App Engine flexible environment (Node.js runtime)? Currently the request just fails if we try this in App Engine without any special configuration.
That is because a remote server (in this case localhost, for testing) is not allowed to proxy to the App Engine service (that's my assumption, anyway).
For reference, this is the nginx configuration I'm using locally for testing this:
server {
    listen 8080;
    server_name localhost;
    root /path/to/folder/;

    location / {
        index index.html index.htm;
    }

    location /shopping {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://url-to-my-appengine-app.appspot.com;
        proxy_redirect default;
    }
}
I hope that makes sense.
I don't believe the underlying Docker image for the Node.js runtime on the App Engine flexible environment uses NGINX as the web server, so you're likely pursuing a path that won't work when you deploy, even if you get it to work locally.
I believe to enable CORS support in your situation, you are going to have to set the Access-Control-Allow-Origin header within your application's code rather than within a configuration file.
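A minimal sketch of that in Express (the allowed origin is a placeholder for site B's real origin, not something from the question):
// Hypothetical CORS middleware for the Node.js app on App Engine flex.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'https://www.remote-server.com'); // placeholder origin
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.set('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') return res.sendStatus(204); // answer preflight requests
  next();
});

app.listen(process.env.PORT || 8080);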

How can I access my containerized create-react-app if I bind a HOST to it?

I need to bind a HOST to my React app in development so that I can proxy to other services in the cloud. Let's say the host is local.myapp.example.com.
I've been running my e2e tests by starting up a couple of containers including one with an instance of my React app and then making Puppeteer do requests to it. Up to this point, the app has been accessible through localhost just by exposing the port:
# docker-compose.e2e.yml
ports:
  - 8080:3000 # app is running on 3000 inside the container.
Now that I've bound it to the HOST above, I cannot access the app inside the container. I have updated my laptop's /etc/hosts to have:
# /etc/hosts - laptop
0.0.0.0 local.myapp.example.com
With this, it works when I run the app on my laptop, but it doesn't when I run it inside the container.
What am I missing?
Update 1
If I go inside the container, I can run curl local.myapp.example.com:3000 and it works.
From the other container (the one with Puppeteer) I don't know what URL to use to hit it. Before adding the HOST I would just use the name of the Docker container, like http://frontend:3000, but now I don't know, as that URL doesn't work.
Update 2
Here's my docker-compose file. I didn't mention it before because I didn't want to ask an overcomplicated question, but since I'm posting the docker-compose, might as well: the container is behind a reverse proxy:
version: '3'
services:
  nginx:
    container_name: nginx
    image: nginx
    depends_on:
      - frontend
      - backend
    volumes:
      - ./frontend.conf:/etc/nginx/conf.d/frontend.conf
    ports:
      - "9520:8080"
  frontend:
    container_name: frontend
    build:
      context: ..
      dockerfile: Dockerfile.e2e
    depends_on:
      - backend
  backend:
    container_name: backend
    image: my.private.registry/user/backend:latest
# reverse proxy conf
server {
    listen 8080;

    location /api {
        proxy_pass http://backend:3000/api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location / {
        # proxy_pass http://frontend:3000;
        proxy_pass https://frontend:3000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
From Chrome I used to hit http://localhost:9520. Now that I need to bind a HOST (and I also need HTTPS=true on my create-react-app), I need to hit http://local.myapp.example.com:9520 from Chrome (not sure if https needs to go here?).
From within the reverse proxy container, I can do curl --insecure --header 'Host: local.myapp.example.com' https://frontend:3000 and it resolves.
From Chrome, I try to hit local.myapp.example.com:9520 over both http and https, but it doesn't work.
From Postman, it works if I do http://local.myapp.example.com:9520
Summary
I need to be able to hit https://local.myapp.example.com:9520 from Chrome (or Puppeteer) on my laptop; it should go to the reverse proxy container on port 8080. The reverse proxy will then proxy_pass it to the frontend container on port 3000.
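One possible direction, sketched under the assumption that docker-compose network aliases fit this setup: an alias makes local.myapp.example.com resolvable inside the Compose network, so the Puppeteer container can use the same URL as the laptop.
services:
  frontend:
    networks:
      default:
        aliases:
          - local.myapp.example.com # other containers on the network can resolve this name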

SailsJS as API and Nginx: Restrict external access

I'm running SailsJS on a DigitalOcean droplet (MEAN stack with nginx). All my requests are mapped to my Angular frontend, except those on /api, which are mapped to a proxy_pass on port 1337 (on which Sails runs). This procedure works fine.
Now I'd like to restrict access to my API to allow only requests from my frontend. I already tried deny/allow from within my nginx config, but this blocks the user request itself. I tried several answers like this as well, but they didn't work out.
What would be the recommended way to limit access to my Sails API to localhost? I'd like to run multiple apps on my droplet and use Sails as an API that should only be accessible by the apps on my droplet.
My nginx config:
upstream sails_server {
    server 127.0.0.1:1337;
    keepalive 64;
}
server {
    server_name domain.com;
    index index.html;

    location / {
        root /opt/domain/build;
        try_files $uri $uri/ /index.html;
    }

    location /api {
        proxy_pass http://sails_server; # forward /api to the Sails upstream defined above
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        client_max_body_size 500M;
    }
}
– Thanks in advance!
I think you can't do this, because Angular runs on your clients, so you would need the IPs of all your users. You can use something simple that works with trusted proxies:
var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress
or use something more complex and trusted, like this link.
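If you want to act on that IP inside Sails, here is a hedged sketch of a policy (the file name and allowlist are placeholders, not from the question):
// Hypothetical Sails policy, e.g. api/policies/checkClientIp.js.
module.exports = function (req, res, next) {
  var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
  // x-forwarded-for can hold a comma-separated chain; take the first hop.
  var client = String(ip).split(',')[0].trim();
  var allowed = ['127.0.0.1', '::1']; // placeholder allowlist
  if (allowed.indexOf(client) !== -1) {
    return next();
  }
  return res.forbidden(); // Sails' built-in 403 response
};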
