React Application within NGINX Docker cannot call API

I am currently running into a problem with my react application being served by an NGINX docker container. Here are the details:
My NGINX proxy to my API is working correctly, as I can call it using Postman from an external machine. The problem is that I cannot call it from within my frontend. Whenever my frontend makes any request (POST, GET, OPTIONS, etc.) to my API, NGINX makes it call 127.0.0.1:8000, which makes the request fail because I am connecting from an external machine that isn't running anything on 127.0.0.1. Even when I set my React application to call the external IP that maps to the proxy, it ends up requesting 127.0.0.1 for some reason.
I don't know if this is an NGINX or a react problem, but I would appreciate any help. I have been trying to solve this issue for quite some time, and even made a previous post that helped me identify the problem correctly, but not the root cause of it.
Here is what my config files look like:
NGINX: (nginx-proxy.conf)
upstream api {
    server backend:8000;
}
server {
    listen 8080;
    server_name 192.168.100.6;
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS, PUT";
    add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range";
    add_header Access-Control-Expose-Headers "Content-Length,Content-Range";
    location /api/ {
        resolver 127.0.0.1;
        proxy_set_header Host $host;
        proxy_pass http://api;
    }
    # ignore cache frontend
    location ~* (service-worker\.js)$ {
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }
    location / {
        root /var/www/react-frontend;
        try_files $uri $uri/ /index.html;
    }
}
Screenshot of the Firefox network tab
The image linked above shows all of my website resources being loaded from 192.168.100.6, but the moment I call my API, the request address changes to 127.0.0.1:8000, despite the React application calling 192.168.100.6/api/token (which does work in Postman).

So after a lot of troubleshooting, I have found the cause of my issue.
What actually happened is that all of my codebase was correct, and the proxy was indeed working as intended, but for some reason
docker-compose build
or even:
docker-compose build --no-cache
was not updating my code changes (it was still sending requests to the IP I was using in development).
The solution I arrived at was to remove the stale volume:
docker volume rm my-nginx-volume
and then rebuilding through docker-compose.
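For anyone hitting the same wall, the full sequence looks something like this (assuming the stale frontend files live in a named volume; my-nginx-volume stands in for whatever your docker-compose.yml calls it):
docker-compose down
docker volume rm my-nginx-volume
docker-compose build --no-cache
docker-compose up -d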

Related

Hide upstream 301 response (react app) on nginx

Here's my setup:
an nginx server acting as a reverse proxy to route all requests at mysite.com - which I'm in control of
a react app for some subsections of the site on s3-bucket.awsthing.com - which I'm not in control of
If you visit s3-bucket.awsthing.com/user/charlie you get a 301 redirect to s3-bucket.awsthing.com/#!/user/charlie (because that's where index.html plus the routing info lives), which in turn returns a 200 ... ok, fine.
When a user visits mysite.com/user I have a proxy setup as so
location /user/ {
    proxy_pass https://s3-bucket.awsthing.com/user/;
}
which means the proxy makes a request to s3-bucket.awsthing.com/user, gets back the 301, and the client is then redirected to s3-bucket.awsthing.com/ ... not so good.
It works, but now the user is exposed to the upstream server instead of staying behind the proxy.
Questions: 1) How can I make it not show the upstream server? 2) Is there a way to not return the 301 to the client and only return the redirected 200 content?
I've tried just about everything I can think of other than maybe doing some regex to send the proxy request directly to the /#! route
I found a solution to this:
location / {
    proxy_pass https://mybucket.amazonaws.com;
    proxy_intercept_errors on;
    error_page 301 =200 @hide-301;
}
location @hide-301 {
    proxy_pass https://mybucket.amazonaws.com;
}
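For what it's worth, the mechanics here: proxy_intercept_errors on makes nginx process any upstream response with a status of 300 or above through its own error_page rules, so the 301 never reaches the client, and error_page 301 =200 @hide-301 swaps the status for a 200 and re-runs the request internally against the named @hide-301 location.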

Running Docusaurus with HTTPS=true yields ERR_SSL_PROTOCOL_ERROR

We are making a V2 Docusaurus website.
After building the website on the server, we can use it over https just fine. Here is a part of my_server_block.conf:
server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://localhost:3002;
        proxy_redirect off;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}
On localhost, http works. However, we now need to test https on localhost as well. But https returns an error even though I started the site with HTTPS=true yarn start: This site can't provide a secure connection. localhost sent an invalid response. ERR_SSL_PROTOCOL_ERROR
Does anyone know what I should do to make https work in localhost?
Edit 1: I tried HTTPS=true SSL_CRT_FILE=certs/server.crt SSL_KEY_FILE=certs/server.key yarn start, and https://localhost:3001 still returned the same error. Note that certs/server.crt and certs/server.key are the files that make https work on our production server via nginx:
server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;
You are using Nginx, so use it for SSL offloading (your current config) and don't start https on the Docusaurus site. The user's browser will use https, but Docusaurus itself will speak plain http.
If you start https on the Docusaurus site while proxypassing with http (proxy_pass http://localhost:3002;), the problem is obvious: a connection with the http protocol to an https endpoint. You could proxypass with the https protocol (proxy_pass https://localhost:3002;) of course, but that may need more advanced configuration. Just keep it simple and use SSL offloading in Nginx.
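Concretely, TLS stays in nginx and Docusaurus runs plain. Assuming the dev server is meant to listen on 3002 to match the proxy_pass above, that is just:
yarn start --port 3002
with no HTTPS=true, and you browse to https://localhost:3001 through nginx.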
There is an issue with https support on localhost in react-dev-utils#^v9.0.3, which is a dependency of docusaurus.
https://github.com/facebook/create-react-app/issues/8075
https://github.com/facebook/create-react-app/pull/8079
It is fixed in react-dev-utils#10.1.0
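If you can't yet move to a Docusaurus release that pulls in the fixed version, one workaround (a sketch, assuming you install with Yarn, which supports selective dependency resolutions) is to force it from your own package.json:
"resolutions": {
    "react-dev-utils": "^10.1.0"
}
and then re-run yarn install so the lockfile picks up the pinned version.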
Docusaurus 2 uses Create React App's utils internally and you might need to specify the path to your cert and key as per the instructions here. I'm not familiar with the server config so I can't help you there.
Maybe this answer will be helpful - How can I provide a SSL certificate with create-react-app?

Can a ReactJS app with a router be hosted on S3 and fronted by an nginx proxy?

I may be twisting things about horribly, but... I was given a ReactJS application that has to be served out to multiple sub-domains, so
a.foo.bar
b.foo.bar
c.foo.bar
...
Each of these should point to a different instance of the application, but I don't want to run npm start for each one - that would be a crazy amount of server resources.
So I went to host these on S3. I have a bucket foo.bar and then directories under that for a b c... and set that bucket up to serve static web sites. So far so good - if I go to https://s3.amazonaws.com/foo.bar/a/ I will get the index page. However most things tend to break from there as there are non-relative links to things like /css/ or /somepath - those break because they aren't smart enough to realize they're being served from /foo.bar/a/. Plus we want a domain slapped on this anyway.
So now I need to map a.foo.bar -> https://s3.amazonaws.com/foo.bar/a/. We aren't hosting our domain with AWS, so I'm not sure if it's possible to front this with CloudFront or similar. Open to a solution along those lines, but I couldn't find it.
Instead, I stood up a simple nginx proxy. I also added in forcing https and some other things while I had the proxy, something of the form:
server {
    listen 443;
    server_name foo.bar;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    # Redirect (*).foo.bar to (s3bucket)/(*)
    location / {
        index index.html index.htm;
        set $legit "0";
        set $index "";
        # First off, we lose the index document functionality of S3 when we
        # proxy requests. So we need to add that back on to our rewrites if
        # needed. This is a little dangerous, probably should find a better
        # way if one exists.
        if ($uri ~* "\.foo\.bar$") {
            set $index "/index.html";
        }
        if ($uri ~* "\/$") {
            set $index "index.html";
        }
        # If we're making a request to foo.bar (not a sub-host),
        # make the request directly to "production"
        if ($host ~* "^foo\.bar") {
            set $legit "1";
            rewrite /(.*) /foo.bar/production/$1$index break;
        }
        # Otherwise, take the sub-host from the request and use that for the
        # redirect path
        if ($host ~* "^(.*?)\.foo\.bar") {
            set $legit "1";
            set $subhost $1;
            rewrite /(.*) /foo.bar/$subhost/$1$index break;
        }
        # Anything else, give them foo.bar
        if ($legit = "0") {
            return 302 https://foo.bar;
        }
        # Perform the actual proxy forward
        proxy_pass https://s3.amazonaws.com/;
        proxy_set_header Host s3.amazonaws.com;
        proxy_set_header Referer https://s3.amazonaws.com;
        proxy_set_header User-Agent $http_user_agent;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Accept-Language $http_accept_language;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        sub_filter google.com example.com;
        sub_filter_once off;
    }
}
This works - I go to a.foo.bar, and I get the index page I expect, and clicking around works. However, part of the application also does an OAuth style login, and expects the browser to be redirected back to the page at /reentry?token=foo... The problem is that path only exists as a route in the React app, and that app isn't loaded by a static web server like S3, so you just get a 404 (or 403 because I don't have an error page defined or forwarded yet).
So.... All that for the question...
Can I serve a ReactJS application from a dumb/static server like S3, and have it understand callbacks to its routes? Keep in mind that the index/error directives in S3 seem to be discarded when fronted with a proxy the way I have above.
OK, there was a lot in my original question, but the core of it really came down to: as a non-UI person, how do I make an OAuth workflow work with a React app? The callback URL in this case is a route, which doesn't exist if you unload the index.html page. If you're going directly against S3, this is solved by directing all errors to index.html, which reloads the routes and the callback works.
When fronted by nginx however, we lose this error->index.html routing. Fortunately, it's a pretty simple thing to add back:
location / {
    proxy_intercept_errors on;
    error_page 400 403 404 500 =200 /index.html;
    # ... the proxy_pass settings from the original config stay as they were
}
Probably don't need all of those status codes - for S3, the big thing is the 403. When you request a page that doesn't exist, S3 treats it as though you're trying to browse the bucket and gives you back a 403 Forbidden rather than a 404 Not Found. So in this case a response from S3 that results in a 403 gets redirected to /index.html, which reloads the app and its routes, and the callback to /callback?token=... will work.
You can use Route53 to buy domain names and then point them toward your S3 bucket, and you can do this with as many domains as you like.
You don't, strictly speaking, need to touch CloudFront, but it's recommended as a CDN solution that improves the user experience.
When deploying applications to S3, all you need to keep in mind is that the code you deploy is going to run 100% in your user's browser. So no server stuff.

How To Avoid Mixed Content with Docker Apps

I am running a Django based web application inside a set of Docker containers, and I'm trying to include both a REST API (using django-REST-framework) and the ReactJS app that consumes it. All my other apps are served over HTTPS, but I am running into Mixed Active Content errors when the React app hits the REST API inside the Docker network. The React app is being hosted within my NGINX container and served up as a static site.
Here's the relevant config for my Nginx container:
# SSL Website
upstream django {
    server web:9000;
}
server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name *.domain.com;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/ssl/my_cert.crt;
    ssl_certificate_key /etc/nginx/ssl/my_key.key;
    ssl_stapling on;
    ssl_stapling_verify on;
    access_log /home/logs/access.log;
    error_log /home/logs/error.log;
    location / {
        include uwsgi_params;
        # Proxy settings
        proxy_pass http://django;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    # REACT APPLICATION
    location /faqs {
        autoindex on;
        sendfile on;
        alias /usr/share/nginx/html/faqs;
    }
}
During development the React app was hitting my REST API from outside the network, so resource calls used https like so:
axios.get('https://myapp.domain.com/api/')
and everything went relatively smoothly, barring the occasional CORS error.
However, now that both the React app and the API are running inside the Docker network, NGINX is not involved in the communication between containers, and the routes look like:
axios.get('http://web:9000/api')
This gives me the aggravating Mixed Active Content Error.
I've seen multiple questions similar to this, but most are either not using Docker containers or use NGINX directives I've already got in my config file. Given the popularity of Docker for this kind of loosely coupled application, I would imagine solutions abound for this kind of problem. Sadly I have not managed to come across any, and as such, any suggestions would be greatly appreciated.
Since your application serves both an API and a web client from the same endpoint, you have a "gateway" in nginx that routes all requests to either one. So far, common practice (although you are missing a load balancer - but that's a different discussion).
All requests to your API should be to https. You should also be serving your static site over https with the same certificate from the same domain. If this isn't the case - there is your problem.
Furthermore, all routes and URLs inside your React application should be relative. That means the React app doesn't need to know what your domain is. Ideally neither should your API, although that is sometimes harder to achieve.
Your axios call, given that the React app is served from the same domain over https, should be:
axios.get('/api')
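A shared axios instance keeps the origin out of the client code entirely. A minimal sketch (the /api prefix matches the calls above; the file name is illustrative):
// api.js - every request resolves against the origin that served the page
import axios from 'axios';
const api = axios.create({
    baseURL: '/api', // relative, so the browser prepends the current https origin
});
export default api;
Elsewhere, api.get('/faqs/') then goes to https://myapp.domain.com/api/faqs/ through nginx, never to http://web:9000.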

Nginx conf for prerender + reverse proxy to django + serving angular in html5 mode

The mouthful of a title says it all:
We've got an Angular frontend with a Django backend providing a REST API that is exposed independently at endpoints example.com/api/v1/*
The Angular app runs in HTML5 mode, and we want hard-links to example.com/foo/bar to bring users into the app at the foo.bar state as if it were a static page rather than an app state (where foo is anything but api).
We're running behind nginx, and our basic strategy in the conf was to define locations at ^~ /scripts, /images etc. for serving static content directly, as well as a ^~ /api/* location that gets routed to Django. Below that, we have a location ~ ^/.+$ that matches any path not matched by any of the above and "sends it to Angular" - i.e. serves our index page and appends the path to the base url, allowing our Angular router to handle it from there.
This is our conf in full:
upstream django {
    server 127.0.0.1:8000 fail_timeout=0;
}
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
server {
    listen 443;
    server_name example.com;
    client_max_body_size 10M;
    ssl on;
    ssl_certificate /etc/ssl/thawte/example_com.crt;
    ssl_certificate_key /etc/ssl/thawte/example_com.key;
    ssl_verify_depth 2;
    gzip on;
    gzip_types text/plain text/html application/javascript application/json;
    gzip_proxied any;
    index index.html;
    location ^~ /index.html {
        gzip_static on;
        root /www/dist;
    }
    location ^~ /images/ {
        expires max;
        root /www/dist;
    }
    location ^~ /scripts/ {
        expires max;
        gzip_static on;
        root /www/dist;
    }
    location ^~ /favicon.ico {
        expires max;
        root /www/dist;
    }
    location ^~ /api {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://django;
    }
    # Send anything else to Angular
    location ~ ^/.+$ {
        rewrite .* /index.html last;
    }
}
This has worked perfectly for us, but we now need to set it up to work with prerender.io. We've tried doing this several ways, making modifications to the official prerender nginx example, but none have worked - crawlers get the same code users do rather than cached pages.
How can we get this working?
(note: this is new territory for everyone involved here, so if the best way to handle this involves making different choices a few steps back, please suggest them)
So it turns out the config posted above was working the whole time.
I realized this when it finally occurred to me to try putting https://example.com/anything through the crawler debugger (instead of https://example.com, which is all I had been testing previously), and it worked - the crawler was served the cached page as expected.
This was simply because the greedy quantifier in:
location ~ ^/.+$ {
did not match the root path / (after the leading slash, .+ has nothing left to match). With an additional
location = / {
    try_files $uri @prerender;
}
my conf is working as expected.
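For reference, since the conf above never shows it: the @prerender named location that try_files falls back to follows prerender.io's published nginx example. A trimmed sketch (the token is a placeholder and the bot list is abbreviated):
location @prerender {
    proxy_set_header X-Prerender-Token YOUR_TOKEN;
    set $prerender 0;
    if ($http_user_agent ~* "googlebot|bingbot|yandex|baiduspider|twitterbot|facebookexternalhit") {
        set $prerender 1;
    }
    if ($args ~ "_escaped_fragment_") {
        set $prerender 1;
    }
    if ($prerender = 1) {
        rewrite .* /$scheme://$host$request_uri? break;
        proxy_pass http://service.prerender.io;
    }
    if ($prerender = 0) {
        rewrite .* /index.html last;
    }
}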
The handprint on my forehead is from realizing I'd only been putting https://example.com through the crawler debugger - which, of course, was not working. On the upside, I'm thinking I can turn it into a nice Halloween costume next weekend....
Still not sure I've gone about this the best way, and welcome alternative suggestions.
