Hide upstream 301 response (react app) on nginx - reactjs

Here's my setup:
an nginx server acting as a reverse proxy to route all requests to mysite.com - this I'm in control of
a react app for some subsections of the site on s3-bucket.awsthing.com - this I'm not in control of
If you visit s3-bucket.awsthing.com/user/charlie you get a 301 redirect that sends you to s3-bucket.awsthing.com/#!/user/charlie (because that's where index.html and the app live, plus some routing info), which in turn returns a 200 ... ok, fine.
When a user visits mysite.com/user I have a proxy set up like so:
location /user/ {
    proxy_pass https://s3-bucket.awsthing.com/user/;
}
which means the proxy makes a request to s3-bucket.awsthing.com/user, gets the 301 back, and the client ends up redirected to s3-bucket.awsthing.com/ ... not so good.
While it technically works, the user ends up exposed to the upstream server instead of staying behind the proxy.
Questions: 1) How can I keep the upstream server from being exposed? 2) Is there a way to not return the 301 to the client and only return the redirected 200 content?
I've tried just about everything I can think of, other than maybe doing some regex to send the proxy request directly to the /#! route (rough sketch below).
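For reference, a rough and untested sketch of that regex idea, assuming the bucket serves the app shell from /index.html at its root (the #! fragment never reaches the server, so the closest server-side equivalent is to fetch index.html directly):

location ~ ^/user/ {
    # Serve the app shell for any /user/... path instead of letting S3 answer
    # with its 301. Note: with hash-bang routing the path never reaches the
    # app's router, so this avoids the redirect but doesn't deep-link.
    rewrite ^ /index.html break;
    proxy_pass https://s3-bucket.awsthing.com;
}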

I found a solution to this:
location / {
    proxy_pass https://mybucket.amazonaws.com;
    proxy_intercept_errors on;
    error_page 301 =200 @hide-301;
}

location @hide-301 {
    proxy_pass https://mybucket.amazonaws.com;
}

Related

React Application within NGINX Docker cannot call API

I am currently running into a problem with my react application being served by an NGINX docker container. Here are the details:
My NGINX proxy to my API is working correctly, as I can call it using Postman from an external machine. The problem is that I cannot call it from within my frontend. Whenever my frontend makes any request (POST, GET, OPTIONS, etc.) to my API, the call goes to 127.0.0.1:8000, which makes the request fail because I am connecting from an external machine that isn't running anything on 127.0.0.1. Even when I set my react application to call the external IP that maps to the proxy, it still ends up requesting 127.0.0.1 for some reason.
I don't know if this is an NGINX or a react problem, but I would appreciate any help. I have been trying to solve this issue for quite some time, and even made a previous post that helped me identify the problem correctly, but not the root cause of it.
Here are what my config files look like:
NGINX: (nginx-proxy.conf)
upstream api {
    server backend:8000;
}

server {
    listen 8080;
    server_name 192.168.100.6;

    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS, PUT";
    add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range";
    add_header Access-Control-Expose-Headers "Content-Length,Content-Range";

    location /api/ {
        resolver 127.0.0.1;
        proxy_set_header Host $host;
        proxy_pass http://api;
    }

    # ignore cache frontend
    location ~* (service-worker\.js)$ {
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }

    location / {
        root /var/www/react-frontend;
        try_files $uri $uri/ /index.html;
    }
}
Screenshot of the Firefox network tab
The image in the link above shows all of my website resources being loaded from 192.168.100.6, but the moment I call my API, the request address changes to 127.0.0.1:8000, despite having the react application call 192.168.100.6/api/token (which does work on postman).
So after a lot of troubleshooting, I found the cause of my issue.
What actually happened is that all of my codebase was correct, and the proxy was indeed working as intended, but for some reason
docker-compose build
or even:
docker-compose build --no-cache
was not picking up my code changes (the build was still sending requests to the IP I was using in development).
The fix I arrived at was to remove the stale volume:
docker volume rm my-nginx-volume
and then rebuilding through docker-compose.

Serving React frontend and Flask backend using Nginx as a reverse proxy

I've been trying to set up a React frontend and a Flask backend using Nginx as a reverse proxy to differentiate the two. I have the Flask backend running a Gunicorn server on localhost:5000, but I can't seem to get the Nginx location block to register it. My config file looks like this:
server {
    listen 80;

    root /var/www/[react-app-domain-name]/html;
    index index.html index.htm;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;

    location / {
        try_files $uri $uri/ =404;
    }

    location /api {
        include proxy_params;
        proxy_pass http://localhost:5000;
    }
}
My understanding is that this should route all traffic to my React app at the root, except for requests starting with "/api", which should be routed to my Flask backend. However, when I try to access an /api route, all I get back is a 404 response. This also happens if I try to access it through curl on the command line.
Here's the 404 error log that I have:
2020/09/09 21:03:05 [crit] 36926#36926: *114 connect() to unix:/home/[name]/backend/backend.sock failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /api/ HTTP/1.0", upstream: "http://unix:/home/[name]/backend/backend.sock:/api/", host: "[hostname]"
Any help would be very much appreciated. I'm tearing my hair out here. Thanks.
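For what it's worth, the error log does not match the config above: nginx is connecting to unix:/home/[name]/backend/backend.sock rather than to http://localhost:5000, which suggests a different server block (for example a leftover Gunicorn-tutorial site in sites-enabled) is actually handling the request. A hedged sketch, assuming Gunicorn was in fact started with --bind on that socket and you want the loaded block to point at it (the path is the placeholder from the log):

location /api {
    include proxy_params;
    # Proxy to the unix socket the error log refers to. If Gunicorn is really
    # bound to localhost:5000 instead, keep proxy_pass http://localhost:5000;
    # and remove or disable whichever server block references the socket.
    proxy_pass http://unix:/home/[name]/backend/backend.sock;
}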

Hosting Docusaurus v2 using Nginx

I'm attempting to use nginx as the reverse proxy to host Docusaurus v2 on Google AppEngine.
Google AppEngine has HTTPS turned on, and Nginx listens on port 8080, so by default all requests are over HTTPS and the connections are managed by Google AppEngine.
However, I'm having an issue when users perform the following actions:
Reach the landing page
Go to the documentation (any page).
Refresh the page.
The user gets directed to port 8080 and not the HTTPS site of Docusaurus.
Without refreshing the page, the user is able to navigate the site successfully. It's when the user hits the refresh button that they get the redirect. Looking at the header information, I see the response pointing them to port 8080, but I'm not sure why that is happening.
Wondering if anyone has successfully been able to set up Docusaurus v2 with nginx?
My nginx config is as follows:
events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logs will appear on the Google Developer's Console when logged to this
    # directory.
    access_log /var/log/app_engine/app.log;
    error_log /var/log/app_engine/app.log;

    gzip on;
    gzip_disable "msie6";

    server {
        # Google App Engine expects the runtime to serve HTTP traffic from
        # port 8080.
        listen 8080;

        root /usr/share/nginx/www;
        index index.html index.htm;

        location / {
            if ($http_x_forwarded_proto = "http") {
                return 301 https://$server_name$request_uri;
            }
        }
    }
}
This is probably due to the Docusaurus site linking to directories without a trailing slash /, causing a redirect which by default is set up to include the port.
Looking into the Docusaurus build directory, you will see that your pages are generated as folders containing index.html files. Without the trailing / the server needs to redirect you to {page}/index.html.
Try calling the URL with the trailing / and no port, which should succeed:
https://{host}/docs/{page}/
To fix the problem, you can change the redirect rules to not include the port with the port_in_redirect directive:
server {
    listen 8080;
    port_in_redirect off;
    # More configuration
    ...
}
See the documentation for more details.
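Another option that may be worth a try, assuming nginx 1.11.8 or newer, is absolute_redirect off, which makes nginx emit relative Location headers so neither the internal port nor the hostname ends up in redirects:

server {
    listen 8080;
    # Emit relative redirects (e.g. "Location: /docs/page/") instead of
    # absolute ones that include the listen port.
    absolute_redirect off;
    # More configuration
    ...
}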

Can a ReactJS app with a router be hosted on S3 and fronted by an nginx proxy?

I may be twisting things about horribly, but... I was given a ReactJS application that has to be served out to multiple sub-domains, so
a.foo.bar
b.foo.bar
c.foo.bar
...
Each of these should point to a different instance of the application, but I don't want to run npm start for each one - that would be a crazy amount of server resources.
So I went to host these on S3. I have a bucket foo.bar and then directories under that for a b c... and set that bucket up to serve static web sites. So far so good - if I go to https://s3.amazonaws.com/foo.bar/a/ I will get the index page. However most things tend to break from there as there are non-relative links to things like /css/ or /somepath - those break because they aren't smart enough to realize they're being served from /foo.bar/a/. Plus we want a domain slapped on this anyway.
So now I need to map a.foo.bar -> https://s3.amazonaws.com/foo.bar/a/. We aren't hosting our domain with AWS, so I'm not sure if it's possible to front this with CloudFront or similar. Open to a solution along those lines, but I couldn't find it.
Instead, I stood up a simple nginx proxy. I also added forcing HTTPS and some other things while I had the proxy, something of the form:
server {
    listen 443;
    server_name foo.bar;

    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    # Redirect (*).foo.bar to (s3bucket)/(*)
    location / {
        index index.html index.htm;

        set $legit "0";
        set $index "";

        # First off, we lose the index document functionality of S3 when we
        # proxy requests. So we need to add that back on to our rewrites if
        # needed. This is a little dangerous, probably should find a better
        # way if one exists.
        if ($uri ~* "\.foo\.bar$") {
            set $index "/index.html";
        }
        if ($uri ~* "\/$") {
            set $index "index.html";
        }

        # If we're making a request to foo.bar (not a sub-host),
        # make the request directly to "production"
        if ($host ~* "^foo\.bar") {
            set $legit "1";
            rewrite /(.*) /foo.bar/production/$1$index break;
        }

        # Otherwise, take the sub-host from the request and use that for the
        # redirect path
        if ($host ~* "^(.*?)\.foo\.bar") {
            set $legit "1";
            set $subhost $1;
            rewrite /(.*) /foo.bar/$subhost/$1$index break;
        }

        # Anything else, give them foo.bar
        if ($legit = "0") {
            return 302 https://foo.bar;
        }

        # Perform the actual proxy forward
        proxy_pass https://s3.amazonaws.com/;
        proxy_set_header Host s3.amazonaws.com;
        proxy_set_header Referer https://s3.amazonaws.com;
        proxy_set_header User-Agent $http_user_agent;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Accept-Language $http_accept_language;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        sub_filter google.com example.com;
        sub_filter_once off;
    }
}
This works - I go to a.foo.bar, I get the index page I expect, and clicking around works. However, part of the application also does an OAuth-style login and expects the browser to be redirected back to the page at /reentry?token=foo... The problem is that this path only exists as a route in the React app, which a static web server like S3 knows nothing about, so you just get a 404 (or a 403, because I don't have an error page defined or forwarded yet).
So.... All that for the question...
Can I serve a ReactJS application from a dumb/static server like S3, and have it understand callbacks to its routes? Keep in mind that the index/error directives in S3 seem to be discarded when fronted with a proxy the way I have above.
OK, there was a lot in my original question, but the core of it really came down to: as a non-UI person, how do I make an OAuth workflow work with a React app? The callback URL in this case is a route, which doesn't exist if you unload the index.html page. If you're going directly against S3, this is solved by directing all errors to index.html, which reloads the routes and the callback works.
When fronted by nginx however, we lose this error->index.html routing. Fortunately, it's a pretty simple thing to add back:
location / {
    proxy_intercept_errors on;
    error_page 400 403 404 500 =200 /index.html;
You probably don't need all of those status codes - for S3, the big one is the 403. When you request a key that doesn't exist, S3 treats it as though you're trying to browse the bucket and gives you back a 403 Forbidden rather than a 404 Not Found. So in this case a response from S3 that results in a 403 gets redirected to /index.html, which reloads the app and its routes, and the callback to /callback?token=... works.
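Put together with the proxy from the original question, a minimal sketch of the simple single-path case (the foo.bar/production layout from above, without the per-subhost rewrites) might look like this:

location / {
    proxy_pass https://s3.amazonaws.com/foo.bar/production/;
    proxy_set_header Host s3.amazonaws.com;

    proxy_intercept_errors on;
    # S3 answers 403 (or 404) for keys that don't exist, so hand those back
    # to the app shell and let the React router pick up /reentry?token=...
    error_page 403 404 =200 /index.html;
}

The internal redirect to /index.html re-enters this same location, so index.html is fetched from the bucket and served with a 200 while the original URL (and query string) stays in the browser for the router to read.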
You can use Route 53 to buy domain names and then point them at your S3 bucket, and you can do this with as many domains as you like.
Strictly speaking, you don't need to touch CloudFront, but it's recommended since it's a CDN solution and better for the user experience.
When deploying applications to S3, all you need to keep in mind is that the code you deploy to it is going to run 100% on your user's browser. So no server stuff.

Reading post body in Nginx is downloading a DMS file

Following is my scenario:
When a user hits http://mypage.sso.com (my SSO endpoint), it authenticates the user and does a POST request to my site (https://mypage.com) with the authentication token in the POST body.
I am trying to read the POST body in nginx, store the $request_body in a cookie, and start my Angular application (i.e. index.html).
I have the following configuration in my nginx conf file (nginx version 1.11.2):
location / {
    root /app/UI/dist/public/;
    index index.html;

    proxy_http_version 1.1;
    proxy_set_header Host 127.0.0.1;
    proxy_pass $scheme://127.0.0.1:80/auth;

    add_header Set-Cookie lcid='$request_body';
    error_page 405 =200 $uri;
}

location /auth {
    return 200;
}
Now when a user hits mypage.sso.com, the browser downloads a DMS file instead of redirecting to my page (mypage.com). However, the cookie is properly set with the auth token taken from the POST body (I could see this in an already-open tab of mypage.com). When I remove
proxy_pass $scheme://127.0.0.1:80/auth; from my nginx conf, the SSO endpoint redirects properly to mypage.com, but then $request_body is empty and my cookie is set to an empty value.
What change should I make to properly set my cookie from the POST body data, redirect to mypage.com, and avoid downloading the DMS file?
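One thing that might explain the download itself (an assumption, not something stated above): return 200; sends an empty body whose Content-Type falls back to the server's default_type, and if that default is application/octet-stream the browser will save the response as a file. Forcing a text type on the /auth stub would at least rule that out:

location /auth {
    # Assumption: the file download is caused by the default
    # application/octet-stream content type on the empty 200 response.
    default_type text/html;
    return 200;
}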
