I'm attempting to use nginx as a reverse proxy to host Docusaurus v2 on Google App Engine.
Google App Engine has HTTPS turned on, and nginx listens on port 8080. Hence, by default, all requests are over HTTPS and the connections are managed by Google App Engine.
However, I'm having an issue when users perform the following actions:
Reach the landing page
Go to the documentation (any page).
Refresh the page.
The user gets directed to port 8080 instead of the HTTPS Docusaurus site.
Without refreshing, the user can navigate the site successfully. It's only when the user hits the refresh button that they get the redirect. Looking at the header information, I see the response pointing them to port 8080, but I'm not sure why that is happening.
Has anyone successfully managed to set up Docusaurus v2 with nginx?
My nginx config is as follows:
events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logs will appear on the Google Developer's Console when logged to this
    # directory.
    access_log /var/log/app_engine/app.log;
    error_log /var/log/app_engine/app.log;

    gzip on;
    gzip_disable "msie6";

    server {
        # Google App Engine expects the runtime to serve HTTP traffic from
        # port 8080.
        listen 8080;

        root /usr/share/nginx/www;
        index index.html index.htm;

        location / {
            if ($http_x_forwarded_proto = "http") {
                return 301 https://$server_name$request_uri;
            }
        }
    }
}
This is probably due to the Docusaurus site linking to directories without a trailing slash /, causing a redirect which, by default, is set up to include the port.
Looking into the Docusaurus build directory, you will see that your pages are defined as folders containing index.html files. Without the /, the server needs to redirect you to {page}/index.html.
Try calling the URL with a trailing / and no port, which should succeed:
https://{host}/docs/{page}/
To fix the problem, you could change the redirect rules to not include the port, using the port_in_redirect directive:
server {
    listen 8080;
    port_in_redirect off;
    # More configuration
    ...
}
See the documentation for more details.
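If you'd rather drop the host and port from nginx-generated redirects entirely, nginx 1.11.8+ also has absolute_redirect, which makes these redirects relative so the browser keeps whatever scheme and port it was already using. A sketch along the same lines:

```nginx
server {
    listen 8080;
    # Emit relative Location headers (e.g. "/docs/page/" instead of
    # "http://host:8080/docs/page/"). Available since nginx 1.11.8.
    absolute_redirect off;
}
```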
Related
I have an application with a React frontend and a Python Flask backend. The frontend communicates with the server to perform specific operations, and the server API should only be used by the client.
I have deployed the whole application (client and server) to an Ubuntu virtual machine. The machine only has specific ports open to the public (5000, 443, 22). I have set up the Nginx configuration, and the frontend can be accessed from my browser via http://<ip:address>:5000. The server runs locally on a different port, 4000, which is not accessible to the public, as designed.
The problem is that when I access the client app and navigate to the pages that communicate with the server via http://127.0.0.1:4000, the browser shows a connection error:
GET http://127.0.0.1:4000/ net::ERR_CONNECTION_REFUSED
When I SSH into the VM and run the same request through curl (curl http://127.0.0.1:4000/), I get a response and everything works fine.
Is there a way I can deploy the server on the same VM such that, when I access the client React app from my browser, the React app can reach the server without problems?
After tinkering with this, I found a solution using Nginx. In summary: run the server locally on a port that is not exposed to the public (say 4000), and expose your React app on the public port (5000 in this case).
Then use a proxy in your Nginx config that forwards any call starting with /api to the local server. See the config below:
server {
    # Exposed port and server name / IP address
    # (the standalone "ssl on;" directive is deprecated; enable SSL on the listen line)
    listen 5000 ssl;
    server_name 1.2.3.4 mydomain.com;

    # SSL
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;

    # Link to the React build
    root /var/www/html/build;
    index index.html index.htm;

    # Error and access logs for debugging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.html =404;
    }

    # Forward any traffic beginning with /api to the Flask server
    location /api {
        include proxy_params;
        proxy_pass http://localhost:4000;
    }
}
This means all your server endpoints need to begin with /api/..., and a user can also reach an endpoint directly from the browser via http://<ip:address>:5000/api/endpoint.
You can mitigate this by having your client send a token to the server, so the server will not run any commands without that token/authorization.
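One coarse way to enforce that at the proxy layer, a sketch with full token validation still living in the Flask app, is to refuse /api requests that carry no Authorization header at all:

```nginx
location /api {
    # Reject requests with no Authorization header outright;
    # real token validation still happens in the Flask app.
    if ($http_authorization = "") {
        return 401;
    }
    include proxy_params;
    proxy_pass http://localhost:4000;
}
```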
I found the solution here and modified it to fit my specific needs.
Here's my setup:
nginx server acting as a reverse proxy, routing all requests at mysite.com (which I control)
React app for some subsections of the site on s3-bucket.awsthing.com (which I don't control)
If you visit s3-bucket.awsthing.com/user/charlie, you get a 301 redirect to s3-bucket.awsthing.com/#!/user/charlie (because that's the index.html where the app lives, plus some routing info), which in turn returns a 200... ok, fine.
When a user visits mysite.com/user, I have a proxy set up like so:
location /user/ {
    proxy_pass http://s3-bucket.awsthing.com/user/;
}
which means the proxy makes a request to s3-bucket.awsthing.com/user, gets the 301 back, and the client is then redirected to s3-bucket.awsthing.com/... not so good.
While it functions, the user is now exposed to the upstream server rather than staying behind the proxy.
Questions: 1) How can I avoid exposing the upstream server? 2) Is there a way to not return the 301 to the client, and only return the 200 content it redirects to?
I've tried just about everything I can think of, other than maybe doing some regex to send the proxy request directly to the /#! route.
I found a solution to this:
location / {
    proxy_pass http://mybucket.amazonaws.com;
    proxy_intercept_errors on;
    error_page 301 =200 @hide-301;
}

location @hide-301 {
    proxy_pass http://mybucket.amazonaws.com;
}
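If intercepting the 301 proves fragile, another angle (a sketch, untested against S3) is to let the redirect happen but rewrite its Location header with proxy_redirect, so the client follows the redirect back through your own host and never sees the bucket:

```nginx
location /user/ {
    proxy_pass http://s3-bucket.awsthing.com/user/;
    # Rewrite Location headers from the bucket so redirects
    # stay on mysite.com instead of exposing the upstream host.
    proxy_redirect http://s3-bucket.awsthing.com/ /;
}
```

This only addresses hiding the upstream (question 1); the client still receives a redirect, just one that points at your own domain.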
I want to fetch some data, but when I deploy this to a server (Ubuntu 18.04) with Nginx, the fetch fails. Things I've tried:
Put a certificate in place to enable HTTPS on my domain.
Create a .env file with a variable that contains the complete URL to the API (because I'm using a proxy in development).
Add some headers to the request.
Change the nginx config.
But nothing... my application only works running on localhost:
axios.get(process.env.REACT_APP_API_URL) ...
The browser console (Safari) shows:
Origin https://mysubdomain.com is not allowed by Access-Control-Allow-Origin.
XMLHttpRequest cannot load https://mysubdomain.com due to access control checks.
Failed to load resource: Origin https://mysubdomain.com is not allowed by Access-Control-Allow-Origin.
Your server needs to return the following header value:
Access-Control-Allow-Origin: *
which means anyone can connect to the API.
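If the API sits behind nginx, one place to add that header is the nginx config itself. A minimal sketch; the location path and upstream port here are assumptions, since the question doesn't show the API's server block:

```nginx
location /api/ {
    # Allow cross-origin requests; lock this down to a specific
    # origin instead of * once things are working.
    add_header Access-Control-Allow-Origin * always;
    proxy_pass http://localhost:3000;
}
```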
Workaround
Go to the Chrome install folder and launch it with web security disabled (for local testing only):
chrome.exe --user-data-dir="<Some directory name to store temporary chrome data>" --disable-web-security
I'm no expert in nginx, but this works!
I edited my site file in /etc/nginx/sites-available/mysite like this:
location /anyAppLocation/ {
    proxy_method GET;
    proxy_pass_request_headers on;
    proxy_pass https://api.site.com;
    proxy_redirect default;
}
I am running a Django-based web application inside a set of Docker containers, and I'm trying to include both a REST API (using django-rest-framework) and the ReactJS app that consumes it. All my other apps are served over HTTPS, but I'm running into Mixed Active Content errors when the React app hits the REST API inside the Docker network. The React app is hosted in my NGINX container and served as a static site.
Here's the relevant config for my Nginx container:
# SSL Website

# "upstream" must be defined at the http level, outside the server block.
upstream django {
    server web:9000;
}

server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name *.domain.com;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/ssl/my_cert.crt;
    ssl_certificate_key /etc/nginx/ssl/my_key.key;
    ssl_stapling on;
    ssl_stapling_verify on;

    access_log /home/logs/access.log;
    error_log /home/logs/error.log;
    location / {
        include uwsgi_params;

        # Proxy settings
        proxy_pass http://django;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # REACT APPLICATION
    location /faqs {
        autoindex on;
        sendfile on;
        alias /usr/share/nginx/html/faqs;
    }
}
During development the React app was hitting my REST API from outside the network, so resource calls used HTTPS like so:
axios.get('https://myapp.domain.com/api/')
and everything went relatively smoothly, barring the occasional CORS error.
However, now that both the React app and the API are running inside the Docker network, NGINX is not involved in the communication between containers, and the routes look like:
axios.get('http://web:9000/api')
This gives me the aggravating Mixed Active Content error.
I've seen multiple similar questions, but most either don't use Docker containers or use NGINX directives I've already got in my config file. Given the popularity of Docker for these kinds of loosely coupled applications, I would imagine solutions abound for this kind of problem. Sadly I have not managed to come across any, and as such, any suggestions would be greatly appreciated.
Since your application serves both an API and a web client from the same endpoint, you have a "gateway" in nginx that routes all requests to either one. So far, common practice (although you are missing a load balancer, but that's a different discussion).
All requests to your API should be over https. You should also be serving your static site over https, with the same certificate, from the same domain. If this isn't the case, there is your problem.
Furthermore, all routes and URLs inside your React application should be relative. That means the React app doesn't need to know what your domain is. Neither, ideally, should your API, although that is sometimes harder to achieve.
Your axios call, given that the React app is served from the same domain over https, should be:
axios.get('/api')
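In nginx terms, the advice above boils down to serving the React build and the API from a single https origin, so the browser never talks to web:9000 directly. A rough sketch, reusing the host and upstream names from the question:

```nginx
server {
    listen 443 ssl http2;
    server_name myapp.domain.com;

    # React build, served as static files
    root /usr/share/nginx/html;
    index index.html;

    # Same-origin API: the browser requests /api/..., and nginx
    # forwards it to the Django container over the internal network.
    location /api/ {
        proxy_pass http://web:9000;
        proxy_set_header Host $http_host;
    }
}
```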
I'm getting a 404 for the root of my mobile site. My browser-detection code looks for a mobile user agent, sets the Vary header, and 301s to the mobile site.
Here is the main site config
server {
    listen 80;
    server_name www.mydomain.com;

    location / {
        if ($is_mobile) {
            add_header Vary "User-Agent";
            return 301 $scheme://m.mydomain.com$request_uri;
        }
    }
}
Here is the mobile site config
server {
    listen 80;
    server_name m.mydomain.com;

    root /var/www/mobile;
    index index.html;

    location / {
        try_files $uri $uri/ @dynamic;
    }

    location @dynamic {
        rewrite ^/(.*)$ /index.html last;
    }
}
I'm using the Firefox Override User Agent extension to test. If I go to www.mydomain.com, the app loads properly. However, when I switch to a mobile user agent, Nginx returns a 404.
Nginx returns 200 for pages entered manually:
http://m.mydomain.com/index.html
http://m.mydomain.com/about.html
http://m.mydomain.com/pricing.html
Since both index and root are set, shouldn't the site point http://m.mydomain.com/ to http://m.mydomain.com/index.html?
If not, what is the best standardized approach to get this working?
UPDATE: Added config for mobile detection
Here is the config I use in the main nginx.conf file for mobile detection:
map $http_user_agent $is_desktop {
    default 0;
    ~*linux.*android|windows\s+(?:ce|phone) 0; # exceptions to the rule
    ~*spider|crawl|slurp|bot 1; # bots
    ~*windows|linux|os\s+x\s*[\d\._]+|solaris|bsd 1; # OSes
}

map $is_desktop $is_mobile {
    1 0;
    0 1;
}
I see nothing wrong. I tried your config file in my devbox and it worked: the request was redirected to m.mydomain.com and index.html was served.
So something else could be causing the issue. How did you set $is_mobile? Maybe setting $is_mobile has some side effect, and the if block in your question is not the one nginx picks to serve the mobile request; your mobile request may be landing in another location block that doesn't know how to handle it.
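To see what nginx actually computed for a given request, a temporary debugging aid (a sketch) is to echo the variable back in a response header and inspect it in the browser's network tab:

```nginx
location / {
    # Temporary debug header: shows the computed $is_mobile
    # value on every response, including redirects and errors.
    add_header X-Is-Mobile $is_mobile always;
    try_files $uri $uri/ =404;
}
```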
In sites-available there was a file named 1 which was symlinked into sites-enabled. Not sure how it got there, but I unlinked and deleted it, restarted Nginx, and / now resolves to index.html properly. Very odd.