Running Docusaurus with HTTPS=true yields ERR_SSL_PROTOCOL_ERROR

We are building a Docusaurus v2 website.
After building the website on the server, we can serve it over https without problems. Here is the relevant part of my_server_block.conf:
server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://localhost:3002;
        proxy_redirect off;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}
On localhost, http works. However, we now need to test https on localhost, and https returns an error even though I started the dev server with HTTPS=true yarn start: This site can’t provide a secure connection. localhost sent an invalid response. ERR_SSL_PROTOCOL_ERROR
Does anyone know what I should do to make https work on localhost?
Edit 1: I tried HTTPS=true SSL_CRT_FILE=certs/server.crt SSL_KEY_FILE=certs/server.key yarn start; https://localhost:3001 still returned the same error. Note that certs/server.crt and certs/server.key are the files that make https work on our production server via nginx:
server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;

You are using Nginx, so use it for SSL offloading (your current config) and don't start https on the Docusaurus site. The user's browser will use https, but Docusaurus itself will use http.
If you start https on the Docusaurus site while proxy-passing with the http protocol (proxy_pass http://localhost:3002;), the problem is obvious: an http connection to an https endpoint. You could of course proxy-pass with the https protocol (proxy_pass https://localhost:3002;), but that may need more advanced configuration. Just keep it simple and use SSL offloading in Nginx.
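A minimal sketch of that offloading layout, reusing the ports and certificate paths from the question (assumption: the Docusaurus dev server is started with plain yarn start and listens on 3002):

server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;

    location / {
        # TLS terminates here; Docusaurus itself only ever speaks http
        proxy_pass http://localhost:3002;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

The browser talks https to nginx on port 3001; nothing on the Docusaurus side needs HTTPS=true.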

There is an issue with https support on localhost in react-dev-utils ^9.0.3, which is a dependency of Docusaurus:
https://github.com/facebook/create-react-app/issues/8075
https://github.com/facebook/create-react-app/pull/8079
It is fixed in react-dev-utils 10.1.0.
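If upgrading Docusaurus itself isn't an option yet, one way to check whether this bug is what you're hitting is to force the fixed version through Yarn's resolutions field (a sketch for Yarn 1; your installed Docusaurus may already pull in the fix). In package.json:

{
  "resolutions": {
    "react-dev-utils": "^10.1.0"
  }
}

Run yarn install afterwards so the lockfile is re-resolved against the pinned version.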

Docusaurus 2 uses Create React App's utils internally, so you might need to specify the path to your cert and key as per the instructions here. I'm not familiar with the server config, so I can't help you there.
Maybe this answer will be helpful - How can I provide a SSL certificate with create-react-app?
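If the production cert doesn't work locally (its names won't match localhost), a locally-trusted certificate is the usual route. A sketch using mkcert (assumes mkcert is installed; the file names are what mkcert generates by default):

# create a local CA once, then a cert valid for localhost
mkcert -install
mkcert localhost
# point the dev server at the generated files
HTTPS=true SSL_CRT_FILE=./localhost.pem SSL_KEY_FILE=./localhost-key.pem yarn start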

Related

NextJS 500 internal server error on deployed website. But build works PERFECTLY on local

The build is working perfectly on my local PC with pm2, no errors at all. Every page loads perfectly; there are no 404 or 500 errors in fetching files. It's great! This is EXACTLY how I want it to run.
But when I try to deploy this on Ubuntu with pm2, I am getting two sets of errors:
I'll put screenshots here:
https://i.imgur.com/IdnEH7r.png
Written form:
_app-a44cfb7405f734c3.js
_buildManifest.js
_ssgManifest.js
_middlewareManifest.js
(and others) are all giving me a 500 Internal Server Error no matter what I do.
Attempted Solutions
I've tried many approaches, and all of them end with this error/failure when navigating to my deployed website:
Upload manually with FileZilla.
Git clone from my repository, build on the server (no build errors), then deploy with pm2. No errors with pm2 either! But then I get the 404/500 errors.
I've tried this in different folders and with a host of different commands. I am completely out of ideas; I've uploaded my files, installed packages, and more.
Nginx error?
This might be an nginx error? But the nginx settings work perfectly fine for a brand new npx create-next-app@latest, following this exact tutorial to the letter: https://www.youtube.com/watch?v=x6ci2iCckWc&t=658s&ab_channel=DigitalCEO
My nginx file
"server {
server_name specialservername.com;
gzip on;
gzip_proxied any;
gzip_types application/javascript application/x-javascript text/css text/javascript;
gzip_comp_level 5;
gzip_buffers 16 8k;
gzip_min_length 256;
location /_next/static/ {
alias /var/www/frontend/.next/static/;
expires 365d;
access_log off;
}
#EDITS
location ~ ^/_next/static/(.*)$ {
root /.next;
try_files "/static/$1" "/server/static/o$1" #proxy_pass;
}
#END EDITS
location / {
proxy_pass http://127.0.0.1:3000; #change to 3001 for second app, but make sure second nextjs app starts on new port in packages.json "start": "next start -p 3001",
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
add_header Access-Control-Allow-Origin *;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/specialservername.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/specialservername.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host =specialservername.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name specialservername.com;
return 404; # managed by Certbot
}"
What I was Expecting
The NextJS build to be deployed on this server no different than it is on my local machine. On my local machine it's BEAUTIFUL!
If you're seeing an "Internal Server Error" when trying to access your Next.js application on Ubuntu with nginx, it's likely that there's an issue with your configuration.
Here are a few things you can try (concrete commands are sketched below):
Check your nginx error logs: look in the nginx error logs (typically /var/log/nginx/error.log) for messages that indicate what's causing the issue.
Check your Next.js logs: when running under pm2, the application's output is captured by pm2 rather than written into the .next directory, so read the pm2 logs for runtime errors.
Check your Next.js configuration: make sure next.config.js has the settings your production deployment needs, such as your build options and, if necessary, your asset prefix.
Check your environment variables: make sure any environment variables your application depends on are set correctly on your Ubuntu server.
Check permissions: make sure the build files on the server have sufficient read permissions.
If everything above works fine, try dockerizing your application with nginx and running it locally, then mimic the same setup on the server (Ubuntu); that would definitely give you some clue.
And lastly, don't panic. 😃
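A few concrete commands for the checks above (a sketch; "frontend" is a placeholder for your pm2 app name, and paths may differ on your server):

# nginx: validate the config and read recent errors
sudo nginx -t
sudo tail -n 50 /var/log/nginx/error.log
# pm2: Next.js runtime errors show up here, not in .next
pm2 logs frontend --lines 100
# permissions: the nginx worker must be able to read the static files
ls -l /var/www/frontend/.next/static/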

Nginx causes static build of React app to break

I'm trying to serve a static build of a ReactJS app using Nginx, but something really strange is happening: the stylesheet isn't getting applied and the image isn't loading. I can see in the developer tools that the resources are there; they just aren't getting applied. However, the javascript file is running--otherwise there wouldn't be any content on the screen.
What makes this even weirder is that I tried serving the files in the same directory using a python http server (command: python3 -m http.server 80), and it was fine; all of the assets loaded correctly.
Since it seems to be an nginx issue, here's my nginx config:
nginx.conf
events {
    worker_connections 1024;
}
http {
    resolver 127.0.0.11;

    # Http redirect to https (unless it's a challenge)
    server {
        listen 80;
        listen [::]:80;
        server_name ambitx.io www.ambitx.io wc.ambitx.io rk.ambitx.io;
        server_tokens off;
        include letsencrypt.conf;
        location / {
            return 301 https://$server_name$request_uri;
        }
    }

    # React frontend
    server {
        listen 443 default_server ssl http2;
        listen [::]:443 ssl http2;
        server_name ambitx.io www.ambitx.io;
        include ssl.conf;
        include letsencrypt.conf;
        location / {
            root /var/www/staticfiles;
            index index.html index.htm;
            try_files $uri /index.html =404;
        }
    }

    # Websocket backend
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name wc.ambitx.io;
        include ssl.conf;
        include letsencrypt.conf;
        location / {
            proxy_pass "http://wsserver:8080";
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
        }
    }

    # Rocket backend
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name rk.ambitx.io;
        include ssl.conf;
        include letsencrypt.conf;
        location / {
            proxy_pass "http://rocketserver:80";
        }
    }
}
letsencrypt.conf
location /.well-known/acme-challenge/ {
    root /var/www/certbot;
}
ssl.conf
ssl_certificate /etc/letsencrypt/live/ambitx.io/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/ambitx.io/privkey.pem;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don't use SSLv3. Ref: POODLE
ssl_prefer_server_ciphers on;
Thanks in advance for the help.
I figured it out: it turns out the Nginx server was missing its MIME types (the browser thought the css file was text/plain instead of text/css).
Usually the best practice is to add files to /etc/nginx/conf.d/ (and mount your docker volume there) instead of editing nginx.conf directly, but I wanted to be able to place other files in the /etc/nginx/ directory, so I decided to mount my docker volume there.
As it turns out, that's a bad idea: I overwrote a lot of other important config files inside the docker container. I could just copy all of those files into my docker volume and call it good, but I decided it would be worth doing it the "right" way so I don't mess things up in the future.
So now I have a docker volume mounted at /etc/nginx/conf.d/ and another volume mounted at /etc/nginx/lib/ so that I can include files without the main nginx.conf reading them as a server config.
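For anyone hitting the same symptom, the fix boils down to making sure the http block still includes nginx's MIME table. A minimal sketch (mime.types ships at this path in the official images and most distros; the inner server block is illustrative):

http {
    # without this include, every file falls back to default_type and
    # the browser refuses to apply a text/plain stylesheet
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        root /var/www/staticfiles;
        index index.html;
    }
}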

Nginx Load-Balancer Does Not Load React Site

I have the following website, which is a React-built site. I have an nginx load-balancer site with two backend servers. The individual servers work perfectly, but behind the load-balancer the site rarely loads, and looking at the browser dev tools there are a ton of 404 Not Found errors:
https://junoscan.skynetexplorers.com
I don't understand why the site does not load. Sometimes a browser will start working properly; for example, currently Brave Browser does not work on my desktop but started working on my cell phone. What is happening? How do I fix this behavior?
##
# Set Rate Limiting (DDoS protection)
##
limit_req_zone $binary_remote_addr zone=req_zone:10m rate=5r/s;

# This is the internal server behind the proxy
upstream bdipper_node {
    least_conn;
    server cluster.provider-0.prod.sjc1.akash.pub:31375;
    server cluster.provider-2.prod.ewr1.akash.pub:31639;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

# This is the public facing listening server AND configures SSL for the website
server {
    root /_next/static/chunks;
    sendfile on;
    tcp_nopush on;
    sendfile_max_chunk 1m;
    tcp_nodelay on;
    keepalive_timeout 65;
    listen 443 ssl;
    location / {
        limit_req zone=req_zone burst=20 nodelay;
        proxy_pass http://bdipper_node/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host:443;
    }
}

# This redirects http to https
server {
    listen 80;
    return 301 https://$host$request_uri;
}

Only allow a certain domain to access my Django backend

I'm currently developing an application which consists of a React frontend, which makes frequent requests to a Django backend. Both the React and Django applications are running on the same server.
My problem is I wish to hide my Django backend from the world, so it only accepts requests from my React application. To do so, I've been trying several configurations of ALLOWED_HOSTS in my Django settings.py, but so far none of them seem to be successful. An example route that I wish to hide is the following:
https://api.jobot.es/auth/user/1
At first I tried the following configuration:
ALLOWED_HOSTS=['jobot.es']
but while this hid the Django backend from the world, it also blocked the requests coming from the React app (at jobot.es). Changing the configuration to:
ALLOWED_HOSTS=['127.0.0.1']
enabled my React app to access the backend, but so could the rest of the world. When the Django backend is inaccessible from the outside world, a GET request to https://api.jobot.es/auth/user/1 should return a 400 "Bad Request" status.
The error I get when the React app fails to request data from the Django backend is the following:
Access to XMLHttpRequest at 'https://api.jobot.es/auth/login' from origin 'https://jobot.es' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
This is despite having allowed all CORS origins in settings.py with CORS_ORIGIN_ALLOW_ALL = True.
The url of my React application is https://jobot.es, while the url for the Django backend is https://api.jobot.es, but as both apps are hosted on the same server both urls resolve to the same ip address. On the server I'm using Nginx to redirect traffic accordingly to either the React app or the Django backend.
In case it is of any help, here are the Nginx configurations for the React app (first) and the Django backend (second):
React app Nginx configuration
server {
    server_name jobot.es www.jobot.es;
    access_log off;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/jobot.es/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jobot.es/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = jobot.es) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name jobot.es;
    listen 80;
    return 404; # managed by Certbot
}
Django backend Nginx configuration:
server {
    server_name api.jobot.es;
    access_log off;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/jobot.es/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jobot.es/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = api.jobot.es) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name api.jobot.es;
    listen 80;
    return 404; # managed by Certbot
}
I also attach the GitHub repositories for both the React app and the Django backend in case they are of help.
React App:
https://github.com/PaburoTC/jobot
DJango Backend:
https://github.com/PaburoTC/JoboBackend
Thank you in advance <3
You can't "hide" the Django application, since the React app, which would be contacting the Django backend, is running in users' browsers (i.e. in the outside world).
In other words, there is no separate "React application" connecting to your Django API backend, it's just the user's browser first requesting jobot.es, then api.jobot.es.
You could check for the referer header, but it has no real security benefit at all.
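For completeness, a referer check in nginx looks roughly like the sketch below (server names taken from the question); it only deters casual access, since the Referer header is trivially forged:

server {
    server_name api.jobot.es;

    location / {
        # reject requests whose Referer doesn't come from the frontend
        valid_referers server_names jobot.es www.jobot.es;
        if ($invalid_referer) {
            return 403;
        }
        proxy_pass http://127.0.0.1:8000;
    }
}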

How To Avoid Mixed Content with Docker Apps

I am running a Django-based web application inside a set of Docker containers, and I'm trying to include both a REST API (using django-REST-framework) and the ReactJS app that consumes it. All my other apps are served over HTTPS, but I am running into Mixed Active Content errors when the React app hits the REST API inside the Docker network. The React app is hosted in my NGINX container and served as a static site.
Here's the relevant config for my Nginx container:
# SSL Website
server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name *.domain.com;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/ssl/my_cert.crt;
    ssl_certificate_key /etc/nginx/ssl/my_key.key;
    ssl_stapling on;
    ssl_stapling_verify on;

    access_log /home/logs/error.log;
    error_log /home/logs/access.log;

    upstream django {
        server web:9000;
    }

    location / {
        include uwsgi_params;
        # Proxy settings
        proxy_pass http://django;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # REACT APPLICATION
    location /faqs {
        autoindex on;
        sendfile on;
        alias /usr/share/nginx/html/faqs;
    }
}
During development the React app was hitting my REST API from outside the network, so resource calls used https like so:
axios.get('https://myapp.domain.com/api/')
and everything went relatively smoothly, barring the occasional CORS error.
However, now that both the React app and the API are running inside the Docker network, NGINX is not involved in the communication between containers, and the routes look like:
axios.get('http://web:9000/api')
This gives me the aggravating Mixed Active Content Error.
I've seen multiple questions similar to this, but most either aren't using Docker containers or use NGINX directives I've already got in my config file. Given the popularity of Docker for this kind of loosely coupled application, I would imagine solutions abound for this problem. Sadly I have not come across any, so any suggestions would be greatly appreciated.
Since your application exposes both an API and a web client from the same endpoint, you have a "gateway" in nginx that routes all requests to one or the other. So far, common practice (although you are missing a load balancer, but that's a different discussion).
All requests to your API should be over https. You should also be serving your static site over https, with the same certificate, from the same domain. If this isn't the case, there is your problem.
Furthermore, all routes and urls inside your react application should be relative. That means the react app doesn't need to know what your domain is. Ideally neither should your API, although that is sometimes harder to do.
Your axios call, given that the react app is served from the same domain over https, should be:
axios.get('/api')
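With a relative URL the browser resolves /api against the page's https origin, so the request flows back through the nginx gateway instead of going straight at the container. In the config above that means adding (or relying on) a location that hands /api to the Django upstream; a sketch reusing the upstream django { server web:9000; } block from the question:

location /api {
    # the browser only ever talks https to nginx; nginx talks plain
    # http to the Django container over the internal Docker network
    proxy_pass http://django;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
}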
