I am running a Django-based web application inside a set of Docker containers, and I'm trying to include both a REST API (using Django REST Framework) and the ReactJS app that consumes it. All my other apps are served over HTTPS, but I am running into Mixed Active Content errors when the React app hits the REST API inside the Docker network. The React app is hosted within my NGINX container and served up as a static site.
Here's the relevant config for my Nginx container:
# SSL Website
upstream django {
    server web:9000;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name *.domain.com;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/ssl/my_cert.crt;
    ssl_certificate_key /etc/nginx/ssl/my_key.key;
    ssl_stapling on;
    ssl_stapling_verify on;

    access_log /home/logs/access.log;
    error_log /home/logs/error.log;

    location / {
        include uwsgi_params;

        # Proxy settings
        proxy_pass http://django;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # REACT APPLICATION
    location /faqs {
        autoindex on;
        sendfile on;
        alias /usr/share/nginx/html/faqs;
    }
}
During development the React app was hitting my REST API from outside the network, so resource calls used HTTPS, like so:
axios.get('https://myapp.domain.com/api/')
and everything went relatively smoothly, barring the occasional CORS error.
However, now that both the React app and the API are running inside the Docker network, NGINX is not involved in the communication between containers, and the routes look like this:
axios.get('http://web:9000/api')
This gives me the aggravating Mixed Active Content Error.
I've seen multiple questions similar to this, but most are either not using Docker containers or suggest NGINX directives I've already got in my config file. Given the popularity of Docker for these kinds of loosely coupled applications, I would imagine solutions abound for this kind of problem. Sadly, I have not managed to come across any, so any suggestions would be greatly appreciated.
Since your application serves both an API and a web client from the same endpoint, you have a "gateway" in nginx that routes each request to the appropriate backend. So far, this is common practice (although you are missing a load balancer, but that's a different discussion).
All requests to your API should be made over HTTPS. You should also be serving your static site over HTTPS, with the same certificate, from the same domain. If this isn't the case, there is your problem.
Furthermore, all routes and URLs inside your React application should be relative. That means the React app doesn't need to know what your domain is. Ideally, neither should your API, although that is sometimes harder to achieve.
Your axios call, given that the React app is served from the same domain over HTTPS, should be:
axios.get('/api')
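For that relative call to work, nginx also has to know to send /api traffic to Django rather than to the static site. A minimal sketch of the extra location block, assuming the django upstream from your config and that all API routes live under /api (that prefix is an assumption about your URL scheme):

    # Route /api calls to the Django upstream so the React app can use
    # relative URLs on the same HTTPS origin (no mixed content).
    location /api {
        proxy_pass http://django;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

With that in place the browser only ever talks HTTPS to nginx, and nginx talks plain HTTP to web:9000 inside the Docker network, which is exactly what makes the mixed content error disappear.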
Related
I have a Docker swarm app with multiple services, and I am having problems with one service, which is a React app. My config looks like this:
location /web/ {
    include /etc/nginx/proxy-options/proxy.conf;
    set $webapp webapp;
    proxy_cache_bypass $http_pragma;
    proxy_pass http://$webapp$uri$is_args$args;

    sub_filter 'action="/' 'action="/web/';
    sub_filter 'href="/' 'href="/web/';
    sub_filter 'src="/' 'src="/web/';
    sub_filter_once off;
}

location /app-info/ {
    include /etc/nginx/proxy-options/proxy.conf;
    proxy_ssl_session_reuse off;
    proxy_redirect off;
    set $webapp webapp;
    proxy_pass http://$webapp$uri/$is_args$args;

    sub_filter 'action="/' 'action="/web/';
    sub_filter 'href="/' 'href="/web/';
    sub_filter 'src="/' 'src="/web/';
    sub_filter_once off;
}
webapp is the React application, also hosted in Docker and served with nginx. One of its pages is webapp/app-info.
When set up behind the reverse proxy, trying to access anything on webapp just gives me a white screen, and in the console I can see errors like the one below. At this point I'm out of ideas as to what I'm doing wrong.
Refused to apply style from 'https://example.com/web/static/css/main.23d98db9.chunk.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
Now, what I'm trying to achieve (besides just making /web work) is making only /app-info/ accessible from outside. I would like to (temporarily) block any access to webapp besides requests to /app-info. Is that possible? (One approach is sketched after the edit below.)
Edit: Forgot to mention - everything works perfectly fine with
location / {
    include /etc/nginx/proxy-options/proxy.conf;
    set $webapp webapp;
    proxy_pass http://$webapp$uri$is_args$args;
    proxy_redirect http://$webapp/ $scheme://$http_host/;
}
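As for making only /app-info/ reachable: nginx picks the most specific matching location for each request, so one approach (a sketch, untested against this particular swarm setup) is to keep only the /app-info/ proxy and deny everything else:

    # Keep proxying /app-info/ to the webapp service...
    location /app-info/ {
        include /etc/nginx/proxy-options/proxy.conf;
        set $webapp webapp;
        proxy_pass http://$webapp$uri/$is_args$args;
    }

    # ...and (temporarily) refuse everything else.
    location / {
        return 403;
    }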
I have the following website, which is a React-built site. I have an nginx load-balancing setup with two backend servers. The individual servers work perfectly, but behind the load balancer the site rarely loads, and looking at the browser dev tools there are a ton of 404 Not Found errors:
https://junoscan.skynetexplorers.com
I don't understand why the site does not load. Sometimes a browser will start working properly; for example, Brave Browser currently does not work on my desktop but started working on my cell phone. What is happening? How do I fix this behavior?
##
# Set Rate Limiting (DDoS protection)
##
limit_req_zone $binary_remote_addr zone=req_zone:10m rate=5r/s;

# This is the internal server behind the proxy
upstream bdipper_node {
    least_conn;
    server cluster.provider-0.prod.sjc1.akash.pub:31375;
    server cluster.provider-2.prod.ewr1.akash.pub:31639;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

# This is the public facing listening server AND configures SSL for the website
server {
    root /_next/static/chunks;
    sendfile on;
    tcp_nopush on;
    sendfile_max_chunk 1m;
    tcp_nodelay on;
    keepalive_timeout 65;
    listen 443 ssl;

    location / {
        limit_req zone=req_zone burst=20 nodelay;
        proxy_pass http://bdipper_node/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host:443;
    }
}

# This redirects http to https
server {
    listen 80;
    return 301 https://$host$request_uri;
}
I have an application that has a React frontend and a Python Flask backend. The frontend communicates with the server to perform specific operations, and the server API should only be used by the client.
I have deployed the whole application (client and server) to an Ubuntu virtual machine. The machine only has specific ports open to the public (5000, 443, 22). I have set up the Nginx configuration, and the frontend can be accessed from my browser via http://<ip:address>:5000. The server is running locally on a different port, 4000, which is not accessible to the public, as designed.
The problem is that when I access the client app and navigate to the pages that communicate with the server via http://127.0.0.1:4000, I get an error saying the connection was refused:
GET http://127.0.0.1:4000/ net::ERR_CONNECTION_REFUSED on my browser.
When I SSH into the VM and run the same request through curl (curl http://127.0.0.1:4000/), I get a response and everything works fine.
Is there a way I can deploy the server on the same VM such that when I access the client React app from my browser, the React app can reach the server without problems?
So after tinkering with this, I found a solution using Nginx. First, the reason curl works but the browser doesn't: http://127.0.0.1:4000 is resolved by the visitor's browser, so it points at the visitor's own machine, not at the VM. The summary of the fix is: run the server locally on a port that is not exposed to the public, say 4000, and serve your React app on the exposed port, in this case 5000.
Then add a proxy rule to your Nginx config that forwards any call starting with /api to the locally running server. See the config below:
server {
    # Exposed port and server name / IP address
    listen 5000 ssl;
    server_name 1.2.3.4 mydomain.com;

    # SSL
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;

    # Link to the React build
    root /var/www/html/build;
    index index.html index.htm;

    # Error and access logs for debugging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.html =404;
    }

    # Proxy any traffic beginning with /api to the Flask server
    location /api {
        include proxy_params;
        proxy_pass http://localhost:4000;
    }
}
Note that this means all your server endpoints need to begin with /api/..., and the user can also access an endpoint directly from the browser via http://<ip:address>:5000/api/endpoint.
You can mitigate this by having your client send a token to the server; the server will not run any commands without that token/authorization.
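At the nginx level you could add a coarse first check as well; a sketch, assuming the token travels in the standard Authorization header (this is no substitute for the Flask server actually validating the token):

    location /api {
        # Refuse requests that carry no Authorization header at all.
        # Only a coarse filter; the token must still be validated in Flask.
        if ($http_authorization = "") {
            return 401;
        }
        include proxy_params;
        proxy_pass http://localhost:4000;
    }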
I found the solution here and modified it to fit my specific need (Part two of solution). The other parts of the series can be found at Part one of solution and Part three of solution.
We are making a V2 Docusaurus website.
After building the website on the server, we can use it over HTTPS without problems. Here is part of my_server_block.conf:
server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://localhost:3002;
        proxy_redirect off;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}
On localhost, HTTP works. However, we now need to test HTTPS on localhost, and HTTPS returns an error even though I started the app with HTTPS=true yarn start: This site can’t provide a secure connection. localhost sent an invalid response. ERR_SSL_PROTOCOL_ERROR
Does anyone know what I should do to make HTTPS work on localhost?
Edit 1: I tried HTTPS=true SSL_CRT_FILE=certs/server.crt SSL_KEY_FILE=certs/server.key yarn start, and https://localhost:3001 still returned the same error. Note that certs/server.crt and certs/server.key are the files that make HTTPS work on our production server via nginx:
server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;
You are using Nginx, so use it for SSL offloading (your current config) and don't start HTTPS on the Docusaurus site. The user in the browser will use HTTPS, but Docusaurus itself will be using HTTP.
If you start HTTPS on the Docusaurus site while proxy-passing with the http protocol (proxy_pass http://localhost:3002;), the problem is obvious: a plain-HTTP connection to an HTTPS endpoint. You may of course proxy-pass with the https protocol (proxy_pass https://localhost:3002;), but that may need more advanced configuration. Just keep it simple and use SSL offloading in Nginx.
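Concretely, that means the nginx server block from the question stays the only place TLS appears; a minimal sketch (the X-Forwarded-Proto header is an optional addition so the app can tell the original request was HTTPS):

    server {
        listen 3001 ssl;
        ssl_certificate /certs/server.crt;
        ssl_certificate_key /certs/server.key;

        location / {
            # TLS terminates here; the backend speaks plain HTTP.
            proxy_pass http://localhost:3002;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

Then start Docusaurus with plain yarn start (no HTTPS=true), so it serves plain HTTP on the port nginx proxies to (3002 here).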
There is an issue with https support on localhost in react-dev-utils#^v9.0.3, which is a dependency of docusaurus.
https://github.com/facebook/create-react-app/issues/8075
https://github.com/facebook/create-react-app/pull/8079
It is fixed in react-dev-utils#10.1.0
Docusaurus 2 uses Create React App's utils internally and you might need to specify the path to your cert and key as per the instructions here. I'm not familiar with the server config so I can't help you there.
Maybe this answer will be helpful - How can I provide a SSL certificate with create-react-app?
I may be twisting things about horribly, but... I was given a ReactJS application that has to be served out to multiple sub-domains, so
a.foo.bar
b.foo.bar
c.foo.bar
...
Each of these should point to a different instance of the application, but I don't want to run npm start for each one - that would be a crazy amount of server resources.
So I went to host these on S3. I have a bucket foo.bar and then directories under that for a b c... and set that bucket up to serve static web sites. So far so good - if I go to https://s3.amazonaws.com/foo.bar/a/ I will get the index page. However most things tend to break from there as there are non-relative links to things like /css/ or /somepath - those break because they aren't smart enough to realize they're being served from /foo.bar/a/. Plus we want a domain slapped on this anyway.
So now I need to map a.foo.bar -> https://s3.amazonaws.com/foo.bar/a/. We aren't hosting our domain with AWS, so I'm not sure if it's possible to front this with CloudFront or similar. Open to a solution along those lines, but I couldn't find it.
Instead, I stood up a simple nginx proxy. I also added forced HTTPS and some other things while I had the proxy, something of the form:
server {
    listen 443 ssl;
    server_name foo.bar;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    # Redirect (*).foo.bar to (s3bucket)/(*)
    location / {
        index index.html index.htm;
        set $legit "0";
        set $index "";

        # First off, we lose the index document functionality of S3 when we
        # proxy requests. So we need to add that back on to our rewrites if
        # needed. This is a little dangerous, probably should find a better
        # way if one exists.
        if ($uri ~* "\.foo\.bar$") {
            set $index "/index.html";
        }
        if ($uri ~* "\/$") {
            set $index "index.html";
        }

        # If we're making a request to foo.bar (not a sub-host),
        # make the request directly to "production"
        if ($host ~* "^foo\.bar") {
            set $legit "1";
            rewrite /(.*) /foo.bar/production/$1$index break;
        }

        # Otherwise, take the sub-host from the request and use that for the
        # redirect path
        if ($host ~* "^(.*?)\.foo\.bar") {
            set $legit "1";
            set $subhost $1;
            rewrite /(.*) /foo.bar/$subhost/$1$index break;
        }

        # Anything else, give them foo.bar
        if ($legit = "0") {
            return 302 https://foo.bar;
        }

        # Perform the actual proxy forward
        proxy_pass https://s3.amazonaws.com/;
        proxy_set_header Host s3.amazonaws.com;
        proxy_set_header Referer https://s3.amazonaws.com;
        proxy_set_header User-Agent $http_user_agent;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Accept-Language $http_accept_language;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        sub_filter google.com example.com;
        sub_filter_once off;
    }
}
This works - I go to a.foo.bar, and I get the index page I expect, and clicking around works. However, part of the application also does an OAuth style login, and expects the browser to be redirected back to the page at /reentry?token=foo... The problem is that path only exists as a route in the React app, and that app isn't loaded by a static web server like S3, so you just get a 404 (or 403 because I don't have an error page defined or forwarded yet).
So.... All that for the question...
Can I serve a ReactJS application from a dumb/static server like S3 and have it understand callbacks to its routes? Keep in mind that the index/error directives in S3 seem to be discarded when fronted with a proxy the way I have above.
OK, there was a lot in my original question, but the core of it really came down to: as a non-UI person, how do I make an OAuth workflow work with a React app? The callback URL in this case is a route, which doesn't exist if you unload the index.html page. If you're going directly against S3, this is solved by directing all errors to index.html, which reloads the routes and the callback works.
When fronted by nginx however, we lose this error->index.html routing. Fortunately, it's a pretty simple thing to add back:
location / {
    proxy_intercept_errors on;
    error_page 400 403 404 500 =200 /index.html;
    # ... the rest of the proxy configuration from above ...
}
You probably don't need all of those status codes; for S3, the big one is the 403. When you request a page that doesn't exist, S3 treats it as though you're trying to browse the bucket and gives you back a 403 Forbidden rather than a 404 Not Found. So in this case a response from S3 that results in a 403 will get redirected to /index.html, which reloads the routes defined there, and the callback to /callback?token=... will work.
You can use Route53 to buy domain names and then point them toward your S3 bucket, and you can do this with as many domains as you like.
You don't, strictly speaking, need to touch CloudFront, but it's recommended since it is a CDN, which is better for the user experience.
When deploying applications to S3, all you need to keep in mind is that the code you deploy is going to run 100% in your user's browser. So no server stuff.