I uploaded my SSL certificate to GAE. The form is not displaying one of the subdomains in the certificate, "images.floridata.com". The other subdomains, which are mapped to ghs.googlehosted.com, appear as checkboxes that can be ticked to activate SSL for each subdomain. But images.floridata.com, which is mapped to c.storage.googleapis.com, does not appear.
We use Google's Cloud DNS. Can someone tell me how to enable SSL for this subdomain?
If I don't enable SSL on this subdomain, will the user get "mixed content" errors?
My site is a Golang app, so my app.yaml file has a "secure: always" entry. Would this prevent images from being delivered via http and thereby causing "mixed content" errors?
thanks!
The "c.storage.googleapis.com" DNS redirect feature does not work for HTTPS addresses. It's HTTP-only.
In order to handle custom domains via HTTPS, you'll need to set up Google Cloud Load Balancing, register your SSL certificate with it, and then configure it to be backed by a GCS bucket.
I fixed this problem using a proxy on Nginx, Apache, or similar.
In my case, after two weeks of testing Firebase and Load Balancing, I found this solution, and it works fine for me over HTTPS on my own domain.
https://github.com/presslabs/gs-proxy/blob/master/nginx.conf
Or you can proxy a subfolder using this solution:
upstream gs {
    server storage.googleapis.com:443;
    keepalive 128;
}
server {
    ## YOUR CURRENT CONFIG ##
    location ~ /cdn/(.*)$ {
        proxy_set_header Host storage.googleapis.com;
        proxy_pass https://gs/BUCKETNAME/subpath/$1;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_intercept_errors on;
        proxy_hide_header alt-svc;
        proxy_hide_header X-GUploader-UploadID;
        proxy_hide_header alternate-protocol;
        proxy_hide_header x-goog-hash;
        proxy_hide_header x-goog-generation;
        proxy_hide_header x-goog-metageneration;
        proxy_hide_header x-goog-stored-content-encoding;
        proxy_hide_header x-goog-stored-content-length;
        proxy_hide_header x-goog-storage-class;
        proxy_hide_header x-xss-protection;
        proxy_hide_header accept-ranges;
        proxy_hide_header Set-Cookie;
        proxy_ignore_headers Set-Cookie;
    }
    # location / { ... }
}
Depending on your needs, you may also have to enable Access-Control-Allow-Origin (CORS) on the Cloud Storage bucket.
A proxy is cheaper than Load Balancing, and if you care about SEO it is a good choice.
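If you do need CORS, one option is to configure it on the bucket; alternatively, the proxy itself can answer it. A minimal sketch, assuming a wildcard origin that you would lock down to your real domains (these lines would sit inside the /cdn/ location above):
add_header Access-Control-Allow-Origin "*" always;
add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS" always;
# Answer preflight requests at the proxy without hitting the bucket.
if ($request_method = OPTIONS) {
    return 204;
}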
I'm currently developing an application which consists of a React frontend, which makes frequent requests to a Django backend. Both the React and Django applications are running on the same server.
My problem is I wish to hide my Django backend from the world, so it only accepts requests from my React application. To do so, I've been trying several configurations of ALLOWED_HOSTS in my Django settings.py, but so far none of them seem to be successful. An example route that I wish to hide is the following:
https://api.jobot.es/auth/user/1
At first I tried the following configuration:
ALLOWED_HOSTS=['jobot.es']
but while this hid the Django backend from the world, it also blocked the requests coming from the React app (at jobot.es). Changing the configuration to:
ALLOWED_HOSTS=['127.0.0.1']
allowed my React app to access the backend, but so could the rest of the world. When the Django backend is properly hidden from the outside world, a GET request to https://api.jobot.es/auth/user/1 should return a 400 "Bad Request" status.
The error I get when the React app fails to request data from the Django backend is the following:
Access to XMLHttpRequest at 'https://api.jobot.es/auth/login' from origin 'https://jobot.es' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
However, in settings.py I have allowed all CORS origins with CORS_ORIGIN_ALLOW_ALL = True.
The URL of my React application is https://jobot.es, while the URL of the Django backend is https://api.jobot.es, but as both apps are hosted on the same server, both URLs resolve to the same IP address. On the server I'm using Nginx to route traffic to either the React app or the Django backend accordingly.
In case it is of any help, here are the Nginx configurations for the React app (first) and the Django backend (second):
React app Nginx configuration
server {
    server_name jobot.es www.jobot.es;
    access_log off;
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/jobot.es/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jobot.es/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = jobot.es) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name jobot.es;
    listen 80;
    return 404; # managed by Certbot
}
Django backend Nginx configuration:
server {
    server_name api.jobot.es;
    access_log off;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/jobot.es/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jobot.es/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = api.jobot.es) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name api.jobot.es;
    listen 80;
    return 404; # managed by Certbot
}
I'm also attaching the GitHub repositories for both the React app and the Django backend, in case they are of help.
React App:
https://github.com/PaburoTC/jobot
Django Backend:
https://github.com/PaburoTC/JoboBackend
Thank you in advance <3
You can't "hide" the Django application, since the React app, which would be contacting the Django backend, is running in users' browsers (i.e. in the outside world).
In other words, there is no separate "React application" connecting to your Django API backend, it's just the user's browser first requesting jobot.es, then api.jobot.es.
You could check for the referer header, but it has no real security benefit at all.
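If you want the deterrent anyway, a minimal sketch using Nginx's referer module could go in the api.jobot.es server block (hypothetical; anyone can set the header manually, so treat it as cosmetic):
location / {
    # Reject requests whose Referer is not the frontend; requests with no
    # Referer at all are also rejected, since "none" is not listed.
    valid_referers jobot.es www.jobot.es;
    if ($invalid_referer) {
        return 403;
    }
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Real-IP $remote_addr;
}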
We are making a V2 Docusaurus website.
After building the website on the server, we can serve it over https without problems. Here is part of my_server_block.conf:
server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://localhost:3002;
        proxy_redirect off;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}
On localhost, http works. However, we now need to test https on localhost, and https returns an error even though I started the site with HTTPS=true yarn start: "This site can't provide a secure connection. localhost sent an invalid response. ERR_SSL_PROTOCOL_ERROR"
Does anyone know what I should do to make https work in localhost?
Edit 1: I tried HTTPS=true SSL_CRT_FILE=certs/server.crt SSL_KEY_FILE=certs/server.key yarn start; https://localhost:3001 still returned the same error. Note that certs/server.crt and certs/server.key are the files that make https work on our production server via nginx:
server {
    listen 3001 ssl;
    ssl_certificate /certs/server.crt;
    ssl_certificate_key /certs/server.key;
You are using Nginx, so use it for SSL offloading (your current config) and don't start https on the Docusaurus site. The user's browser will use https, but Docusaurus itself will be serving plain http.
If you start https on the Docusaurus site while proxying with the http protocol (proxy_pass http://localhost:3002;), the problem is obvious: you are connecting with http to an https endpoint. You can of course proxy with the https protocol (proxy_pass https://localhost:3002;), but that may need more advanced configuration. Just keep it simple and do SSL offloading in Nginx.
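For completeness, if you did want Nginx to talk https to the Docusaurus dev server, the proxy side would look roughly like this sketch (standard Nginx directives; verification is switched off on the assumption of a self-signed dev certificate):
location / {
    proxy_pass https://localhost:3002;
    proxy_ssl_server_name on;  # send SNI to the upstream
    proxy_ssl_verify off;      # a self-signed dev cert would fail verification
    proxy_set_header Host $host:$server_port;
}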
There is an issue with https support on localhost in react-dev-utils#^v9.0.3, which is a dependency of docusaurus.
https://github.com/facebook/create-react-app/issues/8075
https://github.com/facebook/create-react-app/pull/8079
It is fixed in react-dev-utils#10.1.0
Docusaurus 2 uses Create React App's utils internally and you might need to specify the path to your cert and key as per the instructions here. I'm not familiar with the server config so I can't help you there.
Maybe this answer will be helpful - How can I provide a SSL certificate with create-react-app?
I may be twisting things about horribly, but... I was given a ReactJS application that has to be served out to multiple sub-domains, so
a.foo.bar
b.foo.bar
c.foo.bar
...
Each of these should point to a different instance of the application, but I don't want to run npm start for each one - that would be a crazy amount of server resources.
So I went to host these on S3. I have a bucket foo.bar, with directories under that for a, b, c..., and set that bucket up to serve static web sites. So far so good - if I go to https://s3.amazonaws.com/foo.bar/a/ I get the index page. However, most things break from there, as there are non-relative links to things like /css/ or /somepath - those break because they aren't smart enough to realize they're being served from /foo.bar/a/. Plus we want a domain slapped on this anyway.
So now I need to map a.foo.bar -> https://s3.amazonaws.com/foo.bar/a/. We aren't hosting our domain with AWS, so I'm not sure if it's possible to front this with CloudFront or similar. Open to a solution along those lines, but I couldn't find it.
Instead, I stood up a simple nginx proxy. While I was at it, I also added forcing to https and some other things, something of the form:
server {
    listen 443;
    server_name foo.bar;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    # Redirect (*).foo.bar to (s3bucket)/(*)
    location / {
        index index.html index.htm;
        set $legit "0";
        set $index "";
        # First off, we lose the index document functionality of S3 when we
        # proxy requests. So we need to add that back on to our rewrites if
        # needed. This is a little dangerous, probably should find a better
        # way if one exists.
        if ($uri ~* "\.foo\.bar$") {
            set $index "/index.html";
        }
        if ($uri ~* "\/$") {
            set $index "index.html";
        }
        # If we're making a request to foo.bar (not a sub-host),
        # make the request directly to "production"
        if ($host ~* "^foo\.bar") {
            set $legit "1";
            rewrite /(.*) /foo.bar/production/$1$index break;
        }
        # Otherwise, take the sub-host from the request and use that for the
        # redirect path
        if ($host ~* "^(.*?)\.foo\.bar") {
            set $legit "1";
            set $subhost $1;
            rewrite /(.*) /foo.bar/$subhost/$1$index break;
        }
        # Anything else, give them foo.bar
        if ($legit = "0") {
            return 302 https://foo.bar;
        }
        # Perform the actual proxy forward
        proxy_pass https://s3.amazonaws.com/;
        proxy_set_header Host s3.amazonaws.com;
        proxy_set_header Referer https://s3.amazonaws.com;
        proxy_set_header User-Agent $http_user_agent;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Accept-Language $http_accept_language;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        sub_filter google.com example.com;
        sub_filter_once off;
    }
}
This works - I go to a.foo.bar, and I get the index page I expect, and clicking around works. However, part of the application also does an OAuth style login, and expects the browser to be redirected back to the page at /reentry?token=foo... The problem is that path only exists as a route in the React app, and that app isn't loaded by a static web server like S3, so you just get a 404 (or 403 because I don't have an error page defined or forwarded yet).
So.... All that for the question...
Can I serve a ReactJS application from a dumb/static server like S3, and have it understand callbacks to its routes? Keep in mind that the index/error directives in S3 seem to be discarded when fronted with a proxy the way I have above.
OK, there was a lot in my original question, but the core of it really came down to: as a non-UI person, how do I make an OAuth workflow work with a React app? The callback URL in this case is a route, which doesn't exist if you unload the index.html page. If you're going directly against S3, this is solved by directing all errors to index.html, which reloads the routes and the callback works.
When fronted by nginx however, we lose this error->index.html routing. Fortunately, it's a pretty simple thing to add back:
location / {
    proxy_intercept_errors on;
    error_page 400 403 404 500 =200 /index.html;
    # ... the rest of the proxy configuration from the question ...
}
You probably don't need all of those status codes - for S3, the big one is the 403. When you request a page that doesn't exist, S3 treats it as though you're trying to browse the bucket and gives you back a 403 Forbidden rather than a 404 Not Found. So in this case a response from S3 that results in a 403 gets redirected to /index.html, which loads the app and its routes again, and the callback to /callback?token=... works.
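Putting it together with the proxy from the original question, the relevant part of the location block ends up roughly like this (same placeholder bucket and hosts as above):
location / {
    # ... the $index / rewrite logic from the question ...
    # Turn any S3 error (notably the 403 for a missing key) into the SPA
    # shell, so client-side routes like /reentry?token=... load the app.
    proxy_intercept_errors on;
    error_page 400 403 404 500 =200 /index.html;
    proxy_pass https://s3.amazonaws.com/;
    proxy_set_header Host s3.amazonaws.com;
}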
You can use Route53 to buy domain names and point them at your S3 bucket, and you can do this with as many domains as you like.
You don't, strictly speaking, need to touch CloudFront, but it's recommended: it is a CDN solution, which is better for the user experience.
When deploying applications to S3, all you need to keep in mind is that the code you deploy to it is going to run 100% on your user's browser. So no server stuff.
I am running a Django based web application inside a set of Docker containers, and I'm trying to include both a REST API (using django-REST-framework) and the ReactJS app that consumes it. All my other apps are served over HTTPS, but I am running into Mixed Active Content errors when the React app hits the REST API inside the Docker network. The React app is being hosted within my NGINX container and served up as a static site.
Here's the relevant config for my Nginx container:
# SSL Website
# Note: "upstream" must live at the http level, outside the server block.
upstream django {
    server web:9000;
}
server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name *.domain.com;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/ssl/my_cert.crt;
    ssl_certificate_key /etc/nginx/ssl/my_key.key;
    ssl_stapling on;
    ssl_stapling_verify on;
    access_log /home/logs/access.log;
    error_log /home/logs/error.log;
    location / {
        include uwsgi_params;
        # Proxy settings
        proxy_pass http://django;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    # REACT APPLICATION
    location /faqs {
        autoindex on;
        sendfile on;
        alias /usr/share/nginx/html/faqs;
    }
}
During development, the React app was hitting my REST API from outside the network, so resource calls used https, like so:
axios.get('https://myapp.domain.com/api/')
and everything went relatively smoothly, barring the occasional CORS error.
However, now that both the React app and the API are running inside the Docker network, NGINX is not involved in the communication between containers, and the routes look like:
axios.get('http://web:9000/api')
This gives me the aggravating Mixed Active Content Error.
I've seen multiple questions similar to this, but most either aren't using Docker containers or use NGINX directives I've already got in my config file. Given the popularity of Docker for this kind of loosely coupled application, I would imagine solutions abound for this kind of problem. Sadly I have not managed to come across any, and as such, any suggestions would be greatly appreciated.
Since your application serves both an API and a web client from the same endpoint, you have a "gateway" in nginx that routes all requests to one or the other. So far, common practice (although you are missing a load balancer, but that's a different discussion).
All requests to your API should be over https. You should also be serving your static site over https, with the same certificate, from the same domain. If that isn't the case - there is your problem.
Furthermore, all routes and URLs inside your React application should be relative. That means the React app doesn't need to know what your domain is. Neither should your API, ideally, although that is sometimes harder to achieve.
Your axios call, given that the React app is served from the same domain over https, should be
axios.get('/api')
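As a sketch of that layout (the hostname, the /api prefix, and the file paths are assumptions, not taken from your config): the gateway terminates TLS once and routes by path, the plain-http hop to the Django container happens server-side, and the browser never sees an http:// URL, so no mixed content can occur:
server {
    listen 443 ssl;
    server_name myapp.domain.com;
    # ... your existing ssl_certificate / ssl_certificate_key lines ...

    # Relative calls such as axios.get('/api/...') land here.
    location /api {
        proxy_pass http://django;  # plain http inside the Docker network is fine
        proxy_set_header Host $http_host;
    }

    # Everything else is the static React build.
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }
}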
I have an Angular app that consists of a single index.html file. The only other files are main.css and some image assets. I've already rediscovered that there is no way to use S3 web hosting to serve it, so I'm trying to set up nginx as a proxy. I have done this before, but it was years ago, and it wasn't with an Angular app and HTML5 push state. Here is the current server block from my nginx config.
server {
    server_name foo.com;
    set $s3_bucket 'foo.com.s3.amazonaws.com';
    proxy_http_version 1.1;
    proxy_set_header Host $s3_bucket;
    proxy_set_header Authorization '';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers "Set-Cookie";
    proxy_buffering off;
    proxy_intercept_errors on;
    resolver 172.16.0.23 valid=300s;
    resolver_timeout 10s;
    location ~* ^/(assets|styles)/(.*) {
        set $url_full '$1/$2';
        proxy_pass http://$s3_bucket/live/$url_full;
    }
    location / {
        rewrite ^ /live/index.html break;
        proxy_pass http://$s3_bucket;
    }
}
I don't assume anything with this config. It could all be completely wrong.
It does "work". I can go to foo.com and the site serves and I can navigate and it all work wonders. But it won't load any URL that is not /. All other redirect to / and that is a problem.
What am I doing wrong? All help appreciated.