I'm trying to set up a cookieless domain in accordance with Google's instructions for best serving static files. I'd like to do this on a subdomain instead of a completely separate domain. Does this serve the purpose? Can I have cookies on my main domain, but a cookieless subdomain of it that serves static files? Does it matter whether the cookieless subdomain is on the same IP address or not (i.e. served from the same location vs. a CDN)?
Thanks.
The point of a cookieless domain for static files is to prevent your site's cookies from being sent and received when the browser fetches static files. You need to verify that your solution actually produces this behavior. You can use tools like HttpWatch to check.
EDIT: I found a very useful link about this:
http://www.ravelrumba.com/blog/static-cookieless-domain/
You need to add the following headers and hide the Set-Cookie header with fastcgi_hide_header:
server {
    listen 80;
    server_name yourdomain.com;

    location ~* \.(jpg|jpeg|gif|css|png|js|ico|svg|woff|ttf|eot)$ {
        access_log off;
        expires 30d;
        add_header Pragma public;
        add_header Cache-Control public;
        fastcgi_hide_header Set-Cookie;
    }
}
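If you'd rather serve the static files from a dedicated subdomain, as the question asks, a minimal sketch might look like the block below. The static.yourdomain.com name and the root path are assumptions, and note this only stays cookieless if your app scopes its cookies to yourdomain.com rather than .yourdomain.com (a leading-dot domain would make the browser send them to the subdomain too).

```nginx
# Hypothetical server block for a cookieless static subdomain.
server {
    listen 80;
    server_name static.yourdomain.com;
    root /var/www/static;

    access_log off;
    expires 30d;
    add_header Cache-Control public;

    # Never emit cookies from this host; hide any that an
    # upstream or FastCGI backend might try to add.
    fastcgi_hide_header Set-Cookie;
    proxy_hide_header Set-Cookie;
}
```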
I am currently running into a problem with my react application being served by an NGINX docker container. Here are the details:
My NGINX proxy to my API is working correctly, as I can call it using Postman from an external machine. The problem is that I cannot call it from within my frontend. Whenever my frontend makes any request (POST, GET, OPTIONS, etc.) to my API, the call ends up going to 127.0.0.1:8000, which fails because I am connecting from an external machine that isn't running anything on 127.0.0.1. Even when I set my react application to call the external IP that maps to the proxy, it ends up requesting 127.0.0.1 for some reason.
I don't know if this is an NGINX or a react problem, but I would appreciate any help. I have been trying to solve this issue for quite some time, and even made a previous post that helped me identify the problem correctly, but not the root cause of it.
Here are what my config files look like:
NGINX: (nginx-proxy.conf)
upstream api {
    server backend:8000;
}

server {
    listen 8080;
    server_name 192.168.100.6;

    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS, PUT,";
    add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range";
    add_header Access-Control-Expose-Headers "Content-Length,Content-Range";

    location /api/ {
        resolver 127.0.0.1;
        proxy_set_header Host $host;
        proxy_pass http://api;
    }

    # ignore cache frontend
    location ~* (service-worker\.js)$ {
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }

    location / {
        root /var/www/react-frontend;
        try_files $uri $uri/ /index.html;
    }
}
Screenshot of the Firefox network tab
The image in the link above shows all of my website resources being loaded from 192.168.100.6, but the moment I call my API, the request address changes to 127.0.0.1:8000, despite having the react application call 192.168.100.6/api/token (which does work on postman).
So after a lot of troubleshooting, I have found the cause of my issue.
What actually happened is that all of my codebase was correct, and the proxy was indeed working as intended, but for some reason
docker-compose build
or even:
docker-compose build --no-cache
was not picking up my code changes (the build was still sending requests to the IP I was using in development).
The fix I arrived at was to remove the stale volume (note that docker volume prune takes no volume-name argument; to remove a specific volume, use docker volume rm):
docker volume rm my-nginx-volume
and then rebuilding through docker-compose.
I want to use nginx to run my node.js application. I created a build of the application, and inside my nginx.conf I set the root to point to the location of the build folder. This worked, and my application ran successfully on nginx.
Now I'm wondering whether I could serve dynamic content directly through nginx. Can I do something similar to running the app with npm start, but through nginx, instead of serving the static build files?
You need a reverse proxy.
In your application, configure your server to listen on an internal port, for example 3000.
Then configure nginx to proxy connections to your app. Here's a simple nginx configuration to do just that:
root /path/to/app/build;

# Handle static content
location ^~ /static {
    try_files $uri $uri/ =404;
}

# Handle dynamic content
location / {
    proxy_pass http://127.0.0.1:3000;
}
Or, if you prefer, you can invert the URL scheme to default to static files:
root /path/to/app/build;

# Handle dynamic content
location ^~ /api {
    proxy_pass http://127.0.0.1:3000;
}

# Handle static content
location / {
    try_files $uri $uri/ =404;
}
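For context, both fragments above belong inside a server block. A complete sketch of the second layout might look like this (the port, server_name, and build path are assumptions):

```nginx
# Hypothetical full server block: static files by default,
# dynamic requests under /api proxied to the node app.
server {
    listen 80;
    server_name example.com;
    root /path/to/app/build;

    # Dynamic content: proxy to the node app on its internal port
    location ^~ /api {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Static content: serve files straight from the build directory
    location / {
        try_files $uri $uri/ =404;
    }
}
```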
Why do something like this?
There are several reasons to use an nginx front-end instead of setting your server to serve directly on port 80.
Nginx can serve static content much faster than express.static or other Node static servers.
Nginx can act as a load balancer when you want to scale your server.
Nginx has been battle-tested on the internet, so most security issues have been fixed or are well known. In comparison, Express or http.Server are just libraries, and you are the person responsible for your application's security.
Nginx is a bit faster at serving HTTPS than node, so you can develop a plain-old HTTP server in node and let nginx handle encryption.
I am trying to set up multiple React apps on the same server. The problem is that after I build the project, index.html from build/ is found, but the auxiliary files from build/static are not. Initially, with just one app, I had a location static/ with an alias. However, with multiple projects and multiple static/ directories, I cannot do that. Basically, I want each app to have its own static folder. How do I solve this?
In the browser, the error looks like this:
GET http://my.servername/static/css/2.266e55a5.chunk.css net::ERR_ABORTED 404 (Not Found)
My current set up is like this:
server {
    listen 80;
    server_name my.servername;
    root /data/adpop/;

    location /possible-malware-domains-viewer/ {
        alias /data/adpop/possible-malware-domains-viewer/build/;
        try_files $uri /possible-malware-domains-viewer/index.html;
        add_header Access-Control-Allow-Origin *;
        autoindex on;

        # Simple requests
        if ($request_method ~* "(GET|POST)") {
            add_header "Access-Control-Allow-Origin" *;
        }

        # Preflighted requests
        if ($request_method = OPTIONS) {
            add_header "Access-Control-Allow-Origin" *;
            add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD";
            add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
            return 200;
        }
    }

    location /punycode-domains-viewer/ {
        alias /data/adpop/punycode-domains-viewer/build/;
        try_files $uri /punycode-domains-viewer/index.html;
        [...same settings as above...]
    }
}
I tried combining answers from here, here or here; sorry if it looks messy or I have major mistakes. If what I am trying to achieve isn't feasible, please suggest something else. Thanks!
It's not very efficient or scalable to share the same URI namespace prefix across a number of projects and directories. It would be preferable to give each project a unique URI prefix, so /project1/index.html locates its resources using a URI like /project1/foo.css. Alternatively, use a common build directory for resource files and collect them together in the build scripts.
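As a sketch of that per-prefix layout (the project names and paths here are hypothetical, and each React app would need to be built with a matching homepage / PUBLIC_URL so its asset URLs carry the prefix):

```nginx
# Each project gets its own URI prefix, so its assets
# (e.g. /project1/static/css/foo.css) resolve unambiguously.
location /project1/ {
    alias /data/adpop/project1/build/;
    try_files $uri /project1/index.html;
}

location /project2/ {
    alias /data/adpop/project2/build/;
    try_files $uri /project2/index.html;
}
```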
However, if you must keep the resource files in separate directories and use the same URI prefix to reference them, the Nginx try_files directive can search directories sequentially until a matching filename is found.
For example:
root /data/adpop;

location /static/ {
    try_files /possible-malware-domains-viewer$uri
              /punycode-domains-viewer$uri
              =404;
}
The URI /static/css/foo.css will be searched for first at /data/adpop/possible-malware-domains-viewer/static/css/foo.css and then at /data/adpop/punycode-domains-viewer/static/css/foo.css. And if neither file is found, a 404 status is returned.
See this document for details.
I may be twisting things about horribly, but... I was given a ReactJS application that has to be served out to multiple sub-domains, so
a.foo.bar
b.foo.bar
c.foo.bar
...
Each of these should point to a different instance of the application, but I don't want to run npm start for each one - that would be a crazy amount of server resources.
So I went to host these on S3. I have a bucket foo.bar and then directories under that for a b c... and set that bucket up to serve static web sites. So far so good - if I go to https://s3.amazonaws.com/foo.bar/a/ I will get the index page. However most things tend to break from there as there are non-relative links to things like /css/ or /somepath - those break because they aren't smart enough to realize they're being served from /foo.bar/a/. Plus we want a domain slapped on this anyway.
So now I need to map a.foo.bar -> https://s3.amazonaws.com/foo.bar/a/. We aren't hosting our domain with AWS, so I'm not sure if it's possible to front this with CloudFront or similar. Open to a solution along those lines, but I couldn't find it.
Instead, I stood up a simple nginx proxy. I also added forced HTTPS and some other things while I had the proxy in place, something of the form:
server {
    listen 443;
    server_name foo.bar;

    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    # Redirect (*).foo.bar to (s3bucket)/(*)
    location / {
        index index.html index.htm;
        set $legit "0";
        set $index "";

        # First off, we lose the index document functionality of S3 when we
        # proxy requests. So we need to add that back on to our rewrites if
        # needed. This is a little dangerous, probably should find a better
        # way if one exists.
        if ($uri ~* "\.foo\.bar$") {
            set $index "/index.html";
        }
        if ($uri ~* "\/$") {
            set $index "index.html";
        }

        # If we're making a request to foo.bar (not a sub-host),
        # make the request directly to "production"
        if ($host ~* "^foo\.bar") {
            set $legit "1";
            rewrite /(.*) /foo.bar/production/$1$index break;
        }

        # Otherwise, take the sub-host from the request and use that for the
        # redirect path
        if ($host ~* "^(.*?)\.foo\.bar") {
            set $legit "1";
            set $subhost $1;
            rewrite /(.*) /foo.bar/$subhost/$1$index break;
        }

        # Anything else, give them foo.bar
        if ($legit = "0") {
            return 302 https://foo.bar;
        }

        # Perform the actual proxy forward
        proxy_pass https://s3.amazonaws.com/;
        proxy_set_header Host s3.amazonaws.com;
        proxy_set_header Referer https://s3.amazonaws.com;
        proxy_set_header User-Agent $http_user_agent;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Accept-Language $http_accept_language;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        sub_filter google.com example.com;
        sub_filter_once off;
    }
}
This works - I go to a.foo.bar, and I get the index page I expect, and clicking around works. However, part of the application also does an OAuth style login, and expects the browser to be redirected back to the page at /reentry?token=foo... The problem is that path only exists as a route in the React app, and that app isn't loaded by a static web server like S3, so you just get a 404 (or 403 because I don't have an error page defined or forwarded yet).
So.... All that for the question...
Can I serve a ReactJS application from a dumb/static server like S3 and have it understand callbacks to its routes? Keep in mind that the index/error directives in S3 seem to be discarded when fronted with a proxy the way I have above.
OK, there was a lot in my original question, but the core of it really came down to: as a non-UI person, how do I make an OAuth workflow work with a React app? The callback URL in this case is a route, which doesn't exist if you unload the index.html page. If you're going directly against S3, this is solved by directing all errors to index.html, which reloads the routes and the callback works.
When fronted by nginx however, we lose this error->index.html routing. Fortunately, it's a pretty simple thing to add back:
location / {
    proxy_intercept_errors on;
    error_page 400 403 404 500 =200 /index.html;
    # ... the proxy_pass settings from the config above stay here ...
}
You probably don't need all of those status codes; for S3, the big one is the 403. When you request a page that doesn't exist, S3 treats it as an attempt to browse the bucket and returns a 403 Forbidden rather than a 404 Not Found. So in this case a response from S3 that results in a 403 gets redirected to /index.html, which reloads the routes defined there, and the callback to /callback?token=... will work.
You can use Route53 to buy domain names and then point them toward your S3 bucket and you can do this with as many domains as you like.
You don't, strictly speaking, need to touch CloudFront, but it's recommended: it is a CDN solution, which is better for the user experience.
When deploying applications to S3, all you need to keep in mind is that the code you deploy to it is going to run 100% on your user's browser. So no server stuff.
I am making a next.js app and I've spent hours searching for a very simple way to password-protect it. (It will be used by a small group of friends.)
I have tried using Nginx's HTTP auth on my reverse proxy, and that works, but it could be annoying to have to sign in all the time, as the login doesn't persist long. (Nginx's HTTP auth seems to 'log out' or forget the authorization very quickly.)
I also don't want to dive into something as complicated as NextAuth. I don't want user signups, custom views etc, etc.
I just want people to be able to enter one password to view the site. And I would like it to persist on their browser so they wouldn't have to log in all the time with Nginx's http auth.
Is there a way to give users a cookie once they pass the http auth, and then allow them in once they have the cookie?
Can anyone suggest a fairly simple solution? Thanks in advance.
You can do that with the Nginx map directive, which lets you set a variable based upon another variable.
Inside your http block, somewhere outside any server blocks, you set up your map directive:
map $cookie_trustedclient $mysite_authentication {
    default "Your credentials please";
    secret-cookie-value off;
}
What's happening here is that Nginx sets the value of the custom variable $mysite_authentication based upon the value of the cookie named trustedclient.
By default, $mysite_authentication will be set to Your credentials please, unless you have a cookie named trustedclient with the value secret-cookie-value, in which case $mysite_authentication will be set to off.
Now, within the location block in which you have enabled basic auth, change your auth_basic directive to use the new variable, like this:
location /secretfiles {
    auth_basic $mysite_authentication;
    auth_basic_user_file ....
    add_header Set-Cookie "trustedclient=secret-cookie-value;max-age=3153600000;path=/";
}
You can set the cookie here or within your website code. The result is that the auth_basic directive is set to off for visitors with the right cookie, and to the password-prompt message for those without it.
Not super secure, but easy and good enough for most things.
Edit from your config:
# Map block can go here
map $cookie_trustedclient $mysite_authentication {
    default "Your credentials please";
    secret-cookie-value off;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    # ssl on; # Delete this line, obsolete directive
    ssl_certificate /etc/nginx/cloudflare-ssl/certificate.pem;
    ssl_certificate_key /etc/nginx/cloudflare-ssl/key.key;
    ssl_client_certificate /etc/nginx/cloudflare-ssl/cloudflare.crt;
    ssl_verify_client on;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name ********.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;

        # auth_basic "Restricted Content"; # Now this becomes:
        auth_basic $mysite_authentication;
        auth_basic_user_file /etc/nginx/.htpasswd;
        add_header Set-Cookie "trustedclient=secret-cookie-value;max-age=3153600000;path=/";
    }
}