I am using Nginx for my ReactJS project with the configuration below, which allows the browser to cache only images, not other files (HTML, JS, and CSS). It works fine for me, but some clients are facing cache issues: the latest HTML loads immediately, yet they still get an old JS bundle file. I am using webpack to generate the production bundle, and the new bundle reference shows up in the HTML immediately. Please check the sample HTML and Nginx configuration files below.
HTML:
<!DOCTYPE html>
<html lang="en">
<head>
...
<link href="css/main.css?76m85vt7qo00000" rel="stylesheet">
...
</head>
<body>
<div id="app"></div>
<script type="text/javascript" src="js/bundle.js?438400c3459a72d63b87"></script>
</body>
</html>
NGINX CONF:
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";

    ##
    # Virtual Host Configs
    ##
    server {
        listen 80;
        listen [::]:80;

        root /var/www/html;
        index index.html index.htm;
        server_name *.mydomain.com;

        location / {
            add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
            add_header Pragma 'no-cache';
            expires off;
            add_header Last-Modified $date_gmt;
            if_modified_since off;
            etag off;
            # First attempt to serve request as file, then
            # as directory, then fall back to displaying a 404.
            try_files $uri /index.html;
        }

        location ~* \.(jpg|jpeg|png|gif|ico)$ {
            expires 30d;
            log_not_found off;
            add_header Pragma public;
            add_header Cache-Control "public";
        }

        location ~* \.(js|css)$ {
            expires -1;
            add_header Cache-Control 'no-store';
            add_header Last-Modified $date_gmt;
        }
    }
}
Note: this problem does not happen all the time or for all users, and it resolves itself after some minutes or hours.
I have cleared the browser cache and tried again with no luck; it still failed to get the latest version. How can I instruct Nginx to always serve the latest JS file? I have researched a lot on Google but could not find the reason or a solution. Please, can anyone show me the way to resolve this? Thanks in advance.
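For reference, the pattern that usually sidesteps this whole class of problem is to put the hash in the filename rather than in a query string (webpack's `[contenthash]` output naming), cache those files aggressively, and keep only index.html uncacheable. A sketch of what the locations could look like under that scheme (this assumes hashed filenames, which the setup above does not yet use):

```nginx
# index.html must always be revalidated so it can point at new bundles.
location = /index.html {
    add_header Cache-Control "no-cache";
}

# A content-hashed JS/CSS file never changes under the same name, so it
# is safe to cache it for a long time.
location ~* \.(?:js|css)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```

With this split, shipping a new build changes the bundle filename in index.html, so clients pick it up on their next page load without any cache purging.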
I was led down the "Nginx Cache Webpack Bundle" rabbit hole, only to realize that I had configured nginx to serve the build from an old build directory. So even though the updated bundle and assets were compiling correctly, nginx was still pointing at the old versions of them in the old directory.
It's not the direct solution to the OP's cache invalidation question (it looks like unregistering service workers worked there), but it's an easy sanity check for anyone else who stumbles upon this.
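That sanity check can be scripted. Below is a minimal sketch as a shell function; the web-root path and the exact markup shape are assumptions based on the question above, so adjust them to your deployment:

```shell
# Check that the bundle referenced by index.html actually exists in the
# directory nginx serves from. Pass the web root as the first argument.
check_bundle() {
    webroot="$1"
    # Pull the first src="..." out of index.html and strip the query string.
    bundle=$(grep -o 'src="[^"]*"' "$webroot/index.html" \
        | head -n 1 | sed -e 's/^src="//' -e 's/"$//' -e 's/?.*//')
    if [ -f "$webroot/$bundle" ]; then
        echo "OK: $bundle is present"
    else
        echo "MISSING: $bundle is referenced by index.html but absent"
    fi
}
```

Running `check_bundle /var/www/html` after a deploy immediately tells you whether nginx's root and the build output have drifted apart.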
I used Vite to create a React app with TypeScript, following a tutorial for a good starter (video). I have another application in Angular that works fine with this approach. I'm deploying to Kubernetes using nginx, but with Vite I'm facing this error and I don't know the cause:
plint2dev.linguaserve.net/:16 Refused to apply style from 'https://plint2dev.linguaserve.net/assets/index.2518dafb.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
index.f2ba2231.js:1
Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of "text/html". Strict MIME type checking is enforced for module scripts per HTML spec.
My nginx config and the repository code can be reviewed here. I tried all the alternatives provided here, but none of them works for me.
My current nginx.conf:
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    multi_accept on;
    worker_connections 65535;
}

http {
    client_max_body_size 100M;
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;

    # MIME
    include mime.types;
    default_type application/octet-stream;

    # logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;

    # load configs
    include /etc/nginx/conf.d/*.conf;

    # linguaserve.net
    server {
        listen 80;
        listen [::]:80;
        server_name .linguaserve.net;
        set $base /usr/share/nginx/html;

        # security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header X-Request-ID $request_id;

        location / {
            try_files $uri $uri/ /index.html;
        }

        # . files
        location ~ /\.(?!well-known) {
            deny all;
        }

        # logging
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log warn;

        # favicon.ico
        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }

        # gzip
        gzip on;
        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;
    }
}
From here, I'm totally lost. Any help would be appreciated.
My Dockerfile:
FROM nginx
## Remove default Nginx website
RUN rm -rf /usr/share/nginx/html/*
## Copy our default nginx config
COPY nginx/default.conf /etc/nginx/nginx.conf
COPY /dist/* /usr/share/nginx/html/
RUN cd /usr/share/nginx
RUN ln -s /usr/share/nginx/html /usr/share/nginx/www
RUN ln -s /usr/share/nginx/html /etc/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Finally, I solved it. It was a silly mistake: in my Dockerfile I copied all the files from dist, but not the directory structure.
Before:
dist
│   index.12321.js
│   index.12321.css
│   index.html
│   vite.svg
Now:
dist
└───assets
│   │   index.12321.js
│   │   index.12321.css
│   index.html
│   vite.svg
The solution was to change this command
COPY /dist/* /usr/share/nginx/html/
to this
COPY /dist/ /usr/share/nginx/html/
Removing the * solved the pain.
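A related guard that makes this class of mistake easier to spot: when a missing asset falls through `try_files ... /index.html`, the browser receives HTML with a 200 status and reports the MIME error above instead of a plain 404. A hedged sketch that restricts the SPA fallback (the `/assets/` prefix matches Vite's default output directory and may differ in your build):

```nginx
# Hashed build assets: serve the file if present, otherwise return 404.
# Never fall back to index.html for these paths, so a broken deploy
# surfaces as a clear 404 rather than a confusing MIME-type error.
location /assets/ {
    try_files $uri =404;
}

# Everything else is an SPA route handled by index.html.
location / {
    try_files $uri $uri/ /index.html;
}
```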
I cloned Directus 8 from GitHub and ran it on my local server. It worked fine without any problems.
Then I uploaded the code to AWS Elastic Beanstalk (PHP, Apache), but it showed a 500 Internal Server Error.
error log: /var/www/html/directus/public/.htaccess: <IfModule not allowed here
I added a .ebextensions/setup.config file to my root folder, like this:
files:
  "/etc/httpd/conf.d/enable_mod_rewrite.conf":
    mode: "644"
    owner: root
    group: root
    content: |
      AllowOverride All
But Beanstalk said Unsuccessful command execution on instance id(s) 'i-0f6...'. Aborting the operation. and went into a degraded state.
How can I fix this?
This answer is for Directus 8 (PHP).
I tried almost all ways of configuring Apache using .ebextensions and .platform; nothing worked.
Then I tried NGINX with custom .platform configs, and it worked. Here are the steps I took; they may be helpful to someone else with the same problem.
The Directus docs have some configs for NGINX; go through them.
Create an nginx.conf file under the .platform/nginx folder.
We are going to replace the existing nginx.conf inside Beanstalk: copy the existing nginx.conf from the EC2 instance via ssh, add the custom configs mentioned in the docs, and paste the result into the newly created .platform/nginx/nginx.conf.
Below is my custom .platform/nginx/nginx.conf:
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 32136;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location /admin {
            try_files $uri $uri/ /admin/index.html?$args;
        }

        location /thumbnail {
            try_files $uri $uri/ /thumbnail/index.php?$args;
        }

        # Deny direct access to php files in extensions
        location ~ /extensions/.+\.php$ {
            deny all;
        }

        # All uploads files (originals) cached for a year
        location ~* /uploads/([^/]+)/originals/(.*) {
            add_header Cache-Control "max-age=31536000";
        }

        # Serve php, html and cgi files as text file
        location ~* /uploads/.*\.(php|phps|php5|htm|shtml|xhtml|cgi.+)?$ {
            add_header Content-Type text/plain;
        }

        # Deny access to any file starting with .ht,
        # including .htaccess and .htpasswd
        location ~ /\.ht {
            deny all;
        }

        # pass PHP scripts to FastCGI server
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php-fpm/www.sock;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi.conf;
        }

        access_log /var/log/nginx/access.log main;

        # Include the Elastic Beanstalk generated locations
        include conf.d/elasticbeanstalk/*.conf;
    }
}
Done. When we deploy, Beanstalk will automatically replace the existing nginx.conf with our custom nginx.conf. (Note: we could add only our changes instead of replacing the whole file, but that didn't work at the time I tried.)
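For completeness: on Amazon Linux 2 platforms, Elastic Beanstalk also merges drop-in files from `.platform/nginx/conf.d/` into the generated http block, so a small override can sometimes avoid replacing the whole file. A sketch (whether this suffices depends on which directives you need to change; the filename is arbitrary):

```nginx
# .platform/nginx/conf.d/directus_overrides.conf
# Merged into the platform-generated http {} block on deploy.
client_max_body_size 100M;
```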
How can I deploy these two together? I don't like the Laravel React preset; I want to keep them separate, bundle the React app, and deploy both together with any web server (Apache, nginx...).
EDIT
This is my config for Laravel, but it isn't loading the routes:
server {
    listen 8000;
    server_name 127.0.0.1;
    root "..\..\Proyecto\Backend\JWT\public";

    add_header 'Access-Control-Allow-Origin' '*';
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
You can run them separately using nginx:
Run each on a separate port and use HTTP methods (POST/GET) to push and fetch data.
Use pm2 (http://pm2.keymetrics.io/) for running React. I recommend it because you can monitor the app's activity, and if you want to do maintenance you can stop the current app process and run an "under maintenance" app process instead.
You can read more about running Laravel on nginx here (https://www.digitalocean.com/community/tutorials/how-to-deploy-a-laravel-application-with-nginx-on-ubuntu-16-04).
As for running React without pm2: build the project (yarn build) and tell nginx that the file you want to serve is the index.html inside the build folder.
Assuming you are using an Ubuntu server and have uploaded your code to GitHub or GitLab:
server {
    listen 50;
    root /var/www/[Your repo name]/build;
    server_name [your.domain.com] [your other domain if you want to];
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}
You add this to your nginx configuration alongside the Laravel configuration, on a separate port.
Hope my answer helped a bit.
This was proving to be very tricky, and it took me at least 3 days to put everything together. Here is what you have to do.
Run npm run build in the React project.
Copy the contents of the build folder to the server:
scp react_project/build/* <server name or ip>:/var/www/html/react
Change the ownership of the project folders to the user www-data, or add your user ID to the www-data group.
Now, set up the Laravel project in a different directory (/var/www/html/laravel, for example).
Set up the database and environment variables.
Run
php artisan key:generate
php artisan config:clear
php artisan config:cache
Now, proceed with the nginx configuration. Create two configs, one for the React project and one for Laravel, as given below. Make sure that the listen ports are different for the two projects.
Create the configuration files for the React and Laravel projects under /etc/nginx/sites-available.
Create symlinks to the created configs under /etc/nginx/sites-enabled as given below:
sudo ln -s /etc/nginx/sites-available/react_conf /etc/nginx/sites-enabled/react_conf
sudo ln -s /etc/nginx/sites-available/laravel_conf /etc/nginx/sites-enabled/laravel_conf
And for the contents,
react_conf:
server {
    listen 80;
    server_name <server_ip or hostname>;
    charset utf-8;
    root /var/www/html/react;
    index index.html index.htm;

    # Always serve index.html for any request
    location / {
        root /var/www/html/react;
        try_files $uri /index.html;
    }

    error_log /var/log/nginx/react-app-error.log;
    access_log /var/log/nginx/react-app-access.log;
}
laravel_conf:
server {
    listen 90;
    server_name <server ip or hostname>;
    charset utf-8;
    root /var/www/html/laravel/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.php index.html index.htm;

    location /api {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }

    error_log /var/log/nginx/laravel-app-error.log;
    access_log /var/log/nginx/laravel-app-access.log;
}
Now, delete the default config present in /etc/nginx/sites-enabled.
Also, verify that /etc/nginx/nginx.conf contains the following include directive where the server configs are expected (inside the http block):
include /etc/nginx/sites-enabled/*;
Verify that the config is fine by running
sudo nginx -t
Restart the server
sudo service nginx restart
Now, you should be up and running.
You can approach this in two ways.
The first is when you create the React app in a different folder than the Laravel project. In that case, just deploy the Laravel app and the React app at two different URLs.
The second is when the React app lives inside the Laravel app. In that case, build the React project and put the dist folder inside the Laravel project's views folder. Then add this to routes/web.php:
// Used for handling the html file of the react project
View::addExtension('html', 'php');

Route::get('/{any}', function () {
    // path to the dist folder's index.html inside the views directory
    return view('build/index');
})->where('any', '.*');
Laravel will not serve the required JS and CSS files from inside the views folder, so you need to copy all the contents of the dist folder into the Laravel project's public folder. There is no need to copy index.html, but the other files must be placed in the public folder.
After that, visit the root URL of the Laravel project in the browser and the React app should be working.
So I have been pulling my hair out for a couple of days now. I have a backend server using Spring Boot with a REST API.
This server is called from a frontend interface using AngularJS, also served by Nginx.
Everything is running locally. Whenever I try to make a request from the frontend to the backend, I get the following error:
I know what you're thinking: easy, just add add_header 'Access-Control-Allow-Origin' 'http://[MY_IP]'; to your nginx.conf file on the backend and everything will work, like here or here.
But it doesn't. I tried everything: moving it to different locations, putting '*' instead of the address, enabling and disabling SSL... The only thing that works is manually disabling cross-origin restrictions in the browser. And the best part is that when I do disable those restrictions, I can see the Access-Control-Allow-Origin header set to http://[MY_IP] in my browser's debug console!
Any idea of what might be going wrong?
Here is my nginx.conf file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Here is my /etc/nginx/sites-enabled/default.conf file:
upstream backend_api {
    server 10.34.18.2:8080;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    root /var/www/html/;
    index index.html;
    client_max_body_size 5M;

    location /todos {
        access_log /var/log/nginx/todos.backend.access.log;
        error_log /var/log/nginx/todos.backend.error.log;
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_api;
    }

    location / {
        access_log /var/log/nginx/todos.frontend.access.log;
        error_log /var/log/nginx/todos.frontend.error.log;
        try_files $uri $uri/ =404;
    }
}
I create a symbolic link:
ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/
I am not sure if this will make CORS work, but it may help you get nearer to the solution:
Access-Control-Allow-Origin: http://[MY_IP] is not the only header you need to take care of.
As the Content-Type is application/json, which makes the request non-simple, you will also have to give specific permission to the Content-Type header, and the same for Accept-Encoding and DNT:
Access-Control-Allow-Headers: Content-Type, Accept-Encoding, DNT
I am not sure about this one for this specific GET, but in any case also the allowed methods:
Access-Control-Allow-Methods: GET
And if you are sending cookies, an authorization header, or client certificates for authentication:
Access-Control-Allow-Credentials: true
I don't think it is your current case, but please note that returning Access-Control-Allow-Credentials: true and blindly replicating the received Origin in the Access-Control-Allow-Origin response enables any site to access your server impersonating the owner of the credentials.
And just in case you are tempted: ACAO: * with ACAC: true will not work, as per the specification.
You may also have to take care of the OPTIONS method called during a CORS preflight, whose response should be in line with what the actual call will respond.
And remember that not returning one of these headers is the way to deny it.
Ref: CORS - MDN
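Pulled together as an nginx sketch (the origin, header list, and methods below are illustrative placeholders; replace them with the values your frontend actually uses):

```nginx
location /todos {
    # Answer the CORS preflight directly instead of proxying it.
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin  "http://192.0.2.10";
        add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
        add_header Access-Control-Allow-Headers "Content-Type, Accept-Encoding, DNT";
        add_header Access-Control-Max-Age       86400;
        return 204;
    }

    # Actual requests still need the origin header on their responses.
    add_header Access-Control-Allow-Origin "http://192.0.2.10" always;
    proxy_pass http://backend_api;
}
```

Using a fixed origin rather than echoing `$http_origin` avoids the impersonation risk described above when credentials are involved.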
The mouthful of a title says it all:
We've got an Angular frontend with a Django backend providing a REST API that is exposed independently at endpoints example.com/api/v1/*
The Angular app runs in HTML5 mode, and we want hard-links to example.com/foo/bar to bring users into the app at the foo.bar state as if it were a static page rather than an app state (where foo is anything but api).
We're running behind nginx, and our basic strategy in the conf was to define locations at ^~ /scripts, /images, etc. for serving static content directly, as well as a ^~ /api location that gets routed to Django. Below that, we have a location ~ ^/.+$ that matches any path not matched by any of the above and "sends it to Angular" - i.e. serves our index page to it and appends the path to the base URL, allowing our Angular router to handle it from there.
This is our conf in full:
upstream django {
    server 127.0.0.1:8000 fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443;
    server_name example.com;
    client_max_body_size 10M;

    ssl on;
    ssl_certificate /etc/ssl/thawte/example_com.crt;
    ssl_certificate_key /etc/ssl/thawte/example_com.key;
    ssl_verify_depth 2;

    gzip on;
    gzip_types text/plain text/html application/javascript application/json;
    gzip_proxied any;

    index index.html;

    location ^~ /index.html {
        gzip_static on;
        root /www/dist;
    }

    location ^~ /images/ {
        expires max;
        root /www/dist;
    }

    location ^~ /scripts/ {
        expires max;
        gzip_static on;
        root /www/dist;
    }

    location ^~ /favicon.ico {
        expires max;
        root /www/dist;
    }

    location ^~ /api {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://django;
    }

    # Send anything else to angular
    location ~ ^/.+$ {
        rewrite .* /index.html last;
    }
}
This has worked perfectly for us, but we now need to set it up to work with prerender.io. We've tried doing this several ways, making modifications to the official prerender nginx example, but none have worked: crawlers are getting the same code users get rather than cached pages.
How can we get this working?
(note: this is new territory for everyone involved here, so if the best way to handle this involves making different choices a few steps back, please suggest them)
So it turns out the config posted above was working the whole time.
I realized this when it finally occurred to me to try putting https://example.com/anything through the crawler debugger (instead of https://example.com, which is all I had been testing previously), and it worked - the crawler was served the cached page as expected.
This was simply because the greedy quantifier in:
location ~ ^/.+$ {
did not match the bare root URI / (there is nothing after the slash for .+ to consume). With an additional
location = / {
try_files $uri #prerender;
}
, my conf is working as expected.
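The regex behavior is easy to verify outside nginx. A small sketch using grep as a stand-in for nginx's location matching (the function name is just for illustration):

```shell
# nginx matches location regexes against the request URI. The pattern
# ^/.+$ requires at least one character after the slash, so the bare
# root URI "/" falls through to other location blocks.
matches() {
    printf '%s' "$1" | grep -Eq '^/.+$' && echo match || echo no-match
}
```

For example, `matches /` prints no-match while `matches /anything` prints match, which is exactly the split between the root URL and every other path.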
The facepalm: I had only been putting https://example.com through the crawler debugger - which was the one URL that was not working.
On the upside, I'm thinking I can turn this handprint on my forehead into a nice Halloween costume next weekend....
Still not sure I've gone about this the best way, and welcome alternative suggestions.