CORS problem with AngularJS + Spring Boot + nginx

So I have been pulling my hair out for a couple of days now. I have a backend server using Spring Boot that exposes a REST API.
This server is called from a frontend interface built with AngularJS, also served by Nginx.
Everything is running locally. Whenever I try to make a request from the frontend to the backend, I get a CORS error in the browser.
I know what you're thinking: easy, just add add_header 'Access-Control-Allow-Origin' 'http://[MY_IP]'; to the nginx.conf file on the backend and everything will work, like here or here.
But it doesn't. I tried everything: moving the directive to different locations, putting '*' instead of the address, enabling and disabling SSL... The only thing that works is manually disabling cross-origin restrictions in the browser. And the best part is that when I do disable those restrictions, I can see the Access-Control-Allow-Origin header set to http://[MY_IP] in my browser's debug console!
Any idea what might be going wrong?
Here is my nginx.conf file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Here is my /etc/nginx/sites-enabled/default.conf file:
upstream backend_api {
    server 10.34.18.2:8080;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    root /var/www/html/;
    index index.html;
    client_max_body_size 5M;

    location /todos {
        access_log /var/log/nginx/todos.backend.access.log;
        error_log /var/log/nginx/todos.backend.error.log;
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_api;
    }

    location / {
        access_log /var/log/nginx/todos.frontend.access.log;
        error_log /var/log/nginx/todos.frontend.error.log;
        try_files $uri $uri/ =404;
    }
}
I create a symbolic link:
ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/

I am not sure this alone will make CORS work, but it may help get you closer to the solution:
Access-Control-Allow-Origin: http://[MY_IP] is not the only header you need to take care of.
Since the Content-Type is application/json, you are using non-simple headers, so you also have to give specific permission to the Content-Type header, and the same goes for Accept-Encoding and DNT:
Access-Control-Allow-Headers: Content-Type, Accept-Encoding, DNT
I am not sure it is required for this specific GET, but in any case also list the allowed methods:
Access-Control-Allow-Methods: GET
And if you are sending cookies, an Authorization header, or client certificates for authentication:
Access-Control-Allow-Credentials: true
I don't think it is your current case, but please note that returning Access-Control-Allow-Credentials: true while blindly echoing the received Origin in the Access-Control-Allow-Origin response header enables any site to access your server impersonating the owner of the credentials.
And just in case you are tempted: Access-Control-Allow-Origin: * combined with Access-Control-Allow-Credentials: true will not work, as per the specification.
You may also have to take care of the OPTIONS request sent during the CORS preflight, whose response should be in line with what the actual call will return.
And remember that not returning one of these headers is the way to deny it.
Ref: CORS - MDN
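Putting those pieces into nginx terms, here is a minimal sketch of how the /todos location from the question could carry those headers. This is only an illustration: the origin value http://[MY_IP] and the header list are the ones discussed above, the early return for OPTIONS is one common way of answering the preflight before it reaches Spring Boot, and the always flag needs nginx 1.7.5 or newer.
location /todos {
    # One single allowed origin; '*' cannot be combined with credentials.
    add_header 'Access-Control-Allow-Origin' 'http://[MY_IP]' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
    add_header 'Access-Control-Allow-Headers' 'Content-Type, Accept-Encoding, DNT' always;

    # Answer the preflight here so it never has to reach the backend.
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' 'http://[MY_IP]' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Content-Type, Accept-Encoding, DNT' always;
        return 204;
    }

    proxy_set_header HOST $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://backend_api;
}
The always parameter matters because without it nginx only attaches add_header headers to 2xx/3xx responses, so any error coming back from the backend would again look like a CORS failure to the browser.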

Related

NGINX/Gunicorn - React + Flask (API) + Debian + SSL | API Communication Error

I have a Flask + React app running on Debian using Nginx and Gunicorn (which is maintained via supervisor). When I set up my Nginx CONF file to just serve up the site over port 80, everything seemed to work fine except for a CORS error. This only happened in Chrome, but the entire site worked fine in Safari (a known Chrome issue). After tracking down the issue and determining that the cause was the lack of an SSL certificate, I set up my Nginx CONF file to support SSL. Now two things happen that frustrate me to no end:
When I go to the site, the Developer Console shows that the site is getting a Connection Refused on https://localhost:5000/.
When I use CURL to test the API, it works.
Both the React and Flask applications are hosted on the same server, and I even have port 5000 open to be safe (as well as SSL and standard 80).
My conf file is below, but some info that might be useful:
All URLs are served up at the root "domain.com/" of the website.
The Flask app has the API in a nested folder and the exposed API is of the format "domain.com/api/v1/{calls}".
I have Swagger UI installed, but cannot access it from the browser.
The development environment works fine, but that is because I'm using the built-in Python/Flask server and running the frontend React app with npm start.
My code is below, and I've left other things I have tried in place as commented lines to show the various efforts I've exerted. In the location /api section, I previously had just the include proxy_params and the proxy_pass; all the other lines were added after they didn't work in the location / section. server_name is set to _. I also tried the subdomain of my site and mixed and matched, but no dice.
server {
    root /var/www/project-app/frontend/build;
    index index.html;
    server_name _;

    location / {
        # proxy_pass http://localhost:5000;
        # proxy_redirect off;
        # proxy_set_header Host $host;
        # proxy_set_header X-Real-IP $remote_addr;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        try_files $uri $uri/ =404;
        add_header Cache-Control "no-cache";
    }

    location /static {
        alias /var/www/project-app/frontend/build/static;
        expires 1y;
        add_header Cache-Control "public";
    }

    location /api {
        include proxy_params;
        proxy_pass http://localhost:5000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    listen 443 ssl; # managed by Certbot
    listen [::]:443 ssl;
    ssl_certificate /etc/letsencrypt/live/project/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/project/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    access_log /var/log/project_access.log;
    error_log /var/log/project_access.log;

    proxy_headers_hash_max_size 512;
    proxy_headers_hash_bucket_size 128;
}
server {
    if ($host = app.project.tld) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    # server_name app.project.tld;
    server_name _;
    return 404; # managed by Certbot
}
Debian 11
Latest Nginx Version from APT
All packages are updated and installed via both PIP and NPM
Python 3.9.2
There are placeholders in the code, such as app.project.tld; in my actual config I use the real subdomain of the app. Just trying to avoid someone telling me I made a copy/paste snafu :)
Thanks!

How to check if my nginx server has X-Frame-Options disabled

I'm doing a port 80 redirect with Namecheap: I'm redirecting mydomain.com to my server 400.300.200.100:myport, where myport is not 80 but another port.
Now Namecheap states: "If the server (you are redirecting the domain to) has X-Frame feature disabled, you may select a Masked Redirect for the client's browser to display your domain name instead of http://1.2.3.4:50."
I would like my domain to be displayed instead of myserver:port. So where should I check whether I have X-Frame-Options disabled? In my React frontend? In my nginx configuration?
Should I put
X-Frame-Options: DENY
or
X-Frame-Options: SAMEORIGIN
?
Can someone tell me if I need to configure this on nginx?
This is my nginx.conf:
server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html/storybook-static;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    location /wagtail {
        proxy_pass http://172.20.128.2:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Script-Name /wagtail;
    }

    location /static/ {
        alias /app/static/;
    }

    location /media/ {
        alias /app/media/;
    }
}
For security reasons, the X-Frame-Options 'SAMEORIGIN' option is usually used.
Go to where Nginx is installed, then to its conf folder, and check for the following directive in nginx.conf under the server section:
add_header X-Frame-Options
Another way, if you are on Linux, is to go to your Nginx installation directory and run
grep -rnw 'X-Frame'
It will show you all files that reference that header.
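If you do decide to send the header, here is a minimal sketch of adding it to the server block from the question (SAMEORIGIN as suggested above; swap in DENY if nothing should ever be allowed to frame these pages):
server {
    listen 80;
    server_name localhost;

    # Sent on every response from this server block; note that any location which
    # declares its own add_header directives would need this line repeated there.
    add_header X-Frame-Options "SAMEORIGIN" always;

    location / {
        root /usr/share/nginx/html/storybook-static;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    # ... the /wagtail, /static/ and /media/ locations stay as above ...
}
You can then check what is actually being sent in the browser's network tab or with curl -I; if no X-Frame-Options header shows up in the response, the "feature" Namecheap refers to is effectively disabled. Keep in mind that a masked redirect works by loading your site in a frame, so sending DENY or SAMEORIGIN would block exactly that.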

Nginx + React app with router + Chrome = subfolders do not work

I have a react app running under nginx. App runs just fine and there are no problems.
Now, I have kibana and portainer running on the same server, and I configured nginx to serve them from subfolders. The server has a security certificate and I can't really create new sub-domains, so I had to go with subfolders.
server {
    listen 80;
    listen 443 ssl;
    server_name api.nec.private.systems;

    ssl_certificate /etc/ssl/api.nec.private.systems.crt;
    ssl_certificate_key /etc/ssl/api.nec.private.systems.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    root /usr/share/nginx/html;

    location / {
        # Set path
        try_files $uri /index.html;
    }

    # Do not cache sw.js, required for offline-first updates.
    location /sw.js {
        add_header Cache-Control "no-cache";
        proxy_cache_bypass $http_pragma;
        proxy_cache_revalidate on;
        expires off;
        access_log off;
    }

    location /control/ {
        proxy_pass http://portainer:9000/;
        add_header Cache-Control "no-cache";
        proxy_cache_bypass $http_pragma;
        proxy_cache_revalidate on;
        expires off;
        access_log off;
    }

    location /kibana/ {
        proxy_pass http://kibana:5601/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        # proxy_cache_bypass $http_upgrade;
        add_header Cache-Control "no-cache";
        proxy_cache_bypass $http_pragma;
        proxy_cache_revalidate on;
        expires off;
        access_log off;
    }
}
As you can see, the first two locations describe the react app and the last two are all about kibana and portainer.
Now, here is the problem:
I would open google chrome and go to api.nec.private.systems/control - it would pull up portainer without any problems.
I would open api.nec.private.systems/kibana and would get kibana as expected.
I would open api.nec.private.systems/ and the react app with the react router would open.
Now, having done step #3, I would open api.nec.private.systems/kibana and it won't open kibana anymore; instead it feeds /kibana into my React router. It WON'T open kibana at all, no matter how many times I try.
Step number X: clear Google Chrome's cache and try again, and kibana and portainer work just fine... until I open the react app again.
Any ideas?
Ok, so I figured out my own problem here. It's all because of the service worker that comes with create-react-app. Basically, the service worker tries to serve every request from the app out of its local cache.
I killed the service worker and it started working fine.

No 'Access-Control-Allow-Origin' header for Grafana

I'm trying to set up Grafana on top of nginx. Here's my current setup: Grafana is supposed to talk to both Graphite and Elasticsearch on the same server.
Here's my nginx configuration file. I'm not sure what's wrong with this configuration:
#graphite server block
server {
    listen 8080;
    access_log /var/log/nginx/graphite.access.log;
    error_log /var/log/nginx/graphite.error.log;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
    }
}

#grafana server block
server {
    listen 9400;
    access_log /var/log/nginx/grafana.access.log;
    error_log /var/log/nginx/grafana.error.log;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        add_header Access-Control-Allow-Origin 'http://54.123.456.789:9400';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE';
        add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, origin, accept';
        add_header 'Access-Control-Allow-Credentials' 'true';
        root /usr/share/grafana;
    }
}
Now, whenever I try to run Grafana, it gives me the following error:
XMLHttpRequest cannot load http://54.123.456.789:8080/render. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://54.123.456.789:9400' is therefore not allowed access.
Can someone please help me out in this? Thanks in advance.
Try putting the four Access-Control-Allow-* lines in the configuration of the graphite server.
To my mind, Grafana is asking Graphite, and it is Graphite that has to allow Grafana.
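Applied to the configuration above, that would mean something like the following sketch (the four add_header lines are simply moved from the grafana block into the graphite block; the origin value is the Grafana address from the question):
#graphite server block
server {
    listen 8080;
    access_log /var/log/nginx/graphite.access.log;
    error_log /var/log/nginx/graphite.error.log;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;

        # Grafana (the caller, on :9400) is the origin that Graphite (the callee) must allow.
        add_header 'Access-Control-Allow-Origin' 'http://54.123.456.789:9400';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE';
        add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, origin, accept';
        add_header 'Access-Control-Allow-Credentials' 'true';
    }
}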
OK, I wasn't specifically setting up Grafana, but I was trying to get CORS to work together with nginx's auth_basic directive, because that directive causes any headers you set to be dropped whenever authentication is required (when the server returns a 401, basically).
So after a couple of hours of research I found this Gist: https://gist.github.com/oroce/8742704 which specifically targets Grafana and possibly gives a complete answer to this question.
BUT for my particular purposes, which again were to combine auth_basic with CORS headers via add_header, my takeaway from that Gist is the following:
Your server location should follow a structure like the one below:
location / {
    proxy_pass <PROXY_PASS_VALUE>;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # Any additional headers and proxy configuration for the upstream...

    # Remove the CORS Origin header if set by the upstream
    proxy_hide_header 'Access-Control-Allow-Origin';

    # Add our own set of CORS headers.
    # The origin specifically, when used with authentication, CANNOT be set to *
    # as per the spec; it must return one and only one value, so to mimic the
    # behavior of "*" we mirror the request's Origin.
    add_header Access-Control-Allow-Origin $http_origin;
    add_header Access-Control-Allow-Methods 'GET,POST,PUT,DELETE,OPTIONS';
    add_header Access-Control-Allow-Headers 'Authorization';
    add_header Access-Control-Allow-Credentials 'true';

    if ( $request_method = 'OPTIONS' ) {
        # If the request method is OPTIONS we immediately return 200 OK.
        # If we didn't do this, the headers would be lost on the 401 produced by
        # the auth_basic directive when the browser's preflight request arrives
        # (preflights never carry credentials).
        return 200;
    }

    # This should be set AFTER the headers and the OPTIONS method are taken care of
    auth_basic 'Restricted';
    auth_basic_user_file <HTPASSD_FILE_PATH>;
}
Then when using this from a browser environment, you could issue the following:
fetch(
    '<URL>',
    {
        method: 'POST',
        body: <YOUR_BODY_OBJECT>,
        // This must be set for BASIC Auth to work with CORS
        credentials: 'include'
    }
)
    .then( response => response.json() )
    .then( data => {
        console.log( data );
    } );

Bad Gateway when setting up NGINX as a reverse proxy server for GAE

I want to use NGINX as a reverse proxy server so I can open my GAE (Google App Engine) web site from mainland China, because most Google IPs are blocked there by the GFW.
DNS: I have these DNS records:
A mydomain.com ==> x.x.x.x
CNAME www ==> ghs.google.com
CNAME * ==> ghs.google.com
I'm planning to use geo DNS to point to my reverse proxy in case the request is coming from mainland China; currently I'm testing locally by having a hosts record point mydomain.com to localhost.
I have nginx 1.1.19 on Ubuntu 12.04.
My site configuration file is:
server {
    #listen 80;
    listen 443 ssl;
    server_name mydomain.com;

    ssl on;
    ssl_certificate /home/user/Desktop/ssl/mydomain.com.pem;
    ssl_certificate_key /home/user/Desktop/ssl/mydomain.com.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;

    large_client_header_buffers 4 16k;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # keepalive_timeout 70;

    location / {
        proxy_pass https://mydomain.com/;
        proxy_set_header Host www.mydomain.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-HOST $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Proxy-Hostname $scheme://$http_host;
        proxy_redirect off;
        proxy_intercept_errors on;
        #error_page 500 = /error_page.html;
    }

    #location = /error_page.html {
    #    root /local_path_to_static_files_root;
    #}
}
When I tried to open https://mydomain.com:
At first I got a "number of connections is too low" error, which I solved by adding/editing the following in the nginx.conf file:
events {
    worker_connections 8024;
    # multi_accept on;
}
Then I got a "too many open files" error, which I solved by adding the following to the nginx.conf file:
worker_rlimit_nofile 5000;
Now I'm getting a 504 Gateway Time-out error (connection timed out).
Any idea what I'm doing or did wrong?
UPDATE:
It turned out to be an infinite redirect loop: I have mydomain.com ==> 127.0.0.1 in the hosts file, and the reverse proxy passes the requests it receives on to mydomain.com, so it kept requesting itself. I removed the hosts entry for the URL the proxy passes requests to, in order to avoid the loop.
SOLVED
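For reference, one way to make that kind of loop impossible regardless of what is in /etc/hosts is to point proxy_pass at the Google frontend named in the CNAME records above rather than at mydomain.com itself. This is only a sketch, under the assumption that GAE routes the request by its Host header; it is not tested against the setup described here:
location / {
    # Proxy to the Google frontend (from the CNAME records) instead of back to
    # mydomain.com, so a local hosts entry for mydomain.com cannot send requests
    # back into this proxy.
    proxy_pass https://ghs.google.com/;
    proxy_set_header Host www.mydomain.com;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
}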
