I can't fetch from external API in production - reactjs

I want to fetch some data, but when I deploy the app to a server (Ubuntu 18.04) with Nginx, the fetch fails. So far I have:
Put a certificate in place to enable HTTPS on my domain.
Created a .env file with a variable containing the complete URL of the API (because I'm using a proxy in development).
Added some headers to the request.
Tried changing the Nginx config.
But nothing works; my application only runs correctly on localhost.
axios.get(process.env.REACT_APP_API_URL) ...
The browser console (Safari) shows:
Origin https://mysubdomain.com is not allowed by Access-Control-Allow-Origin.
XMLHttpRequest cannot load https://mysubdomain.com due to access control checks.
Failed to load resource: Origin https://mysubdomain.com is not allowed by Access-Control-Allow-Origin.

Your server needs to return the header below:
Access-Control-Allow-Origin: *
which means any origin is allowed to call the API.
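If the API requests go through your own nginx, the header can be added there. A minimal sketch, assuming a hypothetical /api/ location proxied to a hypothetical https://api.example.com (replace * with your exact origin if you send credentials):
location /api/ {
    # Send CORS headers on every response, including errors
    add_header Access-Control-Allow-Origin "*" always;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;

    # Answer CORS preflight requests directly
    if ($request_method = OPTIONS) {
        return 204;
    }

    proxy_pass https://api.example.com;
}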
Workaround (local development only; never browse the web with security disabled):
Go to the Chrome folder and launch it like this:
chrome.exe --user-data-dir="<Some directory name to store temporary chrome data>" --disable-web-security

I'm no nginx expert, but this works for me!
I edited my site file in /etc/nginx/sites-available/mysite like this:
location /anyAppLocation/ {
    proxy_method GET;                  # force GET for the proxied request
    proxy_pass_request_headers on;     # forward the client's headers upstream
    proxy_pass https://api.site.com;   # the external API
    proxy_redirect default;
}
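For reference, a slightly fuller variant of the same idea; the two extra directives are my addition for a typical HTTPS upstream, not part of the original config:
location /anyAppLocation/ {
    proxy_pass https://api.site.com;
    proxy_set_header Host api.site.com;   # many APIs route requests on the Host header
    proxy_ssl_server_name on;             # send SNI when connecting to the HTTPS upstream
}
Because the browser now only ever talks to your own domain, the request is same-origin and CORS never enters the picture.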

Related

Nginx is redirecting subdomain to main domain

I have two domains,
zerp.io (ssl installed)
app.zerp.io (only http)
On zerp.io (the main domain) a WordPress website is hosted and working fine. I am trying to deploy a React app on app.zerp.io using nginx. I deleted the default file, created a new file app.zerp.io at /etc/nginx/sites-available/, and created a symlink to it at /etc/nginx/sites-enabled/. I checked the DNS entries: app.zerp.io and www.app.zerp.io point to the public IP of the correct server where the React app resides.
Here's my /etc/nginx/sites-available/app.zerp.io file
server {
    listen 80;
    index index.html index.htm index.nginx-debian.html;
    server_name www.app.zerp.io app.zerp.io;
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}
The problem is, whenever I try to reach http://app.zerp.io through a web browser, it redirects me to https://zerp.io. Here's what I did so far:
I checked DNS using an online tool; it's correctly pointing to the server.
I did not use any 301 redirects in the configuration file, as you can see above.
When I run curl app.zerp.io from the production server (in Germany), sometimes it returns 200 with the correct response and sometimes 301 (Moved Permanently). Crazy, isn't it?
When I run curl app.zerp.io from my local computer, it always gives me 301, although I do not have any 301 in my nginx config file.
I thought it might be a cache issue in Chrome; to my surprise, no. I cleared the cache and did a hard reload, and I even tried incognito mode, with no success: it always redirects me to https://zerp.io.
When I run curl app.zerp.io from my local computer through a VPS, it correctly opens the website app.zerp.io.
I do not have any SSL certificate, so there are no redirects from HTTP to HTTPS on http://app.zerp.io.
It's been two days and it's driving me crazy. I assume it has something to do with DNS resolution. Can someone please help me out?

Varnish with Apache2 using mod_ssl and mod_proxy causing issues

I have installed Varnish with Apache2, set it up using Apache's HTTP proxy module, and used headers so that data fetched over HTTP is served over HTTPS through the reverse proxy.
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:80/
ProxyPassReverse / http://127.0.0.1:80/
RequestHeader set X-Forwarded-Port "443"
RequestHeader set X-Forwarded-Proto "https"
But with this setup I am facing a browser error: content loaded over HTTP from an HTTPS page is blocked.
Mixed Content: The page at '' was loaded over HTTPS, but
requested an insecure stylesheet ''. This request has been
blocked; the content must be served over HTTPS.
Please help to understand where I am wrong and how can I make this work?
Thank you in Advance.
There's not a whole lot of context about the setup and the configuration, but based on the information you provided I'm going to assume you're using Apache to terminate the TLS connection first and then forward that traffic to Varnish.
I'm also assuming Apache is configured as the backend in Varnish, listening on a port like 8080, whereas Varnish is on 80 and the HTTPS Apache vhost is on 443.
Vary header
The one thing that might be missing in your setup is a cache variation based on the X-Forwarded-Proto header.
I would advise you to set that cache variation using the following configuration:
Header append Vary "X-Forwarded-Proto"
This uses mod_headers and can either be set in your .htaccess file or your vhost configuration.
It should allow Varnish to be aware of the variations based on the Vary: X-Forwarded-Proto header and store a version for HTTP and one for HTTPS.
This will prevent HTTP content being stored when HTTPS content is requested and vice versa.
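Put together, that could look roughly like this in the HTTPS vhost (a sketch based on the directives quoted above; ports and placement are assumptions about your setup):
<VirtualHost *:443>
    # TLS terminates here; traffic is forwarded to Varnish on port 80
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:80/
    ProxyPassReverse / http://127.0.0.1:80/
    RequestHeader set X-Forwarded-Port "443"
    RequestHeader set X-Forwarded-Proto "https"

    # Make Varnish store separate cache objects for HTTP and HTTPS
    Header append Vary "X-Forwarded-Proto"
</VirtualHost>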
A good way to simulate the issue
If you want to make sure the issue behaves as I'm expecting it to, please perform a test using the following steps:
Clear your cache through sudo varnishadm ban obj.status "!=" 0
Run varnishlog -g request -q "ReqUrl eq '/'" to filter the logs for the homepage
Call the HTTP version of the homepage and ensure it's stored in the cache
Capture the log output for this transaction and store it somewhere
Call that same page over HTTPS and check whether or not the mixed content errors occur
Capture the log output for this transaction and store it somewhere
Then fix the issue through the Vary: X-Forwarded-Proto header and run the test case again.
In case of problems, just add the 2 log transactions to your question (1 for the miss, 1 for the hit) and I'll examine them for you.

Deploying Client and Server to the same VM

I have an application that has a React frontend and a Python Flask backend. The frontend communicates with the server to perform specific operations, and the server API should only be used by the client.
I have deployed the whole application (client and server) to an Ubuntu virtual machine. The machine only has specific ports open to the public (5000, 443, 22). I have set up the Nginx configuration, and the frontend can be accessed from my browser via http://<ip:address>:5000. The server runs locally on a different port, 4000, which is not accessible to the public, as designed.
The problem is that when I access the client app and navigate to pages that communicate with the server via http://127.0.0.1:4000 from the React app, I get a connection refused error in my browser:
GET http://127.0.0.1:4000/ net::ERR_CONNECTION_REFUSED
When I ssh into the VM and run the same request through curl (curl http://127.0.0.1:4000/), I get a response and everything works fine.
Is there a way I can deploy the server in the same vm such that when I access the client React App from my browser, the React App can access the server without problems?
So after tinkering with this, I found a solution using Nginx. In summary: you run the server locally on a port that is not exposed to the public, say 4000, and expose your React app on the public port, in this case 5000.
Then add a proxy to your Nginx config that forwards any call starting with /api to the server running locally. See the config below:
server {
    # Exposed port and server name / IP address
    listen 5000;
    server_name 1.2.3.4 mydomain.com;

    # SSL (on newer nginx versions, prefer "listen 5000 ssl;" over "ssl on;")
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;

    # Link to the React build
    root /var/www/html/build;
    index index.html index.htm;

    # Error and access logs for debugging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.html =404;
    }

    # Proxy any traffic beginning with /api to the Flask server
    location /api {
        include proxy_params;
        proxy_pass http://localhost:4000;
    }
}
This means all your server endpoints need to begin with /api/..., and it also means a user can reach an endpoint directly from the browser via http://<ip:address>:5000/api/endpoint.
You can mitigate this by having your client send a token to the server, and the server will not run any commands without that token/authorization.
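As a first line of defence, you can also reject calls without credentials at the nginx layer before they ever reach Flask. A sketch, not a substitute for real token validation in the Flask app itself:
location /api {
    # Refuse requests that carry no Authorization header at all;
    # the Flask server must still validate the token's contents.
    if ($http_authorization = "") {
        return 401;
    }
    include proxy_params;
    proxy_pass http://localhost:4000;
}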
I found the solution here and modified it to fit my specific needs.

Hosting Docusaurus v2 using Nginx

I'm attempting to use nginx as the reverse proxy to host Docusaurus v2 on Google App Engine.
Google App Engine has HTTPS turned on, and nginx listens on port 8080. Hence, by default, all requests are over HTTPS and the connections are managed by Google App Engine.
However, I'm having an issue when users perform the following actions:
Reach the landing page
Go to the documentation (any page)
Refresh the page
The user is directed to port 8080 and not the HTTPS site of Docusaurus.
Without refreshing the page, the user is able to navigate the site successfully. It's when the user hits the refresh button that they get the redirect. Looking at the header information, I see the response pointing them to port 8080, but I'm not sure why that is happening.
Wondering if anyone has successfully been able to set up Docusaurus v2 with nginx?
My config for nginx is as follows:
events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logs will appear on the Google Developer's Console when logged to this
    # directory.
    access_log /var/log/app_engine/app.log;
    error_log /var/log/app_engine/app.log;

    gzip on;
    gzip_disable "msie6";

    server {
        # Google App Engine expects the runtime to serve HTTP traffic from
        # port 8080.
        listen 8080;
        root /usr/share/nginx/www;
        index index.html index.htm;

        location / {
            if ($http_x_forwarded_proto = "http") {
                return 301 https://$server_name$request_uri;
            }
        }
    }
}
This is probably due to the Docusaurus website linking to directories without a trailing slash /, causing a redirect which, by default, is set up to include the port.
Looking into the Docusaurus build directory, you will see that your pages are defined as folders containing index.html files. Without the / the server needs to redirect you to {page}/index.html.
Try to call the URL with / and no port, which should be successful:
https://{host}/docs/{page}/
To fix the problem, you could change the redirect rules to not include the port, using the port_in_redirect directive:
server {
    listen 8080;
    port_in_redirect off;
    # More configuration
    ...
}
See the documentation for more details.
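Alternatively, nginx can emit relative redirects so that neither host nor port ever appears in the Location header. A sketch; absolute_redirect requires nginx 1.11.8 or newer:
server {
    listen 8080;
    absolute_redirect off;   # Location header becomes e.g. "/docs/page/" instead of a full URL
    # More configuration
    ...
}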

Let's Encrypt / Certbot in Google App Engine -> can't check challenge -> Forbidden 403

Using Google App Engine and Let's Encrypt or Certbot, I'm trying to issue a certificate for my web app, but when the challenge is tested, the file hosted in /.well-known/acme-challenge/ can't be accessed, apparently because of an nginx configuration that prohibits access to dot paths; in other words, the check gets a 403 Forbidden page instead of the key.
I've already tried to change nginx.conf with this:
location ^~ /.well-known/ {
    allow all;
}
I restarted the nginx service, but I still can't get it to work.
Did you try using an alias?
location ^~ /.well-known {
    allow all;
    auth_basic off;
    alias /path/to/.well-known/;
}
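If the alias path is awkward in your layout, a webroot-style variant does the same job. A sketch; /var/www/letsencrypt is an assumed directory that you would pass to certbot via --webroot:
location ^~ /.well-known/acme-challenge/ {
    allow all;
    auth_basic off;
    default_type "text/plain";
    # With root (unlike alias) the full URI is appended to the path, so
    # challenge files live under /var/www/letsencrypt/.well-known/acme-challenge/
    root /var/www/letsencrypt;
}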
