Use Nginx to serve static React files on a remote server (localhost) - reactjs

Hi all! I am trying to follow this tutorial: https://www.youtube.com/watch?v=tGYNYPKTyno in order to use Nginx to serve my static React files.
In the video, the dev navigates to the IP address of the EC2 instance and the React application is served. I attempted to navigate to the IP address I believe belongs to this particular server (i.e., the server name in the bash prompt looks like user#123-45-6-789), but I am met with a connection timeout error.
I then attempted to tunnel using PuTTY to the server's port 80, forwarding to a specific port on my machine (i.e., localhost:6000), but I similarly got a connection timeout error. I know my tunnels work (I can run my API and my React application using yarn build), so the tunnels are not at fault. Also, when I run netstat, I see that the local address 0.0.0.0:80 is currently in use by nginx.
My config file is as follows:
server {
    listen 80;
    listen [::]:80;

    root /home/user/application/deployment/build;

    location / {
        try_files $uri /index.html;
    }
}
Any and all advice would be appreciated!
-- Edit --
My nginx.conf file includes the line
include /etc/nginx/conf.d/*.conf;
as indicated in the video.

Friends, my current changes are as follows: I moved my files to a www folder under /var and pointed the config's root at that location. See the config file below:
server {
    listen 3500;
    server_name localhost;

    location / {
        root /var/www/application/deployment/build;
        index index.html;
    }
}
I then used an SSH tunnel to connect to port 3500 and can now access the app on my local computer. The reason I was not able to access the server by its IP address is that it exists only in the private cloud. I am now moving on to reverse proxying (a rough sketch of what I have in mind is below) and will later connect this to a domain. Cheers!
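For the reverse-proxy step, a minimal sketch of the block I have in mind, assuming the API will listen locally on a placeholder port such as 4000 (that port is an assumption, not something from my setup above):
server {
    listen 3500;
    server_name localhost;

    root /var/www/application/deployment/build;
    index index.html;

    location / {
        # Serve the static React build; fall back to index.html for client-side routes
        try_files $uri /index.html;
    }

    location /api/ {
        # Assumed placeholder backend on port 4000; the /api/ prefix is passed through as-is
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
    }
}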

Related

Nginx is redirecting subdomain to main domain

I have two domains:
zerp.io (SSL installed)
app.zerp.io (only HTTP)
On zerp.io (the main domain) a WordPress website is hosted and working fine. I am trying to deploy a React app on app.zerp.io using nginx. I deleted the default file and created a new file, app.zerp.io, at /etc/nginx/sites-available/; I also created the same file at /etc/nginx/sites-enabled/ and created a symlink between them. I checked the DNS entries: app.zerp.io and www.app.zerp.io point to the public IP of the correct server where the React app resides.
Here's my /etc/nginx/sites-available/app.zerp.io file
server {
    listen 80;
    index index.html index.htm index.nginx-debian.html;
    server_name www.app.zerp.io app.zerp.io;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}
The problem is, whenever I try to reach http://app.zerp.io through a web browser, it redirects me to https://zerp.io. Here's what I have done so far:
I checked DNS using an online tool; it is correctly pointing to the server.
I did not use any 301 redirects in the configuration file, as you can see above.
When I try curl app.zerp.io from the production server (in Germany), sometimes it gives a 200 with the correct response and sometimes a 301 (Moved Permanently). Crazy, isn't it?
When I try curl app.zerp.io from my local computer, it always gives me a 301, although I do not have any 301 in my nginx config file.
I thought it might be a cache issue in Chrome; to my surprise, no. I cleared the cache and did a hard reload, and even tried incognito mode, with no success: it always redirects me to https://zerp.io.
When I try curl app.zerp.io from my local computer via a VPS, it correctly opens the website app.zerp.io.
I do not have any SSL certificate, so there are no redirects from HTTP to HTTPS on http://app.zerp.io.
It's been two days and it's making me crazy. I am assuming it has something to do with DNS resolution. Can someone please help me out?

ERR_CONNECTION_REFUSED AWS EC2 when performing GET to backend server

This is my first AWS deployment and I have what is going to be a simple question (but not for me). I would appreciate any help I can get.
I have a React frontend and a backend Node server running on an AWS EC2 instance. I have no problem serving the frontend to my browser from port 80 (NGINX server) on the public IP address of the EC2 instance, but the GET request to the Node server on port 3001 returns an error to the console: "net::ERR_CONNECTION_REFUSED".
Troubleshooting so far:
Confirmed the NGINX and Node servers are running on their proper ports.
I performed a curl request from the EC2 terminal (curl -X GET http://127.0.0.1:3001/api/users) to the backend server and the information is served successfully from the server/DB, but when the request comes from the app running in the client, the connection is refused.
I made many changes to the NGINX .conf file (one at a time), including using the public IP versus localhost (or even 127.0.0.1:3001) for the backend Express server, but with no success.
Made sure to restart the NGINX server to pick up .conf changes.
Since I am able to get a response when I use a curl request from the VM terminal but not when the request comes from the client, I wonder if it has something to do with my security group rules. I have type "HTTPS" on port 443 and "HTTP" on port 80 with "0.0.0.0/0" and "::/0" on both, and SSH on port 22 with "0.0.0.0/0". Is there anything that I am missing?
Here is the NGINX .conf info for the servers:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root /usr/share/nginx/html/aws-thought/client/build;
        index index.html;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:3001;
    }
}

Deploying Client and Server to the same VM

I have an application that has a React frontend and a Python Flask backend. The frontend communicates with the server to perform specific operations, and the server API should only be used by the client.
I have deployed the whole application (client and server) to an Ubuntu virtual machine. The machine only has specific ports open to the public (5000, 443, 22). I have set up the Nginx configuration, and the frontend can be accessed from my browser via http://<ip:address>:5000. The server is running locally on a different port, 4000, which is not accessible to the public, as designed.
The problem is, when I access the client app and navigate to the pages that communicate with the server via http://127.0.0.1:4000 from the React app, I get an error saying the connection was refused:
GET http://127.0.0.1:4000/ net::ERR_CONNECTION_REFUSED in my browser.
When I SSH into the VM and run the same request through curl (curl http://127.0.0.1:4000/), I get a response and everything works fine.
Is there a way I can deploy the server on the same VM such that when I access the client React app from my browser, the React app can access the server without problems?
So after tinkering with this, I found a solution using Nginx. The summary is: run the server locally on a different port, say 4000 (not exposed to the public), then serve your React app on the exposed port, in this case 5000.
Then use a proxy in your Nginx config that forwards any call starting with /api to the locally running server. See the config below:
server {
    # Exposed port and server name / IP address
    listen 5000;
    server_name 1.2.3.4 mydomain.com;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;

    # Link to the react build
    root /var/www/html/build;
    index index.html index.htm;

    # Error and access logs for debugging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.html =404;
    }

    # Redirect any traffic beginning with /api to the flask server
    location /api {
        include proxy_params;
        proxy_pass http://localhost:4000;
    }
}
Now this means all your server endpoints need to begin with /api/..., and the user can also access an endpoint from the browser via http://<ip:address>:5000/api/endpoint.
You can mitigate this by having your client send a token to the server; the server will not run any commands without that token/authorization (a rough nginx-level variant of this guard is sketched below).
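As a rough additional guard at the nginx layer (this is just a sketch of an alternative, not part of the setup above), nginx itself can refuse /api requests that arrive without any Authorization header, so anonymous calls never reach Flask; the actual token validation still has to happen in the Flask app:
location /api {
    # Sketch: reject requests that carry no Authorization header at all
    if ($http_authorization = "") {
        return 401;
    }
    include proxy_params;
    proxy_pass http://localhost:4000;
}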
I found the solution here and modified it to fit my specific needs here:
Part two of solution
The other parts of the solution can be found at Part one of solution and Part three of solution.

React - bundle.js not found (using Nginx setup)

I have a React app set up to run on port 8080. When I run the deployed project using http://example.com:8080, it runs well.
However, I'm using nginx to proxy this URL, adding the "location /admins" block to /etc/nginx/nginx.conf:
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root html;
        index index.html index.htm;
    }

    location /admins {
        proxy_pass "http://hadas-ex.co.il:8080";
    }
}
Then, when browsing to hadas-ex.co.il/admins, it serves the app, but I get the following error in my console:
GET http://hadas-ex.co.il/static/js/bundle.js net::ERR_ABORTED 404 (Not Found)
I'm just confused as to why I'm getting this error, as it works fine when accessing hadas-ex.co.il:8080 directly.
Port 80 and port 8080 are not the same. Ports are used to make connections unique and range from 0 to 65535, of which ports up to 1024 are called well-known ports, reserved by convention to identify specific service types on a host. Port 80 is reserved for HTTP. Port 8080 is typically used for a personally hosted web server, for example when an ISP restricts that kind of usage for non-commercial customers; it is just the default second choice for a web server.
Resources:
Are-port-80-and-8080-the-same
Apache-httpd-vs-tomcat-7-port-80-vs-port-8080
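A likely contributing factor, assuming the React build references its assets with absolute paths such as /static/js/bundle.js: when the page is loaded through /admins on port 80, the browser requests those assets from the root of the site, where they match the location / block (root html) rather than the /admins proxy, hence the 404. A sketch of an extra location that forwards the static assets to the same upstream used for /admins (the address is taken from the question):
location /static/ {
    # Forward the app's asset requests (e.g. /static/js/bundle.js) to the same backend
    proxy_pass http://hadas-ex.co.il:8080;
}
Alternatively, building the React app with a matching public path (for example the homepage field in package.json for Create React App) makes the assets resolve under /admins in the first place.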

Two separate file servers available at one address

I have two separate file servers with different files inside.
For example:
Server 1:
file1.mp4
file2.mp4
Server 2:
file3.mp4
file4.mp4
What would be the easiest way to access all the files under the same domain?
For example:
https://example.com/file1.mp4
https://example.com/file2.mp4
https://example.com/file3.mp4
https://example.com/file4.mp4
If you are using nginx, you could use this config on each of your servers; for example, for server1:
upstream failover {
    server server2:8080;
}

server {
    listen 80;
    server_name example.com;

    root /tmp/test;

    location ~* \.(mp4)$ {
        try_files $uri @failover;
    }

    location @failover {
        proxy_pass http://failover;
    }
}
In this example, for files ending in .mp4, if the file is not found on the server, the request falls back to the @failover named location, which then proxies the request to the other server via the upstream.
For server2 you do the same, just changing the address in the upstream, for example:
upstream failover {
    server server1:8080;
}
In any case, if the .mp4 file is not found on either of the servers, you will still get a 404 HTTP status code.
