React - bundle.js not found (using Nginx setup)

I have a React app set up to run on port 8080. When I run the deployed project at http://hadas-ex.co.il:8080, it runs well.
However, I'm using nginx to proxy this URL, adding a "location /admins" block to /etc/nginx/nginx.conf:
server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root html;
        index index.html index.htm;
    }

    location /admins {
        proxy_pass "http://hadas-ex.co.il:8080";
    }
}
Then, when browsing to hadas-ex.co.il/admins, it serves the app, but I get the following error in my console:
GET http://hadas-ex.co.il/static/js/bundle.js net::ERR_ABORTED 404 (Not Found)
I'm just confused as to why I'm getting this error, as it works fine when accessing hadas-ex.co.il:8080 directly.

Port 80 and port 8080 are not the same. Ports make connections unique and range from 0 to 65535; ports 0 through 1023 are the well-known ports, reserved by convention for specific service types on a host, and port 80 is the one reserved for HTTP. Port 8080 is typically used for a personally hosted web server, for example when an ISP restricts port 80 for non-commercial customers; it is simply the conventional second choice for a web server.
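That said, the 404 itself comes from the path rather than the port: the built index.html references /static/js/bundle.js as a root-relative URL, so the browser requests it from port 80, where it falls into the location / block and is looked up under the default html root, which doesn't contain the bundle. A minimal sketch of one workaround, assuming the app on port 8080 also serves its own /static/* files:

location /admins {
    proxy_pass "http://hadas-ex.co.il:8080";
}

# forward the root-relative asset requests to the same app
location /static/ {
    proxy_pass "http://hadas-ex.co.il:8080";
}

Alternatively, if this is a Create React App build, setting "homepage": "/admins" in package.json (or PUBLIC_URL=/admins at build time) makes the asset URLs start with /admins/, so the existing location already matches them.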
Resources:
Are port 80 and 8080 the same?
Apache httpd vs Tomcat 7: port 80 vs port 8080

Related

Nginx docker react ec2 https connection refused

Tried a number of different versions of nginx.conf, but nothing appears to mitigate the classic connection-refused page when I enter my https:// domain.
It should be noted that the domain ends with .dev; I'm wondering if this matters.
The domain was purchased on Google Domains, and there are A records mapping it to a public EC2 instance that runs the nginx server (inside the Docker container).
nginx.conf:
server {
    listen 80;
    server_name random.dev www.random.dev;
    return 301 https://random.dev$request_uri;
}

server {
    listen 443 default_server ssl;
    server_name random.dev www.random.dev;

    ssl on;
    ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
    ssl_certificate_key /etc/ssl/private/private.key;

    index index.html index.htm;
    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
Dockerfile:
FROM node:17.7.1 as builder
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install
COPY . /usr/src/app/
RUN npm run build
FROM nginx:latest
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
COPY ./ssl /etc/ssl
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
docker-compose:
version: "3"
services:
ui:
image: <image>
ports:
- 80:80
- 443:443
The EC2 instance runs Amazon Linux 2.
The security group mapping appears to be correct, with SSH (22), HTTP (80), and HTTPS (443) accepting inbound traffic from everywhere.
The network ACL is the default (open to all, inbound and outbound).
After running docker-compose, I've also checked with netstat (on the EC2 host, outside Docker) whether 80 and 443 were listening, and they were.
HTTP on the raw IP (not the domain) worked when I commented out the 443 nginx conf block, but the domain does not work because .dev and .app domains automatically redirect to HTTPS in Chrome (and Firefox, I believe).
Given this, I'm wondering if anyone else has faced similar problems. Is this an Amazon Linux 2 problem, a .dev problem, or could it possibly be an SSL problem?
A series of changes were made, but the fix appears to have had something to do with assigning an Elastic IP address to the EC2 instance.
For some reason, on the local Wi-Fi I was using (Verizon), ping <ip-address> always timed out. On the other networks we tested (Comcast, as well as AT&T mobile data), the IP address was reachable. This likely has something to do with the IP address being blacklisted by Verizon; I am not sure if all such IPs are blacklisted or we just got unlucky. The Elastic IP that was assigned seemed to fix the issue.
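Independent of the networking issue, one thing worth checking in the nginx.conf above: the standalone ssl on; directive is long deprecated and has been removed from recent nginx releases, so with nginx:latest the container may refuse to start at all. The ssl parameter on the listen line is sufficient; a minimal sketch of the 443 block without it:

server {
    # "ssl" on the listen line replaces the deprecated "ssl on;" directive
    listen 443 ssl default_server;
    server_name random.dev www.random.dev;

    ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
    ssl_certificate_key /etc/ssl/private/private.key;

    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }
}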

ERR_CONNECTION_REFUSED AWS EC2 when performing GET to backend server

This is my first AWS deployment and I have what is going to be a simple question (but not for me). I would appreciate any help I can get.
I have a React frontend and a backend Node server running on an AWS EC2 instance. I have no problem serving the frontend to my browser from port 80 (NGINX) on the instance's public IP address, but the GET request to the Node server on port 3001 returns "net::ERR_CONNECTION_REFUSED" in the console.
Troubleshooting so far:
confirmed the NGINX and Node servers are running on their proper ports
performed a curl request from the EC2 terminal (curl -X GET http://127.0.0.1:3001/api/users) to the backend server, and the information is served successfully from the server/DB; but when the request comes from the app running in the client, the connection is refused
made many changes to the NGINX .conf file (one at a time), including using the public IP vs. localhost (or even 127.0.0.1:3001) for the backend Express server, but with no success
made sure to restart the NGINX server to pick up .conf changes
Since I am able to get a response when I use a curl request from the VM terminal but not when the request comes from the client, I wonder if it has something to do with my security group rules. I have Type "HTTPS" on port 443 and "HTTP" on port 80 with "0.0.0.0/0" and "::/0" on both, and SSH on port 22 with "0.0.0.0/0". Is there anything I am missing?
Here is the NGINX .conf for the servers:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root /usr/share/nginx/html/aws-thought/client/build;
        index index.html;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:3001;
    }
}
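One detail worth noting about this setup: since only ports 80 and 443 are open, the browser can never reach port 3001 directly; the client has to request the API through nginx with a relative path (e.g. /api/users), which the location /api/ block then forwards. A sketch of that forwarding behaviour, assuming the Express routes are mounted under /api:

location /api/ {
    # proxy_pass with no URI part forwards the request path unchanged:
    # GET http://<public-ip>/api/users  ->  GET http://127.0.0.1:3001/api/users
    proxy_pass http://127.0.0.1:3001;
}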

Use nginx to serve static React files on a remote server's localhost

Hello all! I am trying to follow this tutorial: https://www.youtube.com/watch?v=tGYNYPKTyno in order to use Nginx to serve my static React files.
In the video, the dev navigates to the IP address of the EC2 instance and the React application is served. I attempted to navigate to the IP address I believe belongs to this particular server (i.e. the server name in the bash prompt is like user@123-45-6-789), but I am met with a connection timeout error.
I then attempted to tunnel to the server's port 80 using PuTTY, forwarding to a local port (i.e. localhost:6000), but I similarly got a connection timeout error. I know my tunnels work (I can run my API and my React application using yarn build), so the tunnels are not at fault. Also, when I run netstat, I see that the local address 0.0.0.0:80 is currently in use by nginx.
My config file is as follows:
server {
    listen 80;
    listen [::]:80;

    root /home/user/application/deployment/build;

    location / {
        try_files $uri /index.html;
    }
}
Any and all advice would be appreciated!
-- Edit --
My nginx.conf file includes the
include /etc/nginx/conf.d/*.conf;
as indicated in the video.
Friends, my current changes are: I moved my files to a www folder under /var and pointed the config's root directive at it. See the config file below:
server {
    listen 3500;
    server_name localhost;

    location / {
        root /var/www/application/deployment/build;
        index index.html;
    }
}
I then used an SSH tunnel to forward my localhost port 3500 to the server, and I can now access the app on my local computer. The reason I was not able to reach the server by its IP address is that it exists only in the private cloud. I am now moving on to the reverse proxying and will later connect this to a domain. Cheers!

Deploying Client and Server to the same VM

I have an application that has a React frontend and a Python Flask backend. The frontend communicates with the server to perform specific operations, and the server API should only be used by the client.
I have deployed the whole application (client and server) to an Ubuntu virtual machine. The machine only has specific ports open to the public (5000, 443, 22). I have set up the Nginx configuration, and the frontend can be accessed from my browser via http://<ip:address>:5000. The server is running locally on a different port, 4000, which, as designed, is not accessible to the public.
The problem is that when I access the client app and navigate to the pages that communicate with the server via http://127.0.0.1:4000, the React app gets a connection-refused error:
GET http://127.0.0.1:4000/ net::ERR_CONNECTION_REFUSED in my browser.
When I SSH into the VM and make the same request through curl (curl http://127.0.0.1:4000/), I get a response and everything works fine.
Is there a way I can deploy the server on the same VM such that when I access the client React app from my browser, the React app can reach the server without problems?
So after tinkering with this, I found a solution using Nginx. In summary: you run the server locally on a different port, say 4000 (not exposed to the public), and expose your React app on the public port, in this case 5000.
Then use a proxy in your Nginx config that redirects any call starting with /api to the locally running server. See the config below:
server {
    # Exposed port and server name / IP address
    listen 5000;
    server_name 1.2.3.4 mydomain.com;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;

    # Link to the react build
    root /var/www/html/build;
    index index.html index.htm;

    # Error and access logs for debugging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.html =404;
    }

    # Redirect any traffic beginning with /api to the flask server
    location /api {
        include proxy_params;
        proxy_pass http://localhost:4000;
    }
}
Now, this means all your server endpoints need to begin with /api/..., and it also means a user can reach an endpoint directly from the browser via http://<ip:address>:5000/api/endpoint.
You can mitigate this by having your client send a token to the server; the server will not run any commands without that token/authorization.
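As a rough illustration of that mitigation at the proxy layer (the check could equally live in the Flask app itself), nginx can reject /api requests that lack an agreed header; the header name and token below are hypothetical placeholders:

location /api {
    # hypothetical shared secret: clients must send it
    # in an X-App-Token header with every /api request
    if ($http_x_app_token != "replace-with-a-secret") {
        return 401;
    }
    include proxy_params;
    proxy_pass http://localhost:4000;
}

A static header is only a light deterrent, though; per-user tokens issued and checked by the server are the more robust version of the same idea.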
I found the solution in Part two of a series and modified it to fit my specific needs; Part one and Part three cover the rest of the series.

How to host multiple dockerized websites (nginx) on one IP address?

Here is my scenario:
1. I have an AWS EC2 machine (CoreOS)
2. I have hosted multiple APIs on it, all in Docker containers
3. I have HAProxy listening on a certain port (say 999) and load-balancing the multiple APIs. Works perfectly...
4. I have another nginx container which hosts my Angular site. This obviously listens on port 80. Assume it's mapped to http://pagladasu.com
What I want is to create http://one.pagladasu.com, http://two.pagladasu.com, and so forth, with each pointing to a different Angular application in the Docker containers.
The issue is that both need to listen on port 80, so how do I accomplish that?
Create a container that runs Nginx and listens on port 80. Configure Nginx with virtual hosts for each of your subdomains (one.pagladasu.com, two.pagladasu.com), using proxy_pass to send the connections to the upstream Angular containers. Something like this:
server {
    listen 80;
    server_name one.pagladasu.com;

    location / {
        proxy_pass http://one-pagladasu-com;
    }
}

server {
    listen 80;
    server_name two.pagladasu.com;

    location / {
        proxy_pass http://two-pagladasu-com;
    }
}
Link this Nginx container to the two angular containers. Docker will modify /etc/hosts for you so that you may refer to them by name. In this case I've assumed they are named like one-pagladasu-com but of course it can be anything.
Now the flow is Requests => Nginx virtual hosts container => Angular container => HAProxy => APIs.
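If the container names don't already match the hostnames used in proxy_pass, an upstream block can bridge the two; the container name below is a hypothetical example:

# map the name used in proxy_pass onto a linked container's hostname
upstream one-pagladasu-com {
    server angular-one:80;
}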
