Two separate file servers available with one address

I have two separate file servers with different files inside.
For example:
Server 1:
file1.mp4
file2.mp4
Server 2:
file3.mp4
file4.mp4
What would be the easiest way to access the files with the same domain?
For example:
https://example.com/file1.mp4
https://example.com/file2.mp4
https://example.com/file3.mp4
https://example.com/file4.mp4

If you're using nginx, you could use this config on each of your servers; for example, for server1:
upstream failover {
    server server2:8080;
}

server {
    listen 80;
    server_name example.com;
    root /tmp/test;

    # For .mp4 files, serve the local copy if it exists; otherwise
    # fall back to the named location below.
    location ~* \.(mp4)$ {
        try_files $uri @failover;
    }

    # Named locations use @, not # (a # would start a comment).
    location @failover {
        proxy_pass http://failover;
    }
}
In this example, files ending in .mp4 that are not found on the local server fall through to the @failover named location, which then proxies the request to the other server via the upstream.
For server2 you do the same, just changing the address in the upstream, for example:
upstream failover {
    server server1:8080;
}
In any case, if the .mp4 file is not found on either server, you will still get a 404 HTTP status code.
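An alternative sketch, instead of mirroring the config on both servers: put a single nginx front end in front of both file servers and let it retry the other backend on a 404. The hostnames and ports below are taken from the example above; `proxy_next_upstream http_404` is what makes nginx try the next server when the first one doesn't have the file.

```nginx
# Hypothetical single front end for both file servers.
upstream fileservers {
    server server1:8080;
    server server2:8080;
}

server {
    listen 80;
    server_name example.com;

    location ~* \.mp4$ {
        proxy_pass http://fileservers;
        # Retry the other backend on errors, timeouts, or a 404,
        # so a file present on either server gets served.
        proxy_next_upstream error timeout http_404;
    }
}
```

The trade-off is an extra hop for every request, but only one machine needs the routing logic.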

Related

ERR_CONNECTION_REFUSED AWS EC2 when performing GET to backend server

This is my first AWS deployment and I have what is probably a simple question (but not for me). I would appreciate any help I can get.
I have a React frontend and a Node backend server running on an AWS EC2 instance. I have no problem serving the frontend to my browser from port 80 (NGINX server) on the instance's public IP address, but the GET request to the Node server on port 3001 returns the console error "net::ERR_CONNECTION_REFUSED".
Troubleshooting so far:
Confirmed the NGINX and Node servers are running on their proper ports.
Performed a curl request from the EC2 terminal (curl -X GET http://127.0.0.1:3001/api/users) to the backend server; the information is served successfully from the server/DB, but when the request comes from the app running in the client, the connection is refused.
Made many changes to the NGINX .conf file (one at a time), including using the public IP vs. localhost (or even 127.0.0.1:3001) for the backend Express server, but with no success.
Made sure to restart the NGINX server to pick up .conf changes.
Since I am able to get a response when I use a "curl" request from the VM terminal but not when I request from the client, I wonder if it has something to do with my security group rules. I have Type "HTTPS" on port 443 and "HTTP" on port 80 with "0.0.0.0/0" and "::/0" on both and SSH on port 22 with "0.0.0.0/0". Is there anything that I am missing?
Here is the NGINX .conf info for the servers
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root /usr/share/nginx/html/aws-thought/client/build;
        index index.html;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:3001;
    }
}
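A likely cause (an assumption, since the client code isn't shown): the React app is fetching http://127.0.0.1:3001/... or the public IP on port 3001 directly, so the visitor's browser tries to open port 3001 itself, which is neither open in the security group nor local to the visitor's machine. If the client instead requests the relative path /api/users, the request goes to port 80 and the location /api/ block proxies it locally, so no extra security group rule is needed. One nginx subtlety worth knowing when doing this:

```nginx
location /api/ {
    # proxy_pass WITHOUT a URI part: /api/users is forwarded to the
    # backend unchanged, as /api/users. The Express routes must
    # therefore be mounted under /api.
    proxy_pass http://127.0.0.1:3001;

    # proxy_pass WITH a trailing slash would instead replace the
    # matched prefix: /api/users would be forwarded as /users.
    # proxy_pass http://127.0.0.1:3001/;
}
```

Whether the backend routes should include the /api prefix depends on how the Express app is written, so check which of the two forms matches your routes.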

Use nginx to serve static React files on a remote server's localhost

Hi all! I am trying to follow this tutorial: https://www.youtube.com/watch?v=tGYNYPKTyno in order to use Nginx to serve my static React files.
In the video, the dev navigates to the IP address of the EC2 instance and the React application is served. I attempted to navigate to what I believe to be the IP address for this particular server (i.e., the server name in the bash prompt is like user#123-45-6-789), but I am met with a connection timeout error.
I then attempted to tunnel with PuTTY to the server's port 80, forwarding to a local port (i.e., localhost:6000), but I similarly got a connection timeout error. I know my tunnels work (I can run my API and my React application using yarn build), so the tunnels are not at fault. Also, when I run netstat, I see that the local address 0.0.0.0:80 is currently in use by nginx.
My config file is as follows:
server {
    listen 80;
    listen [::]:80;

    root /home/user/application/deployment/build;

    location / {
        try_files $uri /index.html;
    }
}
Any and all advice would be appreciated!
-- Edit --
My nginx.conf file includes the line
include /etc/nginx/conf.d/*.conf;
as indicated in the video.
Friends, my current changes: I moved my files to a www folder under /var and pointed the root at it. See the config file below:
server {
    listen 3500;
    server_name localhost;

    location / {
        root /var/www/appication/deployment/build;
        index index.html;
    }
}
I then used an SSH tunnel to connect to localhost port 3500 and can now access it from my local computer. The reason I was not able to access the server by its IP address is that it exists only in the private cloud. I am now moving on to reverse proxying and will later connect this to a domain. Cheers!
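The reverse-proxying step mentioned above could look something like the following sketch. The backend port (3001) and the paths are assumptions, not taken from the original post:

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve the React build directly.
    root /var/www/application/deployment/build;
    index index.html;

    location / {
        try_files $uri /index.html;
    }

    # Hand API calls to a locally running backend.
    location /api/ {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
    }
}
```

Once a domain points at the machine, server_name can be switched from a placeholder to the real hostname without changing anything else.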

Nginx - have two servers running on one .conf file connected to the same port

I created a configuration file called nginxserver.conf to run two servers from the same machine, like so:
server {
    listen 83;
    listen [::]:83;
    server_name u.myproject.com;

    location / {
        return 301 http://0.0.0.0:5000;
    }
}

server {
    listen 83;
    listen [::]:83;
    server_name t.myproject.com;

    location / {
        return 301 http://0.0.0.0:5001;
    }
}
While a previous configuration file that let me connect to each server on different ports worked, this version only seems to let me connect to http://0.0.0.0:5001. How do I fix this configuration file so that the server_name of each server block routes to the respective backend while both nginx servers run on the same port?
Edit: I have changed the configuration file to say this:
server {
    listen 83;
    listen [::]:83;
    server_name u.myproject.com;

    location / {
        return 301 http://127.0.0.1:5000;
    }
}

server {
    listen 83;
    listen [::]:83;
    server_name t.myproject.com;

    location / {
        return 301 http://127.0.0.1:5001;
    }
}
However, typing u.myproject.com:83 or t.myproject.com:83 into the browser just gives me an error message saying "Hmm. We're having trouble finding that site. We can't connect to the server at .", so the issue still isn't fixed.
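That browser message indicates a DNS resolution failure: u.myproject.com and t.myproject.com must resolve to the machine running nginx (via real DNS records or /etc/hosts entries on the client) before nginx's server_name matching can ever happen. Separately, return 301 sends the browser a redirect, so the browser then needs direct access to ports 5000/5001; if the goal is for nginx to front both apps on port 83, a proxy_pass is usually what's wanted. A minimal sketch, using the hostnames and ports from the question:

```nginx
server {
    listen 83;
    listen [::]:83;
    server_name u.myproject.com;

    location / {
        # Proxy instead of redirecting, so the browser never needs
        # direct access to port 5000.
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }
}

server {
    listen 83;
    listen [::]:83;
    server_name t.myproject.com;

    location / {
        proxy_pass http://127.0.0.1:5001;
        proxy_set_header Host $host;
    }
}
```

With this shape, which backend is chosen depends only on the Host header, so both sites share port 83 cleanly.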

Deploying Client and Server to the same VM

I have an application with a React frontend and a Python Flask backend. The frontend communicates with the server to perform specific operations, and the server API should only be used by the client.
I have deployed the whole application (client and server) to an Ubuntu virtual machine. The machine only has specific ports open to the public (5000, 443, 22). I have set up the Nginx configuration, and the frontend can be accessed from my browser via http://<ip:address>:5000. The server is running locally on a different port, 4000, which is not accessible to the public, as designed.
The problem is that when I access the client app and navigate to the pages that communicate with the server via http://127.0.0.1:4000 from the React app, I get an error saying the connection was refused:
GET http://127.0.0.1:4000/ net::ERR_CONNECTION_REFUSED on my browser.
When I ssh into the VM and run the same request through curl (curl http://127.0.0.1:4000/), I get a response and everything works fine.
Is there a way I can deploy the server in the same vm such that when I access the client React App from my browser, the React App can access the server without problems?
So after tinkering with this, I found a solution using Nginx. In summary: run the server locally on a different port, say 4000 (not exposed to the public), then expose your React app on the open port, in this case 5000.
Then use a proxy in your Nginx config that forwards any call starting with /api to the locally running server. See the config below:
server {
    # Exposed port and server_name (IP address or domain)
    listen 5000;
    server_name 1.2.3.4 mydomain.com;

    # SSL (note: "ssl on" is deprecated in newer nginx; use "listen 5000 ssl;" instead)
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;

    # Link to the react build
    root /var/www/html/build;
    index index.html index.htm;

    # Error and access logs for debugging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.html =404;
    }

    # Forward any traffic beginning with /api to the flask server
    location /api {
        include proxy_params;
        proxy_pass http://localhost:4000;
    }
}
Now this means all your server endpoints need to begin with /api/..., and a user can also reach an endpoint from the browser via http://<ip:address>:5000/api/endpoint.
You can mitigate this by having your client send a token to the server, and the server will not run any commands without that token/authorization.
I found the solution elsewhere (a three-part series) and modified it to fit my specific need.

Using nginx to run dynamic content

I want to use nginx to run my Node.js application. I created a build of the application, and inside my nginx.conf I set the root to point to the location of the build folder. This worked, and my application ran successfully on nginx.
Now I'm wondering if I could serve dynamic content directly through nginx. Just as I would get the app running with npm start, can I do something similar with nginx instead of using the build (static) files?
You need a reverse proxy.
In your application, configure your server to run on an internal port, for example 3000.
Then configure nginx to proxy connections to your app. Here's a simple nginx configuration (these directives go inside a server block) to do just that:
root /path/to/app/build;

# Handle static content
location ^~ /static {
    try_files $uri $uri/ =404;
}

# Handle dynamic content
location / {
    proxy_pass http://127.0.0.1:3000;
}
Or, if you prefer, you can invert the URL scheme to default to static files:
root /path/to/app/build;

# Handle dynamic content
location ^~ /api {
    proxy_pass http://127.0.0.1:3000;
}

# Handle static content
location / {
    try_files $uri $uri/ =404;
}
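For completeness, a minimal sketch of either fragment embedded in a full server block; the listen port and server_name here are assumptions:

```nginx
server {
    listen 80;
    server_name example.com;

    root /path/to/app/build;

    # Dynamic content goes to the node app.
    location ^~ /api {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }

    # Everything else is served from the static build.
    location / {
        try_files $uri $uri/ =404;
    }
}
```

The ^~ modifier gives the /api prefix priority over regex locations, so API calls never get swallowed by static-file handling.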
Why do something like this?
There are several reasons to use an nginx front end instead of having your server serve directly on port 80:
Nginx can serve static content much faster than Express.static or other node static servers.
Nginx can act as a load balancer when you want to scale your server.
Nginx has been battle-tested on the internet, so most security issues have been fixed or are well known. In comparison, express and http.server are just libraries, and you are the person responsible for your application's security.
Nginx is a bit faster at serving HTTPS than node, so you can develop a plain-old HTTP server in node and let nginx handle the encryption.
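As a sketch of that last point, TLS termination in front of a plain-HTTP node app could look like this; the certificate paths and hostname are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate paths.
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        # The node app itself speaks plain HTTP on an internal port;
        # nginx handles all the encryption.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The X-Forwarded-Proto header lets the node app know the original request was HTTPS, which matters for building absolute URLs or setting secure cookies.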
