WildFly for backend and NGINX for frontend - ReactJS

I am trying to deploy an entire application that is built with a ReactJS frontend and a Spring backend. The backend, which serves the APIs, is already deployed on the server using WildFly.
My question is can I install NGINX on the same server to host the ReactJS frontend?

Yes, you can install NGINX and WildFly on the same server.
In such a scenario, NGINX is typically configured as a reverse proxy.
For example, if WildFly is listening on port 8080, you can use an NGINX configuration like:
server {
    listen 80;
    server_name _;
    index index.html;

    location / {
        root /path/to/var/www/yourSite;
    }

    location /YourAPIRoot/ {
        proxy_pass http://localhost:8080/YourAPIRoot/;
    }
}
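With that in place, the React app only needs to call the API with relative URLs (e.g. /YourAPIRoot/...), so the browser always talks to NGINX on port 80. Deploying the frontend is then just a matter of copying the production build into the directory that root points to; a minimal sketch, assuming the app is built with npm run build and the placeholder paths above:
# copy the production build into the directory NGINX serves
cp -r build/* /path/to/var/www/yourSite/
# check the configuration and reload NGINX
sudo nginx -t && sudo systemctl reload nginx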
See also
Nginx both serving static files and reverse proxy a gunicorn server - serverfault
Part 2.2 - Install Nginx and configure it as a reverse proxy server - Microsoft Docs
Module ngx_http_proxy_module - nginx.org

Related

ERR_CONNECTION_REFUSED AWS EC2 when performing GET to backend server

This is my first AWS deployment and I have what is going to be a simple question (but not for me). I would appreciate any help I can get.
I have a React frontend and a backend node server running on an AWS EC2 instance. I have no problem serving the front end to my browser from port 80 (NGINX server) on the public IP address for the EC2 instance but the GET request to the node server on port 3001 returns an error to the console "net::ERR_CONNECTION_REFUSED".
Troubleshooting so far:
confirmed NGINX and Node servers are running on their proper ports
I performed a curl request from the EC2 terminal (curl -X GET http://127.0.0.1:3001/api/users) to the backend server and the information is served successfully from the server/DB, but when the request comes from the app running in the client, the connection is refused.
I made many changes to the NGINX .conf file (one at a time) including using the public IP vs using localhost (or even 127.0.0.1:3001) for the backend express server but with no success.
Made sure to restart the NGINX server to pick up .conf changes.
Since I am able to get a response when I use a "curl" request from the VM terminal but not when I request from the client, I wonder if it has something to do with my security group rules. I have Type "HTTPS" on port 443 and "HTTP" on port 80 with "0.0.0.0/0" and "::/0" on both and SSH on port 22 with "0.0.0.0/0". Is there anything that I am missing?
Here is the NGINX .conf info for the servers
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    #charset koi8-r;
    #access_log logs/host.access.log main;

    location / {
        root /usr/share/nginx/html/aws-thought/client/build;
        index index.html;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:3001;
    }
}
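Given that configuration, the browser should request the API through NGINX on port 80 (e.g. /api/users) rather than hitting port 3001 directly, since the security group only opens 80, 443 and 22. A quick check from outside the instance (the public IP is a placeholder):
curl http://<ec2-public-ip>/api/users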

How to deploy a Django and React project on the same Ubuntu 18.04 server using gunicorn and nginx?

I have a Django project that I have already successfully deployed on my Ubuntu 18.04 server via gunicorn and nginx using this tutorial.
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04
The project uses Django Rest Framework and I'm able to access its endpoints via a web browser. However, I would also like to deploy a separate React project on the same server, so that it can send HTTP requests to the Django app and display data received from the REST API. How can I go about doing this?
Here is my current gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/my_project/coffeebrewer
ExecStart=/home/ubuntu/my_project/venv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/ubuntu/my_project/coffeebrewer/coffeebrewer.sock coffeebrewer.wsgi:application
[Install]
WantedBy=multi-user.target
And here are my current nginx configurations
server {
    listen 80;
    listen [::]:80;
    server_name my_ipv6_address;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ubuntu/my_project/coffeebrewer;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
I recommend that you first try it locally in production mode by installing the whitenoise Django package and adding this line to your settings.py: SECURE_CROSS_ORIGIN_OPENER_POLICY = None
From there, you can move forward.
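For the nginx side, a minimal sketch (not from the answer above; the build directory and the /api/ prefix are assumptions) of how the existing server block could also serve the React production build while the Django REST API stays behind gunicorn:
    location /api/ {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    location / {
        root /home/ubuntu/my_project/frontend/build;
        index index.html;
        try_files $uri /index.html;
    }
The React app would then call the API with relative /api/... URLs, so both apps are served from the same host and port.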

Deploying Client and Server to the same VM

I have an application that has a React frontend and a Python Flask backend. The frontend communicates with the server to perform specific operations and the server api should only be used by the client.
I have deployed the whole application (client and server) to an Ubuntu virtual machine. The machine only has specific ports open to the public (5000, 443, 22). I have set up an Nginx configuration and the frontend can be accessed from my browser via http://<ip:address>:5000. The server is running locally on a different port, 4000, which is not accessible to the public, as designed.
The problem is that when I access the client app and navigate to the pages that communicate with the server via http://127.0.0.1:4000 from the React app, I get an error saying the connection was refused.
GET http://127.0.0.1:4000/ net::ERR_CONNECTION_REFUSED on my browser.
When I ssh into the VM and run the same request through curl (curl http://127.0.0.1:4000/), I get a response and everything works fine.
Is there a way I can deploy the server in the same vm such that when I access the client React App from my browser, the React App can access the server without problems?
So after tinkering with this, I found a solution using Nginx. In summary: run the server locally on a different port, say 4000 (not exposed to the public), and expose your React app on the public port, in this case 5000.
Then add a proxy rule in your Nginx config that forwards any call starting with /api to the locally running server. See the config below:
server {
    # Exposed port and server name / IP address
    listen 5000;
    server_name 1.2.3.4 mydomain.com;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;

    # Link to the react build
    root /var/www/html/build;
    index index.html index.htm;

    # Error and access logs for debugging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.html =404;
    }

    # Redirect any traffic beginning with /api to the flask server
    location /api {
        include proxy_params;
        proxy_pass http://localhost:4000;
    }
}
Now this means you need to have all your server endpoints begin with /api/..., and the user can also access the endpoints directly from the browser via http://<ip:address>:5000/api/endpoint.
You can mitigate this by having your client send a token to the server, and the server will not run any commands without that token/authorization.
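For example (the token and endpoint are placeholders; -k is only needed if the certificate is self-signed), a client request carrying such a token would look like:
curl -k -H "Authorization: Bearer <token>" https://<ip:address>:5000/api/endpoint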
I found the solution here and modified it to fit my specific need: Part two of solution.
The other parts of the series can be found at Part one of solution and Part three of solution.

Configuration of Angular2 application - nginx and docker

I have a REST back-end service located on some server and a front-end application made in Angular.
I'm using Angular CLI to build the application. The location of my back-end server is stored in an environment file.
The requirement for my app is that I provide two Docker images: one with my back-end server (a Java Spring Boot app) and a second one with the static HTML built with the ng build myApp command. I then copy the content of the dist directory to the proper directory in the Docker image, as shown here: Nginx docker image.
The problem is that the back-end and front-end may run on different servers. Is there any way I can configure my front-end app so that I can change the back-end server location when the container starts?
I know this is an old question, but I faced the exact same problem and it took me a while to solve. May this be of help to those coming from search engines.
I found 2 solutions (I ended up choosing the second one). Both allow you to use environment variables in docker to configure your API URL.
Solution 1 ("client"-side): env.js asset + sed
The idea is to have your Angular client load an env.js file from your HTTP server. This env.js will contain the API URL, and will be modifiable by your container when it starts. This is what you discussed in the question comments.
Add an env.js in your angular app assets folder (src/assets for me with angular-cli):
var MY_APP_ENV = {
  apiUrl: 'http://localhost:9400',
}
In your index.html, you will load your env:
<head>
  <meta charset="utf-8">
  <base href="/">
  <script src="env.js"></script>
</head>
In your environment.ts, you can use the variable:
declare var MY_APP_ENV: any;

export const environment = {
  production: false,
  apiUrl: MY_APP_ENV.apiUrl
};
In your NGINX Dockerfile do:
FROM nginx:1.11-alpine
COPY tmp/dist /usr/share/nginx/html
COPY run.sh /run.sh
CMD ["sh", "/run.sh"]
The run.sh script is where the sed magic happens:
#!/bin/sh
# API
/bin/sed -i "s|http://localhost:9400|${MY_API_URL}|" /usr/share/nginx/html/env.js
nginx -g 'daemon off;'
In your angular services, use environment.apiUrl to connect to the API (you need to import environment, see Angular 2 docs).
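The image can then be built and started with the API URL injected as an environment variable; a usage sketch (the image name and URL are hypothetical):
docker build -t my-frontend .
docker run -d -p 80:80 -e MY_API_URL=http://api.example.com:9400 my-frontend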
Solution 2 (purely server side): nginx proxy config + envsubst
I wasn't happy with the previous solution because the API URL needed to be from the host's point of view; it couldn't use another container's hostname in my docker-compose setup.
So I thought: many people use NGINX as a proxy server, why not proxy /api to my other container this way.
Dockerfile:
FROM nginx:1.11-alpine
COPY tmp/dist /usr/share/nginx/html
COPY frontend.conf.template /etc/nginx/conf.d/frontend.conf.template
COPY run.sh /run.sh
CMD ["/bin/sh", "/run.sh"]
frontend.conf.template:
server {
    listen 80;
    server_name myserver;

    # API Server
    location /api/ {
        proxy_pass ${MY_API_URL}/;
    }

    # Main
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri$args $uri$args/ /index.html;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
run.sh:
#!/bin/sh
# Substitute env vars
envsubst '$MY_API_URL' \
< /etc/nginx/conf.d/frontend.conf.template \
> /etc/nginx/conf.d/default.conf
# Start server
nginx -g 'daemon off;'
envsubst allows you to substitute environment variables in a string with a shell-like syntax.
Then use /api/xyz to connect to the API from the Angular app.
I think the second solution is much cleaner. The API URL can be the API docker container name in a docker-compose setup, which is nice. The client is not involved, it is transparent. However, it depends on NGINX.
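For example (the image, container and network names, and the API port, are hypothetical), with both containers on the same user-defined network the API URL can simply point at the API container by name:
docker network create app-net
docker run -d --name api --network app-net my-api-image
docker run -d --name frontend --network app-net -p 80:80 -e MY_API_URL=http://api:8080 my-frontend-image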

How to host multiple dockerized websites (nginx) on one IP address?

Here is my scenario:
1. I have an AWS EC2 machine (CoreOS)
2. I have hosted multiple APIs on it - all in Docker containers
3. I have HAProxy listening on another port (say 999) that load balances the multiple APIs. Works perfectly ...
4. I have another nginx container which hosts my Angular site. This obviously listens on port 80. Assume it's mapped to http://pagladasu.com
What I want is to create http://one.pagladasu.com and http://two.pagladasu.com and so forth, with each pointing to a different Angular application in the Docker containers.
The issue is that both need to listen on port 80 - so how do I accomplish that?
Create a container that listens on port 80 and runs Nginx. Configure Nginx with virtual hosts for each of your subdomains (one.pagladasu.com, two.pagladasu.com), using proxy_pass to send the connections to upstream angular containers. Something like this:
server {
    listen 80;
    server_name one.pagladasu.com;

    location / {
        proxy_pass http://one-pagladasu-com;
    }
}

server {
    listen 80;
    server_name two.pagladasu.com;

    location / {
        proxy_pass http://two-pagladasu-com;
    }
}
Link this Nginx container to the two angular containers. Docker will modify /etc/hosts for you so that you may refer to them by name. In this case I've assumed they are named like one-pagladasu-com but of course it can be anything.
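For example (the image names are hypothetical), using legacy container links:
docker run -d --name one-pagladasu-com my-angular-one
docker run -d --name two-pagladasu-com my-angular-two
docker run -d -p 80:80 --link one-pagladasu-com --link two-pagladasu-com my-nginx-proxy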
Now the flow is Requests => Nginx virtual hosts container => Angular container => HAProxy => APIs.
