Hi, I am new to configuring an nginx server and I have a problem:
I have two web applications created with ReactJS, which I will call app1 and app2.
I'd like to serve app1 on www.mydomain.com
and app2 on www.mydomain.com/app2.
Is this possible?
This is my nginx configuration file, /etc/nginx/sites-available/default:
server {
    index index.html index.htm index.nginx-debian.html;
    server_name xxx.com www.xxx.com;

    location / {
        root /var/www/app1/build;
    }

    location /app2 {
        alias /var/www/app2/build;
    }
}
Using this configuration, if I connect to www.domain.com, app1 works correctly, while if I connect to www.domain.com/app2 the page is completely white and the console gives me these errors:

<noscript>You need to enable JavaScript to run this app.</noscript>
2.a90b013e.chunk.js:1 Failed to load resource: the server responded with a status of 404 (Not Found)
main.bbc61316.chunk.js:1 Failed to load resource: the server responded with a status of 404 (Not Found)
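For reference, 404s on the chunk files usually mean the browser is requesting app2's assets from the site root instead of from /app2/. A config along the lines below often resolves this; the paths are taken from the question, while the try_files fallbacks and the assumption that app2 is built with "homepage": "/app2" in its package.json (so Create React App prefixes its asset URLs) are mine:

server {
    server_name xxx.com www.xxx.com;
    index index.html index.htm;

    # app1 at the site root; unknown paths fall back to its index.html
    location / {
        root /var/www/app1/build;
        try_files $uri /index.html;
    }

    # app2 under /app2; requires the build to prefix asset URLs with /app2
    location /app2 {
        alias /var/www/app2/build;
        try_files $uri $uri/ /app2/index.html;
    }
}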
I have two domains:
zerp.io (SSL installed)
app.zerp.io (HTTP only)
On zerp.io (the main domain) a WordPress website is hosted and working fine. I am trying to deploy a React app on app.zerp.io using nginx. I deleted the default file, created a new file app.zerp.io at /etc/nginx/sites-available/, and symlinked it into /etc/nginx/sites-enabled/. I checked the DNS entries: app.zerp.io and www.app.zerp.io point to the public IP of the correct server, where the React app resides.
Here's my /etc/nginx/sites-available/app.zerp.io file:
server {
    listen 80;
    index index.html index.htm index.nginx-debian.html;
    server_name www.app.zerp.io app.zerp.io;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}
The problem is, whenever I try to reach http://app.zerp.io through a web browser, it redirects me to https://zerp.io. Here's what I have done so far:
I checked DNS using an online tool; it is correctly pointing to the server.
I did not use any 301 redirects in the configuration file, as you can see above.
When I curl app.zerp.io from the production server (in Germany), it sometimes gives 200 with the correct response and sometimes 301 (Moved Permanently). Crazy, isn't it?
When I curl app.zerp.io from my local computer, it always gives me 301, although I do not have any 301 in my nginx config file.
I thought it might be a cache issue in my Chrome; to my surprise, no. I cleared the cache and hard-reloaded, and even tried incognito mode, with no success: it always redirects me to https://zerp.io.
When I curl app.zerp.io from my local computer through a VPS, it correctly opens the website app.zerp.io.
I do not have any SSL certificate, so there are no redirects from HTTP to HTTPS on http://app.zerp.io.
It's been two days and it's making me crazy. I am assuming it has something to do with DNS resolution. Can someone please help me out?
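A quick way to narrow this down is to look at who actually sends the 301. The commands below are a diagnostic sketch, assuming dig and curl are available; if the response carries a server: cloudflare or cf-ray header, the DNS record is being proxied through a CDN that forces HTTPS, and the redirect never reaches this nginx at all:

# Which addresses does the subdomain resolve to?
dig +short app.zerp.io

# Fetch only the headers of the redirect response and inspect them
curl -sI http://app.zerp.io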
I have a Django project that I have already deployed successfully on my Ubuntu 18.04 server via gunicorn and nginx, using this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04
The project uses Django REST Framework, and I'm able to access its endpoints via a web browser. However, I would also like to deploy a separate React project on the same server, so that it can send HTTP requests to the Django app and display the data received from the REST API. How can I go about doing this?
Here is my current gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/my_project/coffeebrewer
ExecStart=/home/ubuntu/my_project/venv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/ubuntu/my_project/coffeebrewer/coffeebrewer.sock coffeebrewer.wsgi:application
[Install]
WantedBy=multi-user.target
And here is my current nginx configuration:
server {
    listen 80;
    listen [::]:80;
    server_name my_ipv6_address;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ubuntu/my_project/coffeebrewer;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
I recommend that you first try it locally in production mode by installing the whitenoise Django package and adding this line to your settings.py file: SECURE_CROSS_ORIGIN_OPENER_POLICY = None
From there, you can move forward.
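As a minimal sketch of that suggestion (the middleware list and STATIC_ROOT path are assumptions; only the last line comes from the advice above):

# settings.py
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # let Django serve its own static files
    # ... the rest of your middleware ...
]

STATIC_ROOT = BASE_DIR / "staticfiles"  # where collectstatic gathers the files

SECURE_CROSS_ORIGIN_OPENER_POLICY = None  # the line suggested above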
I have an application with a React frontend and a Python Flask backend. The frontend communicates with the server to perform specific operations, and the server API should only be used by the client.
I have deployed the whole application (client and server) to an Ubuntu virtual machine. The machine only has specific ports open to the public (5000, 443, 22). I have set up the nginx configuration, and the frontend can be accessed from my browser via http://<ip:address>:5000. The server runs locally on a different port, 4000, which is not accessible to the public, as designed.
The problem is that when I access the client app and navigate to the pages that communicate with the server via http://127.0.0.1:4000, I get an error saying the connection was refused:
GET http://127.0.0.1:4000/ net::ERR_CONNECTION_REFUSED
When I ssh into the VM and run the same request through curl (curl http://127.0.0.1:4000/), I get a response and everything works fine.
Is there a way I can deploy the server on the same VM such that, when I access the client React app from my browser, the React app can reach the server without problems?
So after tinkering with this, I found a solution using nginx. In summary: you run the server locally on a port that is not exposed to the public (say 4000), and expose your React app on the public port (5000 in this case).
Then you add a proxy rule to your nginx config that forwards any call starting with /api to the locally running server. See the config below:
server {
    # Exposed port and server name / IP address
    listen 5000 ssl;
    server_name 1.2.3.4 mydomain.com;

    # SSL (the standalone "ssl on;" directive is deprecated; enable it on the listen line)
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;

    # Link to the React build
    root /var/www/html/build;
    index index.html index.htm;

    # Error and access logs for debugging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri /index.html =404;
    }

    # Forward any traffic beginning with /api to the Flask server
    location /api {
        include proxy_params;
        proxy_pass http://localhost:4000;
    }
}
Note that this means all your server endpoints need to begin with /api/..., and that the user can also reach an endpoint directly from the browser via http://<ip:address>:5000/api/endpoint.
You can mitigate this by having your client send a token to the server, and having the server refuse to run any commands without that token/authorization.
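A minimal sketch of that token check on the Flask side could look like this; the header name, environment variable, and sample route are illustrative assumptions:

# app.py -- reject any request that lacks the expected bearer token
import os

from flask import Flask, abort, request

app = Flask(__name__)
API_TOKEN = os.environ.get("API_TOKEN", "change-me")

@app.before_request
def require_token():
    # Every request must carry "Authorization: Bearer <token>"
    if request.headers.get("Authorization") != f"Bearer {API_TOKEN}":
        abort(401)

@app.route("/api/health")
def health():
    return {"status": "ok"}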
I found the original solution elsewhere and modified it to fit my specific needs.
I want to fetch some info, but when I try to deploy this to a server (Ubuntu 18.04) with nginx, I can't fetch... What I have tried:
Put a certificate in place to enable HTTPS on my domain.
Created a .env file with a variable that contains the complete URL of the API (because I'm using a proxy in development).
Added some headers to the request.
Tried changing the nginx config.
But nothing... my application only works when running on localhost.
axios.get(process.env.REACT_APP_API_URL) ...
The browser console (Safari) shows:
Origin https://mysubdomain.com is not allowed by Access-Control-Allow-Origin.
XMLHttpRequest cannot load https://mysubdomain.com due to access control checks.
Failed to load resource: Origin https://mysubdomain.com is not allowed by Access-Control-Allow-Origin.
Your server needs to return the header value below:
Access-Control-Allow-Origin: *
which means anyone can connect to the API.
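In nginx, that header can be set with add_header; this is a sketch, and the location path and upstream address are assumptions:

location /api/ {
    # Allow cross-origin requests from any origin (open to everyone)
    add_header Access-Control-Allow-Origin "*" always;
    proxy_pass http://localhost:4000;
}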
Workaround:
Go to your Chrome folder and run:
chrome.exe --user-data-dir="<Some directory name to store temporary chrome data>" --disable-web-security
I'm no expert in nginx, but this works!
I edited my site file in /etc/nginx/sites-available/mysite like this:
location /anyAppLocation/ {
    proxy_method GET;
    proxy_pass_request_headers on;
    proxy_pass https://api.site.com;
    proxy_redirect default;
}
I have a REST back-end service located on some server and a front-end application made in Angular.
I'm using the Angular CLI to build the application. The location of my back-end server is kept in an environment file.
The requirement for my app is that I provide two Docker images: one with my back-end server (a Java Spring Boot app), and a second one with the static HTML built by the ng build myApp command. I then copy the content of the dist directory to the proper directory in the Docker image, as shown here: Nginx docker image.
The problem is that the back-end and front-end may run on different servers. Is there any way I can configure my front-end app so that the back-end server location can be changed when the container starts?
I know this is an old question, but I faced the exact same problem and it took me a while to solve. May this be of help to those coming from search engines.
I found 2 solutions (I ended up choosing the second one). Both allow you to use environment variables in docker to configure your API URL.
Solution 1 ("client"-side): env.js asset + sed
The idea is to have your Angular client load an env.js file from your HTTP server. This env.js will contain the API URL, and will be modifiable by your container when it starts. This is what you discussed in the question comments.
Add an env.js to your Angular app's assets folder (src/assets for me with angular-cli):
var MY_APP_ENV = {
    apiUrl: 'http://localhost:9400',
};
In your index.html, you will load your env:
<head>
    <meta charset="utf-8">
    <base href="/">
    <script src="env.js"></script>
</head>
In your environment.ts, you can use the variable:
declare var MY_APP_ENV: any;

export const environment = {
    production: false,
    apiUrl: MY_APP_ENV.apiUrl
};
In your NGINX Dockerfile do:
FROM nginx:1.11-alpine
COPY tmp/dist /usr/share/nginx/html
COPY run.sh /run.sh
CMD ["sh", "/run.sh"]
The run.sh script is where the sed magic happens:
#!/bin/sh
# API
/bin/sed -i "s|http://localhost:9400|${MY_API_URL}|" /usr/share/nginx/html/env.js
nginx -g 'daemon off;'
In your Angular services, use environment.apiUrl to connect to the API (you need to import environment; see the Angular 2 docs).
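For completeness, injecting the URL at container start then looks something like this (the image name is an assumption):

# Build the image, then run it with the real API URL substituted at startup
docker build -t my-frontend .
docker run -e MY_API_URL=https://api.example.com -p 80:80 my-frontend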
Solution 2 (purely server side): nginx proxy config + envsubst
I wasn't happy with the previous solution because the API URL had to make sense from the host's point of view; it couldn't use another container's hostname in my docker-compose setup.
So I thought: many people use NGINX as a proxy server, so why not proxy /api to my other container this way?
Dockerfile:
FROM nginx:1.11-alpine
COPY tmp/dist /usr/share/nginx/html
COPY frontend.conf.template /etc/nginx/conf.d/frontend.conf.template
COPY run.sh /run.sh
CMD ["/bin/sh", "/run.sh"]
frontend.conf.template:
server {
    listen 80;
    server_name myserver;

    # API server
    location /api/ {
        proxy_pass ${MY_API_URL}/;
    }

    # Main
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri$args $uri$args/ /index.html;
    }

    #error_page 404 /404.html;

    # Redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
run.sh:
#!/bin/sh
# Substitute env vars
envsubst '$MY_API_URL' \
< /etc/nginx/conf.d/frontend.conf.template \
> /etc/nginx/conf.d/default.conf
# Start server
nginx -g 'daemon off;'
envsubst allows you to substitute environment variables in a string with a shell-like syntax.
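A quick demonstration of what that substitution does (runnable in any shell that has gettext's envsubst installed):

export MY_API_URL=http://api:8080
echo 'proxy_pass ${MY_API_URL}/;' | envsubst '$MY_API_URL'
# prints: proxy_pass http://api:8080/;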
Then use /api/xyz to connect to the API from the Angular app.
I think the second solution is much cleaner. The API URL can be the API container's name in a docker-compose setup, which is nice, and the client is not involved at all; it is transparent. However, it does depend on NGINX.
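For example, a docker-compose setup along these lines lets the frontend reach the API by its service name; all service and image names here are assumptions:

# docker-compose.yml
version: "3"
services:
  api:
    image: my-backend:latest
  frontend:
    image: my-frontend:latest
    ports:
      - "80:80"
    environment:
      # nginx inside the container proxies /api/ to the backend service
      - MY_API_URL=http://api:8080
    depends_on:
      - api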