It works with just Tomcat, but it is not working with nginx.
When I request it with http://localhost:port in a Tomcat-only environment it works fine. However, behind the nginx reverse proxy there are no errors, yet the file download does not work.
This is the Spring Java service code mapped to the URL /excel/clientSampleDownload:
// service code
public void downloadSampleExcelFileTms(HttpServletRequest request, HttpServletResponse response, Locale locale) throws Exception {
    ...
    SXSSFWorkbook wb = new SXSSFWorkbook();
    response.setHeader("Set-Cookie", "fileDownload=true; path=/");
    // re-encode the file name from KSC5601 to ISO-8859-1 for the header value
    response.setHeader("Content-Disposition",
            "attachment; filename=\"" + new String(saveFileName.getBytes("KSC5601"), "8859_1") + ".xlsx\"");
    OutputStream outputStream = response.getOutputStream();
    wb.write(outputStream);
    wb.dispose();
    outputStream.close();
    wb.close();
}
This is the nginx config:
# default.conf
server {
    listen 80;
    server_name domain;

    client_max_body_size 2000M;

    location /manageChannel {
        proxy_pass http://localhost:19912;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Origin "";
    }

    location /resources/ {
        alias /var/www/advertise.alancorp.co.kr/static/resources/;
        autoindex off;
        access_log off;
        expires 1M;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://localhost:19912;
        proxy_redirect off;
        charset utf-8;

        # buffer size
        proxy_buffering on;
        proxy_buffer_size 1024k;
        proxy_buffers 1024 1024k;
        client_body_buffer_size 1024k;
        proxy_busy_buffers_size 1024k;
    }
}
What should I add to the nginx configuration?
I got the answer here: https://gist.github.com/zeroasterisk/5535517
I think the problem was with the Content-Type header.
My nginx version is 1.1.0. I added application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx; to conf/mime.types.
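For reference, this is roughly what the entry looks like inside the types block of conf/mime.types (the neighbouring html entry is shown only for context, not something that needs to be added):
# conf/mime.types (excerpt)
types {
    text/html                                                            html htm shtml;
    application/vnd.openxmlformats-officedocument.spreadsheetml.sheet    xlsx;
}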
Related
So my website has two parts:
1- The /api, /oauth and /assets locations are redirected to a Laravel app with a simple proxy_pass to its Docker port.
2- The web app, which is a React app. We build an image of the web app (so no files are transferred to the server) and run it on a port, say 3000.
This is a summary of my nginx configuration:
server {
    server_name mywebsite.com;

    location /api {
        proxy_pass http://localhost:8080/api;
        proxy_set_header Host $host;
    } # the same for the other Laravel paths

    location / {
        proxy_pass http://localhost:3000/;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
The problem is that if the user goes to a page, say site.com/profile, and refreshes it, they get a 404 error. A lot of googling suggested using try_files .. index.html, which works with local files, but not when using proxy_pass and Docker images.
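For contrast, the local-file version of that approach is just a try_files fallback to index.html; a minimal sketch, assuming the build output sits in a local root directory (the path below is an assumption):
location / {
    # only works when index.html is on local disk, not behind proxy_pass
    root /var/www/my-react-app/build;
    try_files $uri $uri/ /index.html;
}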
More googling led me to a solution that actually worked:
server {
    server_name mywebsite.com;

    location /api {
        proxy_pass http://localhost:8080/api;
        proxy_set_header Host $host;
    } # the same for the other Laravel paths

    location / {
        proxy_pass http://localhost:3000/;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_intercept_errors on;
        recursive_error_pages on;
        error_page 404 = @rewrite_proxy;
    }

    location @rewrite_proxy {
        rewrite ^/(.*)$ /index.html?$1 break;
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}
It's brilliant and works like a charm. Now the problem is, I'm looking for a solution to give control of ACTUAL 404 errors to the web app, so it can react in different ways depending on the URL. Any suggestions?
Check this out; hope it helps:
server {
    listen 80;

    location /api {
        proxy_pass http://backend/api;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location / {
        proxy_pass http://frontend;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        try_files $uri $uri/ /index.html @backend;
        error_page 405 @backend;
    }

    location @backend {
        proxy_pass http://frontend;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I have two apps running on AWS: Symfony on port 8000 (local) and React on port 3000 (local), accessible through TCP on port 80. The redirection was achieved by listening on port 80 within the nginx server.
server {
    listen 80;
    server_name example.info www.example.info;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}

server {
    listen 8000;
    server_name example.info www.example.info;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
I have tried listening on and redirecting two ports, but without success.
Within the server, the Symfony application is accessible with curl http://127.0.0.1:8000.
From outside, my React app sends API requests to the AWS external IP (123.123.123.123:8000), but I get a timeout. How can I access my back end from outside?
AWS Elastic Beanstalk - Configuring the proxy server for your back end
You can use this config file on your AWS EC2 instance as well.
/etc/nginx/conf.d/proxy.conf
upstream nodejs {
    server 127.0.0.1:5000;
    keepalive 256;
}

server {
    listen 8080;

    access_log /var/log/nginx/access.log main;

    location / {
        proxy_pass http://nodejs;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    gzip on;
    gzip_comp_level 4;
    gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ## Optional configuration if you want to allow AWS
    ## to cache your static files
    location /static {
        alias /var/app/current/static;
    }
}
Edit - Configuring Nginx for Symfony
server {
    listen 8080;
    server_name sf2testproject.dev;
    root /home/maurits/public_html/web;

    location / {
        # try to serve file directly, fallback to rewrite
        try_files $uri @rewriteapp;
    }

    location @rewriteapp {
        # rewrite all to app.php
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location ~ ^/(app|app_dev|config)\.php(/|$) {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Where:
listen is the port on which your application communicates with the outside world.
fastcgi_pass hands the request off to a FastCGI server (FastCGI is a binary protocol for interfacing interactive programs with a web server).
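As a side note, fastcgi_pass can also point at a Unix domain socket instead of a TCP address; a minimal sketch, assuming a php-fpm socket at /var/run/php-fpm.sock (the path is an assumption, adjust it to your pool config):
location ~ \.php$ {
    # socket path below is an assumption; use the one configured in your php-fpm pool
    fastcgi_pass unix:/var/run/php-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}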
References:
AWS Elastic Beanstalk - Node.js platform proxy
Symfony HHVM 3 + nginx 1.4 vs PHP 5.5 + Apache 2.4
FastCGI official example
I have a React front end being served by nginx, shown here:
server {
    listen 80 default_server;
    server_name website.* www.website.*;
    root /home/developer/website/frontend/build;

    location / {
        try_files $uri /index.html;
    }

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:4000;
    }
}
Additionally, I have a second Express app receiving requests at 127.0.0.1:4000. The front end calls fetch to 'api/something'; the Express app receives and handles the request but does not respond, and the client side fails with a 504 (Gateway Time-out). Any ideas?
You are missing the upstream server directive. Try this:
upstream api {
    server 127.0.0.1:4000;
}

# remove www from the url
server {
    listen 80;
    server_name www.website.com;
    return 301 $scheme://website.com$request_uri;
}

server {
    listen 0.0.0.0:80;
    server_name website.com website;

    error_log /var/log/nginx/website.com-error.log error;
    access_log /var/log/nginx/website.log;

    # pass the request to the node.js server with the correct headers
    location /api/ {
        proxy_pass http://api/;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
        proxy_ignore_headers Set-Cookie;
        proxy_hide_header Set-Cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-NginX-Proxy true;
    }
}
I want to run my application under myurl.com/app1/ using a reverse proxy (nginx) with AngularJS. But when I call myurl.com/app1/ I only get index.html; the static files still point to myurl.com/assets/ rather than myurl.com/app1/assets/.
upstream smileyface {
    server localhost:8081;
}

server {
    access_log /var/log/nginx/myurl_access.log;
    error_log /var/log/nginx/myurl_error.log warn;

    # SSL configuration
    listen 443 ssl;
    listen [::]:443 ssl;
    include snippets/ssl-myurl.com.conf;
    include snippets/ssl-params.conf;

    index index.html index.htm index.nginx-debian.html;
    server_name myurl.com www.myurl.com;

    location /app1/ {
        index index.html;
        alias /angular-app-folder/;
        proxy_pass http://smileyface/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        location ~ /.well-known {
            allow all;
        }
    }
}
I have 3 AngularJS applications, each with its own ExpressJS backend. How do you deploy all 3 applications in this manner:
http://myapp.com/<app1>
http://myapp.com/<app2>
http://myapp.com/<app3>
Additional information:
- I'm deploying the applications on an AWS EC2 instance.
- I tried merging the applications into a single ExpressJS app. While this works, I still want to know if the case above is possible.
Sure, it's possible. You'll just need NGINX or Apache running as a reverse proxy for you.
Assuming your Node apps are running on local ports 3000, 3001, and 3002, you'd set up a .conf file with those as upstream servers for the location blocks, like so:
. . .
location /app1 {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
. . .
location /app2 {
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
}
Read up on more details here: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-16-04
server {
    listen 80;
    server_name es.domain.com;

    location /app1 {
        rewrite ^/(.*) /$1 break;
        proxy_ignore_client_abort on;
        proxy_pass http://localhost:3001;
        proxy_redirect http://localhost:3001 https://es.domain.com/app1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        auth_basic "Elasticsearch Authentication";
        auth_basic_user_file /etc/elasticsearch/user.pwd;
    }

    location /app2 {
        rewrite ^/(.*) /$1 break;
        proxy_ignore_client_abort on;
        proxy_pass http://localhost:3002;
        proxy_redirect http://localhost:3002 https://es.domain.com/app2;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        auth_basic "Elasticsearch Authentication";
        auth_basic_user_file /etc/elasticsearch/user.pwd;
    }
}
Please refer to this link:
http://www.minvolai.com/blog/2014/08/Setting-up-a-Secure-Single-Node-Elasticsearch-server-behind-Nginx/Setting-up-a-Secure-Single-Node-Elasticsearch-server-behind-Nginx/
You can set up AWS CloudFront and configure each application as an origin. It provides the flexibility to route from a single domain (set up for CloudFront) to different Express apps, and it also allows you to cache static content paths at edge locations.