I'm running a remote React server behind an NGINX load balancer.
When I use HTTP, I have no issues. But when I set HTTPS=true in my React environment, I get a Bad Gateway error when I try to open the page.
The NGINX log says the following:
2021/07/06 10:01:07 [error] 10365#0: *62 upstream prematurely closed connection while reading response header from upstream, client: 155.4.218.180, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://10.50.3.152:80/favicon.ico", host: "remdent.com", referrer: "http://remdent.com/"
And 3 more similar messages claiming upstream was prematurely closed.
How do I remedy this?
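One likely cause (an assumption, since the question doesn't include the NGINX config): the log shows NGINX forwarding plain HTTP to http://10.50.3.152:80, but with HTTPS=true the dev server now expects TLS, so it closes the cleartext connection. A minimal sketch of a location block that proxies over TLS instead, with the upstream address taken from the log and the port a placeholder to adjust:

```nginx
location / {
    # HTTPS=true makes the dev server speak TLS, so proxy with https://
    proxy_pass https://10.50.3.152:443;
    proxy_ssl_verify off;            # the dev certificate is self-signed
    proxy_set_header Host $host;
}
```

The alternative is to keep the dev server on plain HTTP and let NGINX alone terminate TLS, which avoids the self-signed-certificate hop entirely.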
I'm getting a proxy error
Proxy error: Could not proxy request /api/auth/signin from localhost:51171 to http://localhost:3000/
I have noticed that on startup the server is running on two different ports...
info: Microsoft.AspNetCore.SpaServices[0]
Starting create-react-app server on port 51171...
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
In the package.json I defined the proxy as follows:
"proxy": "http://localhost:3000"
Changed the proxy to: "proxy": "http://localhost:5000" and now it's working...
Still don't know why something is running on 51171.
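For what it's worth (my reading of the SpaServices log above, not something the log states outright): the ASP.NET Core SPA middleware starts the create-react-app dev server on its own port (51171 here) and fronts it from the host on port 5000, so CRA's proxy field has to point back at the .NET host rather than at 3000. The working fragment, for reference:

```json
{
  "proxy": "http://localhost:5000"
}
```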
I have installed Varnish with Apache2, set it up using Apache's HTTP proxy module, and used the headers below to fetch the data over HTTP and serve it over HTTPS through the reverse proxy.
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:80/
ProxyPassReverse / http://127.0.0.1:80/
RequestHeader set X-Forwarded-Port "443"
RequestHeader set X-Forwarded-Proto "https"
But the issue I am facing with this setup is a browser error: content loaded over HTTP from an HTTPS page is blocked.
Mixed Content: The page at '' was loaded over HTTPS, but
requested an insecure stylesheet ''. This request has been
blocked; the content must be served over HTTPS.
Please help me understand where I am going wrong and how I can make this work.
Thank you in advance.
There's not a whole lot of context about the setup and the configuration, but based on the information you provided I'm going to assume you're using Apache to first terminate the TLS connection and then forward that traffic to Varnish.
I'm also assuming Apache is configured as the backend in Varnish, listening on a port like 8080, whereas Varnish is on 80 and the HTTPS Apache vhost is on 443.
Vary header
The one thing that might be missing in your setup is a cache variation based on the X-Forwarded-Proto header.
I would advise you to set that cache variation using the following configuration:
Header append Vary "X-Forwarded-Proto"
This uses mod_headers and can either be set in your .htaccess file or your vhost configuration.
It should allow Varnish to be aware of the variations based on the Vary: X-Forwarded-Proto header and store a version for HTTP and one for HTTPS.
This will prevent HTTP content being stored when HTTPS content is requested and vice versa.
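If you would rather enforce the split in Varnish itself instead of relying on the backend's Vary header, a minimal VCL 4.0 sketch that mixes the protocol into the cache key (assuming Apache sets X-Forwarded-Proto as shown earlier):

```vcl
vcl 4.0;

sub vcl_hash {
    # Store separate cache objects for HTTP and HTTPS requests
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }
}
```

The Vary-header approach is usually preferable because it keeps the caching policy with the content, but the hash variant works even when you can't touch the backend configuration.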
A good way to simulate the issue
If you want to make sure the issue behaves as I'm expecting it to, please perform a test using the following steps:
Clear your cache through sudo varnishadm ban obj.status "!=" 0
Run varnishlog -g request -q "ReqUrl eq '/'" to filter logs for the homepage
Call the HTTP version of the homepage and ensure it's stored in the cache
Capture the log output for this transaction and store it somewhere
Call that same page over HTTPS and check whether or not the mixed content errors occur
Capture the log output for this transaction and store it somewhere
Then fix the issue through the Vary: X-Forwarded-Proto header and try the test case again.
In case of problems, just add the 2 log transactions to your question (1 for the miss, 1 for the hit) and I'll examine them for you.
I have several Firebase-Queue NodeJS instances running in my project on App Engine.
The instances seem to be constantly running, producing errors after every GET request.
13:33:36.078
{"method":"GET","latencySeconds":"0.000","referer":"-","host":"-","user":"-","code":"502","remote":"130.211.0.96","agent":"GoogleHC/1.0","path":"/_ah/health","size":"166"}
13:33:36.421
{"method":"GET","latencySeconds":"0.000","referer":"-","host":"-","user":"-","code":"502","remote":"130.211.1.229","agent":"GoogleHC/1.0","path":"/_ah/health","size":"166"}
13:33:37.000
[error] 32#0: *80631 connect() failed (111: Connection refused) while connecting to upstream, client: 130.211.1.11, server: , request: "GET /_ah/health HTTP/1.1", upstream: "http://172.18.0.2:8080/_ah/health", host: "10.128.0.5"
13:33:37.000
[error] 32#0: *80633 connect() failed (111: Connection refused) while connecting to upstream, client: 130.211.3.85, server: , request: "GET /_ah/health HTTP/1.1", upstream: "http://172.18.0.2:8080/_ah/health", host: "10.128.0.5"
My app.yaml file when I deploy my Node apps looks like this:
runtime: nodejs
env: flex
service: album-queue
skip_files:
  - ^(node_modules)
handlers:
  - url: .*
    script: build/index.js
I think it must have something to do with all these GET requests it's trying to do internally, but I don't know how to stop them / fix it. My bills are racking up fairly quickly, so it would be pretty nice to get this fixed >_<
App Engine Flex doesn't scale to zero like the standard environment does. There will always be at least one instance running (default is actually 2). The requests you see are the normal health checks.
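If the cost of idle instances is the concern, the instance count can be capped in app.yaml. A sketch using the flexible environment's scaling settings (the service name is taken from the question; adjust to taste):

```yaml
runtime: nodejs
env: flex
service: album-queue
# Pin the service to a single instance instead of the default of 2
manual_scaling:
  instances: 1
```

Separately, the 502s on /_ah/health mean the app itself isn't answering on port 8080, so it's worth confirming the Node process listens on the port App Engine expects.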
I'm working on an Angular-based SPA served by Nginx and HTTPD; I recently realized that some unparsed AngularJS expressions are being logged in the Nginx error log:
2016/03/24 10:47:53 [error] 63879#0: *2639 open() "/var/www/mysite/assets/css/png/{{ client.logo }}_bw.png" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: example.com, request: "GET /css/png/{{%20client.logo%20}}_bw.png HTTP/1.1", host: "www.example.com", referrer: "https://www.example.com/my-page"
2016/03/24 10:48:34 [error] 63879#0: *2789 open() "/var/www/mysite/assets/css/png/{{ src }}-small.png" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: example.com, request: "GET /css/png/{{%20src%20}}-small.png HTTP/1.1", host: "www.example.com", referrer: "https://www.example.com/"
2016/03/24 10:48:37 [error] 63879#0: *2813 open() "/var/www/mysite/assets/css/png/{{ src }}-small.png" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: example.com, request: "GET /assets/css/png/%7B%7B%20src%20%7D%7D-small.png HTTP/1.1", host: "www.example.com", referrer: "https://www.example.com/my-page"
On the website the expressions are correctly evaluated and the images are shown normally, though there may indeed be a short processing time. How do I prevent Nginx from logging these expressions before they are evaluated?
You need to use ng-src when displaying images in your application - otherwise the browser will fire off requests for the unparsed expression to the server.
Wrong:
<img src="assets/css/png/{{ client.logo }}_bw.png" />
Right:
<img ng-src="assets/css/png/{{ client.logo }}_bw.png" />
More information: https://docs.angularjs.org/api/ng/directive/ngSrc
I am having issues setting up a BOSH service for a webchat. As XMPP server I'm using OpenFire and I'm already able to connect to the server using the Pidgin client. What I've done is the following:
First of all, I enabled the proxy using a2enmod proxy proxy_http. Then I edited proxy.conf and added these lines at the end:
ProxyVia On
ProxyErrorOverride On
ProxyPass /http-bind http://localhost:7070/http-bind
ProxyPassReverse /http-bind http://localhost:7070/http-bind
However, when I try to reach http://example.com/http-bind I get the following:
HTTP ERROR: 400
Problem accessing /http-bind/. Reason:
Bad Request
Powered by Jetty://
What am I doing wrong?
That is not actually an error.
The fact that you get a response at all means your proxy settings are correct. The /http-bind endpoint accepts only POST requests carrying XML data; the plain GET your browser sends is not a valid BOSH request, which is why the Openfire server returns the 400.
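To check the endpoint without a browser, you can send a minimal BOSH session-creation request over POST; a sketch, where example.com, the rid value, and the to domain are placeholders for your own setup:

```shell
# A BOSH request must be a POST whose body is an XML <body/> element
curl -i -X POST http://example.com/http-bind \
  -H 'Content-Type: text/xml; charset=utf-8' \
  -d '<body rid="1" to="example.com" wait="60" hold="1" ver="1.6" xmlns="http://jabber.org/protocol/httpbind"/>'
```

A well-configured setup answers this with a 200 and a <body/> element containing a session ID, whereas a browser GET keeps producing the 400 you saw.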