I have an API web server with two API versions coexisting; they are routed by URL path, /shine/v7 and /shine/v8.
I set this up with HAProxy, but when I request /shine/v7/admin it sometimes goes to shine_v7_backend and sometimes to shine_v8_backend. I don't know why this happens; can anyone help?
Here is my haproxy.conf:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
daemon
defaults
log global
option http-server-close
frontend http *:5000
mode tcp
timeout client 86400000
default_backend shine_v8_backend
acl shine_v7 path_dir /shine/v7
use_backend shine_v7_backend if shine_v7
backend shine_v8_backend
mode tcp
option httpclose
balance roundrobin
timeout server 86400000
timeout connect 5000
server host_0 127.0.0.1:5001
backend shine_v7_backend
mode tcp
option httpclose
balance roundrobin
timeout server 86400000
timeout connect 5000
server host_0 127.0.0.1:5002
I tried requesting /shine/v7/admin many times; here are the logs:
$ sudo haproxy -f haproxy.conf -d
Available polling systems :
kqueue : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use kqueue.
Using kqueue() as the polling mechanism.
00000000:http.accept(0004)=0006 from [127.0.0.1:55026]
00000000:shine_v7_backend.srvcls[0006:0007]
00000000:shine_v7_backend.clicls[0006:0007]
00000000:shine_v7_backend.closed[0006:0007]
00000001:http.accept(0004)=0006 from [127.0.0.1:55028]
00000001:shine_v8_backend.srvcls[0006:0007]
00000001:shine_v8_backend.clicls[0006:0007]
00000001:shine_v8_backend.closed[0006:0007]
00000002:http.accept(0004)=0006 from [127.0.0.1:55030]
00000002:shine_v7_backend.srvcls[0006:0007]
00000002:shine_v7_backend.clicls[0006:0007]
00000002:shine_v7_backend.closed[0006:0007]
00000003:http.accept(0004)=0006 from [127.0.0.1:55032]
00000003:shine_v8_backend.srvcls[0006:0007]
00000003:shine_v8_backend.clicls[0006:0007]
00000003:shine_v8_backend.closed[0006:0007]
00000004:http.accept(0004)=0006 from [127.0.0.1:55034]
00000004:shine_v7_backend.srvcls[0006:0007]
00000004:shine_v7_backend.clicls[0006:0007]
00000004:shine_v7_backend.closed[0006:0007]
I have found the problem: I can't use mode tcp here, I have to use mode http :( With mode tcp, HAProxy only forwards raw TCP connections and never parses the HTTP request, so the path-based ACL can't be evaluated to route requests.
After the fix, the haproxy.conf is:
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
daemon
defaults
log global
mode http
option httplog
option dontlognull
option forwardfor
option http-server-close
frontend http *:5000
timeout client 86400000
acl shine_v8 path_dir /shine/v8
acl shine_v7 path_dir /shine/v7
use_backend shine_v8_backend if shine_v8
use_backend shine_v7_backend if shine_v7
backend shine_v8_backend
option httpclose
balance roundrobin
timeout server 86400000
timeout connect 5000
server host_0 127.0.0.1:5001
backend shine_v7_backend
option httpclose
balance roundrobin
timeout server 86400000
timeout connect 5000
server host_0 127.0.0.1:5002
It works fine now.
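To sanity-check the routing, a couple of curl calls against the frontend should now land on the right backend every time (assuming the frontend still listens on port 5000 as in the config above):

curl -i http://127.0.0.1:5000/shine/v7/admin
curl -i http://127.0.0.1:5000/shine/v8/admin

Every /shine/v7/... request should go to 127.0.0.1:5002 and every /shine/v8/... request to 127.0.0.1:5001, no matter how many times you repeat them.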
Related
I have a CRA project which, in development mode (yarn start), is slow because many requests are fired at once and some of them stay stalled for a long time. In production (deployed on an Apache server) this problem of stalled requests holding back later ones does not occur.
One difference I can spot: locally, requests go from localhost to the API endpoint via a proxy configuration in package.json over HTTP/1, while the deployed variant runs over HTTP/2, which allows more requests to be handled simultaneously.
Does this HTTP/1-versus-HTTP/2 theory make any sense for my problem?
I can't find a way to make my localhost proxy to my remote server over HTTP/2.
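For context, the proxy setup I mean is just the standard CRA dev-server proxy field in package.json; a minimal excerpt looks like this (the API host is a placeholder, not my real endpoint):

{
  "proxy": "https://api.example.com"
}

As far as I know, this dev proxy talks plain HTTP/1.x to the target, which is what made me suspect the HTTP/1-versus-HTTP/2 difference in the first place.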
We are facing the error below in Vespa; we got this issue after restarting the cluster.
1600455444.680758 10.10.000.00 1030/1 container Container.com.yahoo.filedistribution.fileacquirer.FileAcquirerImpl info Retrying waitFor for file 'e0ce64d459828eb0': 103 -- Request timed out after 60.0 seconds.
1600455446.819853 10.10.000.00 32752/146 configproxy configproxy.com.yahoo.vespa.filedistribution.FileReferenceDownloader info Request failed. Req: request filedistribution.serveFile(e0ce64d459828eb0,0)\nSpec: tcp/10.10.000.00:19070, error code: 103, set error for connection and use another for next request
This is the second time we have faced this issue; the first time we left it idle and it resolved itself, but this time it is persistent.
It looks like the configproxy is unable to talk to the config server (which should be listening on port 19070 on the same host: Spec: tcp/10.10.000.00:19070). Is the config server really running and listening on port 19070 on this host? Try running the vespa-config-status script to see if all is well with the config system.
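A quick, rough way to check this on the affected host (the port is the one from the log line above) would be something like:

# overall health of the config system
vespa-config-status
# is anything actually listening on the config server port?
netstat -lntp | grep 19070

If vespa-config-status reports errors or nothing is listening on 19070, the config server itself needs to be brought back up before the file distribution errors will go away.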
I cannot get the Apache Solr installation to respond.
Briefly: I have a WordPress droplet on DigitalOcean, I installed Apache Solr, and it appears to be running correctly.
$ service solr status
● solr.service - LSB: Controls Apache Solr as a Service
Loaded: loaded (/etc/init.d/solr; bad; vendor preset: enabled)
Active: active (exited) since Tue 2019-06-18 20:20:55 UTC; 1 day 9h ago
Docs: man:systemd-sysv-generator(8)
Process: 4342 ExecStop=/etc/init.d/solr stop (code=exited, status=0/SUCCESS)
Process: 4458 ExecStart=/etc/init.d/solr start (code=exited, status=0/SUCCESS)
Tasks: 0
Memory: 0B
CPU: 0
The IP xxx.xx.xxx.xxx is my droplet IP, and I want queries to Apache Solr to be allowed only from this IP.
$ ufw status
WARN: / is group writable!
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
443 ALLOW Anywhere
80 ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
8983 ALLOW xxx.xx.xxx.xxx
22 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
80 (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
Filtering the listening sockets to see how it is configured:
$ sudo netstat -lntp | grep 8983
tcp 0 0 127.0.0.1:8983 0.0.0.0:* LISTEN 4507/java
Pinging from inside my droplet (over SSH):
$ ping http://localhost:8983/solr
ping: unknown host http://localhost:8983/solr
How can I get a response from Apache Solr? What is happening?
First, what you're seeing is caused by Solr only listening to the loopback interface by default. This is to avoid people inadvertently exposing Solr directly to the internet (which it is not meant for). The netstat command shows this:
tcp 0 0 --> 127.0.0.1:8983 <--
To change what interface Solr listens on, you can add -Djetty.host=<ip> to the SOLR_OPTS used when starting Solr. How this is done depends on how you've installed Solr, but it is usually through solr.in.sh or something similar.
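As a rough sketch, on an installation done with the install script that typically means adding a line like this to /etc/default/solr.in.sh (the bind address below is just an example) and then restarting Solr, e.g. with sudo service solr restart:

SOLR_OPTS="$SOLR_OPTS -Djetty.host=0.0.0.0"

Binding to 0.0.0.0 listens on all interfaces; substitute a specific interface address instead if you only want Solr reachable on that one.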
Second: Make sure your firewall is set to disallow connections by default, otherwise the single Allow for the port won't change anything. Test this from outside your host to make sure you're not exposing Solr to the whole internet.
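With ufw, a sketch of that would be (double-check your existing SSH rule first so you don't lock yourself out):

sudo ufw default deny incoming
sudo ufw status verbose

The verbose status output shows the default policy, so you can confirm that only the explicitly allowed sources can reach port 8983.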
Third: ping does not work like that. ping <ip> is how you'd use the ping utility; it sends an ICMP packet to the IP (or a hostname that resolves to an IP) and waits for a response. It won't work against an HTTP URL, a specific port, or an endpoint path.
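To test the HTTP endpoint itself, curl is the right tool, for example:

curl -I http://localhost:8983/solr/

Any response at all (even a redirect) shows Solr is answering; repeating the same request from a remote machine should then only succeed from the source IP your firewall allows.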
I'm trying to deploy a JHipster application (Spring Boot + AngularJS) to Bluemix Tomcat. However, I always get this error:
Error restarting application: Start app timeout
TIP: The application must be listening on the right port. Instead of hard coding the port, use the $PORT environment variable.
The complete error on Bluemix console is:
App instance exited with guid 1c76324f-57fb-4a00-b203-499519b4367c payload:
{
"cc_partition"=>"default",
"droplet"=>"1c76324f-57fb-4a00-b203-499519b4367c",
"version"=>"0103e173-b6d3-4daa-a291-b5792c16b69b",
"instance"=>"0c09506c30764b6c921cabb9a55d9e45",
"index"=>0,
"reason"=>"CRASHED",
"exit_status"=>255,
"exit_description"=>"failed to accept connections within health check timeout",
"crash_timestamp"=>1479341938
}
Instance (index 0) failed to start accepting connections
I've already tried changing the application-dev.yml config to:
server:
port: ${VCAP_APP_PORT}
Or
server:
port: 80
However, I have not had any success. How can I pass the port variable to the JHipster configuration?
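One sketch that has worked for similar Cloud Foundry-style deployments, assuming the platform really does inject a PORT environment variable as the tip says and the app runs on its embedded server, is a placeholder with a local fallback:

server:
  port: ${PORT:8080}

Locally this falls back to 8080; on the platform, Spring resolves PORT from the environment at startup.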
Using Nginx, I'm getting the error:
Error 502 - Bad Request
The server could not resolve your request for uri: http://domain.name/file/path
Oddly, I only get this error when my phone is using data from my cell carrier. The server serves everything just fine when I am using my phone on Wi-Fi or when I'm using a desktop computer. It even works when I am using my iPad connected to my phone via Wi-Fi, with my phone acting as a mobile hotspot.
The 502 error code suggests that there's an issue with reverse proxying or serving requests with php-fpm. I'm doing neither of these.
Because this error happens only under these specific circumstances, I'm thinking it has to be something about the request my phone is sending. (Nexus 5, Chrome, Android Lollipop)
My nginx.conf and other configuration files are passing tests. I used:
sudo nginx -t
and it said "the configuration file syntax is okay" and "configuration file test is successful."
What could be going on?
After triple-checking my Nginx configuration, I had the idea to look at all TCP activity on port 80 of my server.
I installed tcpdump:
sudo apt-get install tcpdump
Then ran it, looking only for port 80 tcp traffic:
sudo tcpdump 'tcp port 80' -i eth0
I noticed that all other traffic was just 'IP', but when I sent a request from my phone, it was 'IP6'.
My Nginx wasn't listening on IPv6, but that's an easy fix with an additional listen directive:
listen [::]:80;
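In context, the relevant part of my server block now looks roughly like this (server_name and root are illustrative, not my real values):

server {
    listen 80;
    listen [::]:80;
    server_name domain.name;
    root /var/www/html;
}

After reloading Nginx (sudo nginx -s reload), requests arriving over IPv6 are now accepted, which is exactly what was failing from my phone's carrier connection.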