Connection problem using mobile hotspot on Linux PC - WireGuard

I am using a mobile hotspot on my Linux PC to access the internet, but after running the wg-quick command I can't reach the internet anymore. I tested the same conf file on my Android phone over a cellular data connection and it worked fine. I also tested the same conf file on my Linux PC with another Wi-Fi connection and there was no problem. But I don't know why it fails when I use the mobile hotspot with a cellular data connection.
Client Conf:
[Interface]
PrivateKey = <private_key>
Address = 10.9.5.210/24
DNS = 8.8.8.8
[Peer]
PublicKey = <public_key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <server_address>:2650
PersistentKeepalive = 25
wg log:
interface: us
  public key: ....AsmklKG6v....=
  private key: (hidden)
  listening port: 58604
  fwmark: 0xca6c

peer: ...ELnTMu9flhUT...=
  endpoint: <server_address>:2650
  allowed ips: 0.0.0.0/0, ::/0
  latest handshake: 1 minute, 57 seconds ago
  transfer: 1.25 KiB received, 57.74 KiB sent
  persistent keepalive: every 25 seconds
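No answer is included above, but since the handshake succeeds while regular traffic does not, one commonly suspected culprit on cellular hotspots is MTU. Purely as a hedged diagnostic sketch (not from the original post), lowering the tunnel MTU on the WireGuard interface shown in the wg output ("us"; substitute the actual interface name) and retesting might look like this:
# Assumption: the problem is MTU-related; 1280 is a conservative test value
sudo ip link set dev us mtu 1280
ping -c 3 8.8.8.8
If that helps, the equivalent persistent change would be an MTU = 1280 line in the [Interface] section of the client conf.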

Related

Istio istio-ingressgateway throwing "no cluster match for URL '/'"

I have Istio installed on docker-desktop. In general it works fine. I'm attempting to set up an HTTP-based match on a very simple VirtualService, but I'm only able to get 404s. Here are the technical details.
My endpoint image is hashicorp/http-echo, which uses the net/http library to create a trivial HTTP server that returns a message you supply. It works just fine and couldn't be more trivial.
Here is my pod and service configuration:
kind: Pod
apiVersion: v1
metadata:
  name: a
  labels:
    app: a
    version: v1
spec:
  containers:
  - name: a
    image: hashicorp/http-echo
    args:
    - "-text='this is service a: v1'"
    - "-listen=:6789"
---
kind: Service
apiVersion: v1
metadata:
  name: a-service
spec:
  selector:
    app: a
    version: v1
  ports:
  # Default port used by the image
  - port: 6789
    targetPort: 6789
    name: http-echo
And here is an example of the service working, by curling it from another pod in the same namespace:
/ # curl 10.1.0.29:6789
'this is service a: v1'
And here's the pod running in the docker-desktop cluster:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
a 2/2 Running 0 45h 10.1.0.29 docker-desktop <none> <none>
And here is the service that selects the pod:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
a-service ClusterIP 10.101.113.9 <none> 6789/TCP 45h app=a,version=v1
Here is my istio-ingressgateway specification via Helm (it seems to work fine). I list it because it is the only part of the installation I've changed, and the change itself is trivial: I just added a single new port block, which appears to work since the gateway is indeed listening on it:
gateways:
  istio-ingressgateway:
    name: istio-ingressgateway
    labels:
      app: istio-ingressgateway
      istio: ingressgateway
    ports:
    - port: 15021
      targetPort: 15021
      name: status-port
      protocol: TCP
    - port: 80
      targetPort: 8080
      name: http2
      protocol: TCP
    - port: 443
      targetPort: 8443
      name: https
      protocol: TCP
    - port: 6789
      targetPort: 6789
      name: http-echo
      protocol: TCP
And here is the kubectl get svc on the istio-ingressgateway just to show that indeed I have an external-ip and things look normal:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
istio-ingressgateway LoadBalancer 10.109.63.15 localhost 15021:30095/TCP,80:32454/TCP,443:31644/TCP,6789:30209/TCP 2d16h app=istio-ingressgateway,istio=ingressgateway
istiod ClusterIP 10.96.155.154 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d16h app=istiod,istio=pilot
Here's my virtualservice:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  - 'a-service.default.svc.cluster.local'
  gateways:
  - gateway
  http:
  - match:
    - port: 6789
    route:
    - destination:
        host: 'a-service.default.svc.cluster.local'
        port:
          number: 6789
Here's my gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 6789
      name: http-echo
      protocol: http
    hosts:
    - 'a-service.default.svc.cluster.local'
And then finally here's a debug log from the istio-ingressgateway showing that, despite all these seemingly correct pod, service, gateway, virtualservice and ingressgateway configs, the ingressgateway only returns 404s:
2021-09-27T15:34:41.001773Z debug envoy connection [C367] closing data_to_write=143 type=2
2021-09-27T15:34:41.001779Z debug envoy connection [C367] setting delayed close timer with timeout 1000 ms
2021-09-27T15:34:41.001786Z debug envoy pool [C7] response complete
2021-09-27T15:34:41.001791Z debug envoy pool [C7] destroying stream: 0 remaining
2021-09-27T15:34:41.001925Z debug envoy connection [C367] write flush complete
2021-09-27T15:34:41.002215Z debug envoy connection [C367] remote early close
2021-09-27T15:34:41.002279Z debug envoy connection [C367] closing socket: 0
2021-09-27T15:34:41.002348Z debug envoy conn_handler [C367] adding to cleanup list
2021-09-27T15:34:41.179213Z debug envoy conn_handler [C368] new connection from 192.168.65.3:62904
2021-09-27T15:34:41.179594Z debug envoy http [C368] new stream
2021-09-27T15:34:41.179690Z debug envoy http [C368][S14851390862777765658] request headers complete (end_stream=true):
':authority', '0:6789'
':path', '/'
':method', 'GET'
'user-agent', 'curl/7.64.1'
'accept', '*/*'
'version', 'TESTING'
2021-09-27T15:34:41.179708Z debug envoy http [C368][S14851390862777765658] request end stream
2021-09-27T15:34:41.179828Z debug envoy router [C368][S14851390862777765658] no cluster match for URL '/'
2021-09-27T15:34:41.179903Z debug envoy http [C368][S14851390862777765658] Sending local reply with details route_not_found
2021-09-27T15:34:41.179949Z debug envoy http [C368][S14851390862777765658] encoding headers via codec (end_stream=true):
':status', '404'
'date', 'Mon, 27 Sep 2021 15:34:41 GMT'
'server', 'istio-envoy'
Here's istioctl proxy-status:
istioctl proxy-status ⎈ docker-desktop/istio-system
NAME CDS LDS EDS RDS ISTIOD VERSION
a.default SYNCED SYNCED SYNCED SYNCED istiod-b9c8c9487-clkkt 1.11.3
istio-ingressgateway-5797689568-x47ck.istio-system SYNCED SYNCED SYNCED SYNCED istiod-b9c8c9487-clkkt 1.11.3
And here's istioctl pc cluster $ingressgateway:
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
a-service.default.svc.cluster.local 6789 - outbound EDS
agent - - - STATIC
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 6789 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
And here's istioctl pc listeners on the same ingress:
ADDRESS PORT MATCH DESTINATION
0.0.0.0 6789 ALL Route: http.6789
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
And finally here's istioctl routes:
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
http.6789 a-service.default.svc.cluster.local /* a-service.default
* /stats/prometheus*
* /healthz/ready*
I've tried numerous different configurations, from changing selectors to making sure port names match to trying different ports. If I change my VirtualService from http to tcp, the port match works great. But because my ultimate goal is to do more advanced header-based matching, I need to match on http. Any insight would be greatly appreciated!
It turned out the problem was that I had specified my service in the hosts directive of both my Gateway and my VirtualService. Specifying a service as a hosts entry is almost certainly never correct, though one can "work around" it by adding a Host header to curl, i.e. curl ... -H 'Host: kubernetes.docker.internal' .... The correct solution is simply to add proper host entries, e.g. - mysite.mycompany.com. Hosts in this context are like vhosts in Apache: they are FQDNs that incoming requests are matched against. The host in the VirtualService destination, however, is the service, which is a bit convoluted and is what threw me.
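For illustration only, here is a minimal sketch of the corrected Gateway and VirtualService described above; mysite.mycompany.com is a hypothetical external hostname, not something taken from the original cluster:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 6789
      name: http-echo
      protocol: HTTP
    hosts:
    # An FQDN that requests are matched against, not a cluster service
    - 'mysite.mycompany.com'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  - 'mysite.mycompany.com'
  gateways:
  - gateway
  http:
  - match:
    - port: 6789
    route:
    - destination:
        # The destination host is still the in-cluster service
        host: a-service.default.svc.cluster.local
        port:
          number: 6789
A quick test against the docker-desktop load balancer would then send the matching Host header explicitly, for example: curl -H 'Host: mysite.mycompany.com' http://localhost:6789/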

Unable to SSH into WireGuard IP until I ping another server from inside the server

I have WireGuard set up on a machine (call it MachineA, with the IP 10.42.0.19). I have my laptop configured with the IP 10.42.0.15; call it LaptopB. I am able to SSH into MachineA from LaptopB when both peers are connected, using ssh root@MachineA. Then, if I wait a while, I can no longer SSH into MachineA from LaptopB; the same command, ssh root@MachineA, just hangs.
Using -vvvv shows me this:
$ ssh -vvvv root@10.42.0.19
OpenSSH_8.3p1 Ubuntu-1ubuntu0.1, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/xrd/.ssh/config
...
debug2: ssh_connect_direct
debug1: Connecting to 10.42.0.19 [10.42.0.19] port 22.
And, it never connects.
There is a simple fix: from inside the machine, ping any other WireGuard machine on the network. MachineA is a DigitalOcean droplet. If I use the web console to log in and then ping any other peer on the network (say 10.42.0.4), then immediately after the ping starts, the SSH connection completes.
How do I troubleshoot this?
I have not restarted WireGuard on either LaptopB or MachineA. Both appear to be connected.
The wg0.conf on both ends looks more or less like this:
[Interface]
Address = 10.42.0.19/24
PrivateKey = DontYouWishYouHadThis
DNS = 10.42.0.1,8.8.8.8
[Peer]
PublicKey = SomePublicKeyIsHere
AllowedIPs = 10.42.0.0/24
Endpoint = 33.33.33.33:51280
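No answer is included above, but a pattern like this (the machine is reachable only after it sends traffic itself) often points at a stale endpoint or NAT/connection-tracking mapping that is only refreshed when MachineA transmits. Purely as a hedged sketch under that assumption, the usual mitigation is a keepalive on MachineA's [Peer] entry:
[Peer]
PublicKey = SomePublicKeyIsHere
AllowedIPs = 10.42.0.0/24
Endpoint = 33.33.33.33:51280
# Assumption: periodic keepalives keep MachineA reachable without manual pings
PersistentKeepalive = 25
The same setting can be applied to a running interface without editing the conf file, e.g. wg set wg0 peer <public-key> persistent-keepalive 25.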

Apache Solr does not respond only from a specific IP in DigitalOcean

I cannot get the Apache Solr installation to respond.
Briefly: I have a WordPress droplet on DigitalOcean, I installed Apache Solr, and it appears to be running correctly.
$ service solr status
● solr.service - LSB: Controls Apache Solr as a Service
   Loaded: loaded (/etc/init.d/solr; bad; vendor preset: enabled)
   Active: active (exited) since Tue 2019-06-18 20:20:55 UTC; 1 day 9h ago
     Docs: man:systemd-sysv-generator(8)
  Process: 4342 ExecStop=/etc/init.d/solr stop (code=exited, status=0/SUCCESS)
  Process: 4458 ExecStart=/etc/init.d/solr start (code=exited, status=0/SUCCESS)
    Tasks: 0
   Memory: 0B
      CPU: 0
The IP xxx.xx.xxx.xxx is my droplet's IP, and I want queries to Apache Solr to be allowed only from this IP.
$ ufw status
WARN: / is group writable!
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
443 ALLOW Anywhere
80 ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
8983 ALLOW xxx.xx.xxx.xxx
22 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
80 (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
If I check what is listening on port 8983 to see whether it is configured as intended:
$ sudo netstat -lntp | grep 8983
tcp 0 0 127.0.0.1:8983 0.0.0.0:* LISTEN 4507/java
Pinging from inside my droplet (over SSH):
$ ping http://localhost:8983/solr
ping: unknown host http://localhost:8983/solr
How can I get a response from Apache Solr? What is happening?
First, what you're seeing is caused by Solr only listening to the loopback interface by default. This is to avoid people inadvertently exposing Solr directly to the internet (which it is not meant for). The netstat command shows this:
tcp 0 0 --> 127.0.0.1:8983 <--
To change what interface Solr listens to, you can add -Djetty.host=<ip> to the SOLR_OPTS used when starting Solr. How this is done depends on how you've installed Solr, but usually through solr.in.sh or something similar.
Second: Make sure your firewall is set to disallow connections by default, otherwise the single Allow for the port won't change anything. Test this from outside your host to make sure you're not exposing Solr to the whole internet.
Third: ping does not work like that. ping <ip> is how you'd use the ping utility; it sends an ICMP packet to the IP (or a hostname that resolves to an IP) and waits for a response. It won't work against an HTTP URL, a given port, or an endpoint path.
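As a minimal sketch of the points above (assuming a package-style install that reads /etc/default/solr.in.sh; the exact file and path vary by installation, and the droplet IP is the placeholder from the question):
# /etc/default/solr.in.sh -- bind Jetty to the droplet address instead of only the loopback
SOLR_OPTS="$SOLR_OPTS -Djetty.host=xxx.xx.xxx.xxx"

# restart and test with an HTTP client rather than ping
sudo service solr restart
curl "http://xxx.xx.xxx.xxx:8983/solr/admin/info/system?wt=json"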

Xdebug with NetBeans on Windows 10 doesn't connect to remote host

After upgrading to Windows 10, Xdebug is unable to connect to the remote host. The log says:
I: Checking remote connect back address.
I: Remote address found, connecting to ::1:9001.
E: Could not connect to client. :-(
The following is the Xdebug configuration:
xdebug.remote_enable = true
xdebug.remote_handler=dbgp
xdebug.remote_connect_back = 1
xdebug.remote_host=localhost
xdebug.remote_port=9001
xdebug.idekey=netbeans-xdebug
xdebug.remote_log="D:/wamp/tmp/xdebug.log"
output_buffering=off
xdebug.profiler_enable = 0
I didn't forget to set the debugger port to 9001 in the NetBeans options.
What did I miss?
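No answer is included above. One plausible reading of the log, given that it connects to ::1:9001, is that remote_connect_back resolves localhost to the IPv6 loopback while NetBeans listens only on IPv4. Purely as a hedged sketch under that assumption, a php.ini change to test would be:
; Assumption: force Xdebug to dial the IPv4 loopback instead of using connect-back
xdebug.remote_connect_back = 0
xdebug.remote_host = 127.0.0.1
xdebug.remote_port = 9001
Remember to restart the web server (or PHP service) after editing the ini file so the change takes effect.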

Why does Nginx give a 502 error only for mobile devices?

Using Nginx, I'm getting the error:
Error 502 - Bad Request
The server could not resolve your request for uri: http://domain.name/file/path
Oddly, I only get this error when my phone is using data from my cell carrier. The server serves everything just fine when I am using my phone on Wi-Fi or when I'm using a desktop computer. It even works when I am using my iPad connected to my phone via Wi-Fi, with my phone acting as a mobile hotspot.
The 502 error code suggests that there's an issue with reverse proxying or serving requests with php-fpm. I'm doing neither of these.
Because this error is happening only under specific circumstances, I'm thinking it has to be something with the request my phone is sending. (Nexus 5, Chrome, Android Lollipop)
My nginx.conf and other configuration files are passing tests. I used:
sudo nginx -t
and it said "the configuration file syntax is okay" and "configuration file test is successful."
What could be going on?
After triple-checking my Nginx configuration, I had the idea to look at all TCP activity on port 80 of my server.
I installed tcpdump:
sudo apt-get install tcpdump
Then ran it, looking only for port 80 tcp traffic:
sudo tcpdump 'tcp port 80' -i eth0
I noticed that all other traffic was just 'IP', but when I sent a request from my phone, it was 'IP6'.
My server wasn't IPv6-enabled, but that's an easy fix with an additional listen directive:
listen [::]:80;
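For context, a minimal sketch of a server block listening on both IPv4 and IPv6 (the server_name and root values are placeholders, not taken from the original configuration):
server {
    listen 80;         # IPv4
    listen [::]:80;    # IPv6 - the directive that was missing here

    server_name domain.name;   # placeholder
    root /var/www/html;        # placeholder

    location / {
        try_files $uri $uri/ =404;
    }
}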
