How to route mssql traffic through an Istio egress gateway

I am trying to run a .NET 6 demonstration application in an Istio service mesh (Istio 1.16.1 in an AKS cluster). This application uses a SQL Server 2019 instance located outside the cluster, and I would like to route all outgoing traffic, including mssql, through an egress gateway.
Please note this application also uses OpenID Connect and keytabs (Kerberos traffic); I have successfully managed to route those requests through the egress gateway, but not the mssql traffic.
I have created the service mesh with istioctl and the following configuration file:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
  components:
    pilot:
      k8s:
        nodeSelector:
          agentpool: svcmaster
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        nodeSelector:
          kubernetes.io/os: linux
    egressGateways:
    - name: istio-egressgateway
      enabled: true
      k8s:
        nodeSelector:
          kubernetes.io/os: linux
Here is the ServiceEntry for the database:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mssql-contoso-com
  namespace: linux
spec:
  hosts:
  - mssql.contoso.com
  addresses:
  - 10.1.0.5
  ports:
  - number: 1433
    name: mssql
    protocol: TLS
  - number: 443
    name: tls
    protocol: TLS
  location: MESH_EXTERNAL
  resolution: DNS
Here is the gateway (it also includes the hosts for the OIDC and Kerberos traffic):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: egress-gateway
  namespace: linux
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - "adfs.contoso.com"
    - "mssql.contoso.com"
    tls:
      mode: "PASSTHROUGH"
  - port:
      number: 80
      name: tcp
      protocol: TCP
    hosts:
    - "controller.contoso.com"
And finally, the VirtualService. I have not defined a DestinationRule because it does not seem to be needed: the OIDC and Kerberos traffic is routed correctly without one, and adding one out of desperation did not solve the issue.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "outgoing-mssql-traffic"
  namespace: linux
spec:
  hosts:
  - mssql.contoso.com
  gateways:
  - egress-gateway
  - mesh
  tls:
  - match:
    - gateways:
      - mesh
      port: 1433
      sniHosts:
      - mssql.contoso.com
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - egress-gateway
      port: 443
      sniHosts:
      - mssql.contoso.com
    route:
    - destination:
        host: mssql.contoso.com
        port:
          number: 1433
      weight: 100
Regarding the details of the application's call to SQL Server, I am using a regular SqlConnection with the following connection string:
Server=mssql.contoso.com;Initial Catalog=Demonstration;Integrated Security=true;TrustServerCertificate=true
As a result, I get the following error in the application log:
Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught)
---> System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer.
---> System.Net.Sockets.SocketException (104): Connection reset by peer
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
Somehow the TLS handshake fails. When consulting the logs of the sidecar container and of the egress gateway, I cannot see the traffic to the database. I have also monitored the traffic on the SQL Server machine with Wireshark, and I cannot see any TCP traffic on port 1433.
The application works fine when the VirtualService is deleted, so the issue is really related to the routing through the egress gateway.
Any help or insight would be appreciated.

So I found out the reason a bit by accident while trying different configurations and checking the Envoy logs. The Istio egress gateway service exposes only two ports by default: 80 and 443. It seems that each port can only serve one VirtualService, on a first-come-first-served basis. I had declared the VirtualService for mssql last, so it was not served.
Furthermore, I had to switch to a TCP route; I failed to get it working with a TLS route.
Here is the ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mssql-contoso-com
  namespace: linux
spec:
  hosts:
  - mssql.contoso.com
  addresses:
  - 10.1.0.5
  ports:
  - number: 1433
    name: mssql
    protocol: TCP
  - number: 80
    name: tls
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS
And here is the VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "outgoing-mssql-traffic"
  namespace: linux
spec:
  hosts:
  - mssql.contoso.com
  gateways:
  - egress-gateway
  - mesh
  tcp:
  - match:
    - gateways:
      - mesh
      port: 1433
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - egress-gateway
      port: 80
    route:
    - destination:
        host: mssql.contoso.com
        port:
          number: 1433
      weight: 100
I guess that I will have to either deploy several egress gateways or find a way to add ports to the existing one.
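For reference, here is a rough, untested sketch of what adding a dedicated port to the existing egress gateway could look like through the IstioOperator overlay (under spec.components). The extra port name and the 8080/8443 target ports are assumptions based on the default gateway layout, and overriding k8s.service.ports replaces the default list, so the existing ports have to be repeated:
  egressGateways:
  - name: istio-egressgateway
    enabled: true
    k8s:
      service:
        ports:
        - name: http2       # default port (targetPort assumed)
          port: 80
          targetPort: 8080
        - name: https       # default port (targetPort assumed)
          port: 443
          targetPort: 8443
        - name: tcp-mssql   # hypothetical dedicated port for the mssql route
          port: 1433
          targetPort: 1433
With something like this in place, the mssql VirtualService could target its own gateway port instead of competing for 80 or 443.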

Related

Connect React Client running on local PC to Kubernetes Cluster on Azure

I have 2 deployments + services running on Azure: react client and nodejs auth.
I have registered a public IP on Azure which I added to my windows host file (= myexample.com).
Typing the URL in the browser, the client opens and requests go to auth service as expected.
Now I want to run the client locally (with npm start) but connect to auth service still running on Azure.
I removed the client from the cloud deployment (= the deployment + the service) and use the domain (= myexample.cloud) as the base URL for the axios client in my React app. To confirm, on Azure my ingress-nginx-controller of type LoadBalancer shows the aforementioned public IP as its external IP, plus ports 80:30819/TCP,443:31077/TCP.
When I ran the Client locally, it shows the correct request URL (http://myexample.cloud/api/users/signin) but I get a 403 Forbidden answer.
What am I missing? I should be able to connect to my cloud service by using the public IP, right? The error must be caused by my client, because Azure is not putting road blocks in place. I mean, it is a public IP, correct?
Update 1
Just to clarify, the 403 Forbidden is not caused by me trying to sign in with incorrect credentials. I have another api/users/health-check route that is giving me the same error.
My cloud ingress deployment. I have also tried to remove the client part (last 7 lines) to no effect.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myexample.cloud
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
      - path: /
        pathType: Prefix
        backend:
          service:
            name: client-srv
            port:
              number: 3000
my client cloud deployment+service that worked when client was running in cloud
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: client
---
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  selector:
    app: client
  ports:
  - name: client
    protocol: TCP
    port: 3000
    targetPort: 3000
my auth deployment + service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
The problem was actually CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin'
but my browser did not tell me.
After switching from Chrome to Firefox, the problem became apparent.
I had to add annotations to my ingress controller as described here: express + socket.io + kubernetes Access-Control-Allow-Origin' header
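For completeness, a minimal sketch of the kind of CORS annotations that can be set on an ingress-nginx Ingress; the allowed origin below is an assumption (wherever the local client runs), not taken from my actual setup:
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    # ingress-nginx CORS annotations; adjust the origin to where the client runs
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:3000"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
The spec.rules section stays the same as in the Ingress above.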

ImagePullBack pod status in Kubernetes when pulling public image (MS SQL Server Express)

I'm following Les Jackson's tutorial on microservices and got stuck at 05:30:00 while creating a deployment for an MS SQL Server. I've written the deployment file just as shown in the YouTube video:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2017-latest
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Express"
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - mountPath: /var/opt/mssql/data
          name: mssqldb
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: mssql
  ports:
  - name: mssql
    protocol: TCP
    port: 1433 # this is default port for mssql
    targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: mssql
  ports:
  - protocol: TCP
    port: 1433 # this is default port for mssql
    targetPort: 1433
The persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
But when I apply this deployment, the pod ends up with ImagePullBackOff status:
commands-depl-688f77b9c6-vln5v 1/1 Running 0 2d21h
mssql-depl-5cd6d7d486-m8nw6 0/1 ImagePullBackOff 0 4m54s
platforms-depl-6b6cf9b478-ktlhf 1/1 Running 0 2d21h
kubectl describe pod
Name: mssql-depl-5cd6d7d486-nrrkn
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Thu, 28 Jul 2022 12:09:34 +0200
Labels: app=mssql
pod-template-hash=5cd6d7d486
Annotations: <none>
Status: Pending
IP: 10.1.0.27
IPs:
IP: 10.1.0.27
Controlled By: ReplicaSet/mssql-depl-5cd6d7d486
Containers:
mssql:
Container ID:
Image: mcr.microsoft.com/mssql/server:2017-latest
Image ID:
Port: 1433/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MSSQL_PID: Express
ACCEPT_EULA: Y
SA_PASSWORD: <set to the key 'SA_PASSWORD' in secret 'mssql'> Optional: false
Mounts:
/var/opt/mssql/data from mssqldb (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xqzks (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mssqldb:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mssql-claim
ReadOnly: false
kube-api-access-xqzks:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m42s default-scheduler Successfully assigned default/mssql-depl-5cd6d7d486-nrrkn to docker-desktop
Warning Failed 102s kubelet Failed to pull image "mcr.microsoft.com/mssql/server:2017-latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 102s kubelet Error: ErrImagePull
Normal BackOff 102s kubelet Back-off pulling image "mcr.microsoft.com/mssql/server:2017-latest"
Warning Failed 102s kubelet Error: ImagePullBackOff
Normal Pulling 87s (x2 over 3m41s) kubelet Pulling image "mcr.microsoft.com/mssql/server:2017-latest"
In the events it shows
"rpc error: code = Unknown desc = context deadline exceeded"
But that doesn't tell me much, and the resources on troubleshooting this error don't cover this case.
I'm using kubernetes on docker locally.
I've researched that this issue can happen when pulling the image from a private registry, but this is a public one, right here. I copy-pasted the image path to be sure, and I tried a different MS SQL version, but to no avail.
Can someone be so kind and show me the right direction I should go / what should I try to get this to work? It worked just fine on the video :(
I fixed it by manually pulling the image via docker pull mcr.microsoft.com/mssql/server:2017-latest and then deleting and re-applying the deployment.
In my case, I needed to pull the image "to minikube" using minikube ssh docker pull <the_image>
Then I can apply my deployment without errors.
Source: https://github.com/kubernetes/minikube/issues/14806

Istio istio-ingressgateway throwing "no cluster match for URL '/'"

I have Istio installed on docker-desktop. In general it works fine. I'm attempting to setup an http-based match on a very simple virtual service, but I'm only able to get 404s. Here are the technical details.
My endpoint image is HashiCorp's http-echo, which uses the net/http library to create a trivial HTTP server that returns a message you supply. It works just fine and couldn't be more trivial.
Here is my pod and service configuration:
kind: Pod
apiVersion: v1
metadata:
  name: a
  labels:
    app: a
    version: v1
spec:
  containers:
  - name: a
    image: hashicorp/http-echo
    args:
    - "-text='this is service a: v1'"
    - "-listen=:6789"
---
kind: Service
apiVersion: v1
metadata:
  name: a-service
spec:
  selector:
    app: a
    version: v1
  ports:
  # Default port used by the image
  - port: 6789
    targetPort: 6789
    name: http-echo
And here is an example of the service working by my curling it from another pod in the same namespace:
/ # curl 10.1.0.29:6789
'this is service a: v1'
And here's the pod running in the docker-desktop cluster:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
a 2/2 Running 0 45h 10.1.0.29 docker-desktop <none> <none>
And here is the service registering and administrating the pod:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
a-service ClusterIP 10.101.113.9 <none> 6789/TCP 45h app=a,version=v1
Here is my istio-ingressgateway specification via Helm (it seems to work fine). I list it because this is the only part of the installation I've changed, and the change itself is utterly trivial: just a single new port block, which seems to work fine as listening is indeed occurring:
gateways:
  istio-ingressgateway:
    name: istio-ingressgateway
    labels:
      app: istio-ingressgateway
      istio: ingressgateway
    ports:
    - port: 15021
      targetPort: 15021
      name: status-port
      protocol: TCP
    - port: 80
      targetPort: 8080
      name: http2
      protocol: TCP
    - port: 443
      targetPort: 8443
      name: https
      protocol: TCP
    - port: 6789
      targetPort: 6789
      name: http-echo
      protocol: TCP
And here is the kubectl get svc on the istio-ingressgateway just to show that indeed I have an external-ip and things look normal:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
istio-ingressgateway LoadBalancer 10.109.63.15 localhost 15021:30095/TCP,80:32454/TCP,443:31644/TCP,6789:30209/TCP 2d16h app=istio-ingressgateway,istio=ingressgateway
istiod ClusterIP 10.96.155.154 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d16h app=istiod,istio=pilot
Here's my virtualservice:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  - 'a-service.default.svc.cluster.local'
  gateways:
  - gateway
  http:
  - match:
    - port: 6789
    route:
    - destination:
        host: 'a-service.default.svc.cluster.local'
        port:
          number: 6789
Here's my gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 6789
      name: http-echo
      protocol: http
    hosts:
    - 'a-service.default.svc.cluster.local'
And then finally here's a debug log from the istio-ingressgateway showing that, despite all these seemingly correct pod, service, gateway, virtualservice and ingressgateway configs, the ingressgateway only returns 404s:
2021-09-27T15:34:41.001773Z debug envoy connection [C367] closing data_to_write=143 type=2
2021-09-27T15:34:41.001779Z debug envoy connection [C367] setting delayed close timer with timeout 1000 ms
2021-09-27T15:34:41.001786Z debug envoy pool [C7] response complete
2021-09-27T15:34:41.001791Z debug envoy pool [C7] destroying stream: 0 remaining
2021-09-27T15:34:41.001925Z debug envoy connection [C367] write flush complete
2021-09-27T15:34:41.002215Z debug envoy connection [C367] remote early close
2021-09-27T15:34:41.002279Z debug envoy connection [C367] closing socket: 0
2021-09-27T15:34:41.002348Z debug envoy conn_handler [C367] adding to cleanup list
2021-09-27T15:34:41.179213Z debug envoy conn_handler [C368] new connection from 192.168.65.3:62904
2021-09-27T15:34:41.179594Z debug envoy http [C368] new stream
2021-09-27T15:34:41.179690Z debug envoy http [C368][S14851390862777765658] request headers complete (end_stream=true):
':authority', '0:6789'
':path', '/'
':method', 'GET'
'user-agent', 'curl/7.64.1'
'accept', '*/*'
'version', 'TESTING'
2021-09-27T15:34:41.179708Z debug envoy http [C368][S14851390862777765658] request end stream
2021-09-27T15:34:41.179828Z debug envoy router [C368][S14851390862777765658] no cluster match for URL '/'
2021-09-27T15:34:41.179903Z debug envoy http [C368][S14851390862777765658] Sending local reply with details route_not_found
2021-09-27T15:34:41.179949Z debug envoy http [C368][S14851390862777765658] encoding headers via codec (end_stream=true):
':status', '404'
'date', 'Mon, 27 Sep 2021 15:34:41 GMT'
'server', 'istio-envoy'
Here's istioctl proxy-status:
NAME CDS LDS EDS RDS ISTIOD VERSION
a.default SYNCED SYNCED SYNCED SYNCED istiod-b9c8c9487-clkkt 1.11.3
istio-ingressgateway-5797689568-x47ck.istio-system SYNCED SYNCED SYNCED SYNCED istiod-b9c8c9487-clkkt 1.11.3
And here's istioctl pc cluster $ingressgateway:
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
a-service.default.svc.cluster.local 6789 - outbound EDS
agent - - - STATIC
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 6789 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
And here's istioctl pc listeners on the same ingress:
ADDRESS PORT MATCH DESTINATION
0.0.0.0 6789 ALL Route: http.6789
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
And finally here's istioctl pc routes:
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
http.6789 a-service.default.svc.cluster.local /* a-service.default
* /stats/prometheus*
* /healthz/ready*
I've tried numerous different configurations from changing selectors, to making sure port names match to trying different ports. If I change my virtualservice from http to tcp the port match works great. But because my ultimate goal with this is to do more advanced header-based matching I need to be matching on http. Any insight would be greatly appreciated!
It turned out the problem was that I had specified my service in the hosts directive in both my gateway and my virtualservice. Specifying a service as a hosts entry is almost certainly never correct, though one can work around this by adding a host header to curl, i.e. curl ... -H 'Host: kubernetes.docker.internal' .... The correct solution is to simply add proper host entries, i.e. - mysite.mycompany.com etc. Hosts in this case are like vhosts in Apache; they're an FQDN that resolves to something the mesh and cluster can use to send requests to. The host field in the virtualservice destination, however, is the service, which is a bit convoluted and is what threw me.
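As a sketch of what that ends up looking like (the external hostname here is hypothetical; the destination host still points at the in-cluster service):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 6789
      name: http-echo
      protocol: HTTP
    hosts:
    - 'echo.mycompany.com'   # hypothetical external hostname, not the service
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  - 'echo.mycompany.com'     # same hypothetical hostname as the gateway
  gateways:
  - gateway
  http:
  - match:
    - port: 6789
    route:
    - destination:
        host: 'a-service.default.svc.cluster.local'   # the in-cluster service
        port:
          number: 6789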

Kubernetes Host and Service Ingress Mapping using TCP

While working with Kubernetes for some months now, I found a nice way to use one single existing domain name to expose the cluster IP through a sub-domain, and also most of the microservices through different sub-sub-domains, using the ingress controller.
My ingress example code:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: cluster-ingress-basic
  namespace: ingress-basic
  selfLink: >-
    /apis/networking.k8s.io/v1beta1/namespaces/ingress-basic/ingresses/cluster-ingress-basic
  uid: 5d14e959-db5f-413f-8263-858bacc62fa6
  resourceVersion: '42220492'
  generation: 29
  creationTimestamp: '2021-06-23T12:00:16Z'
  annotations:
    kubernetes.io/ingress.class: nginx
  managedFields:
  - manager: Mozilla
    operation: Update
    apiVersion: networking.k8s.io/v1beta1
    time: '2021-06-23T12:00:16Z'
    fieldsType: FieldsV1
    fieldsV1:
      'f:metadata':
        'f:annotations':
          .: {}
          'f:kubernetes.io/ingress.class': {}
      'f:spec':
        'f:rules': {}
  - manager: nginx-ingress-controller
    operation: Update
    apiVersion: networking.k8s.io/v1beta1
    time: '2021-06-23T12:00:45Z'
    fieldsType: FieldsV1
    fieldsV1:
      'f:status':
        'f:loadBalancer':
          'f:ingress': {}
spec:
  rules:
  - host: microname1.subdomain.domain.com
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          serviceName: kylin-job-svc
          servicePort: 7070
  - host: microname2.subdomain.domain.com
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          serviceName: superset
          servicePort: 80
  - {}
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xx.xx
With this configuration:
microname1.subdomain.domain.com points to Apache Kylin
microname2.subdomain.domain.com points to Apache Superset
This way all microservices can be exposed using the same cluster load balancer (IP) but different sub-sub-domains.
I tried to do the same for the SQL Server, but it is not working. I am not sure why, but I have the feeling that the reason is that SQL Server communicates using TCP and not HTTP.
- host: microname3.subdomain.domain.com
  http:
    paths:
    - pathType: ImplementationSpecific
      backend:
        serviceName: mssql-linux
        servicePort: 1433
Any ideas on how I can do the same for TCP services?
Your understanding is correct: by default the NGINX Ingress Controller only supports HTTP and HTTPS traffic configuration (Layer 7), so that is probably why your SQL Server is not working.
Your SQL service operates over plain TCP connections, so it does not take into consideration the custom domains you are trying to set up; they resolve to the same IP address anyway.
The solution for your issue is not to use custom sub-domain(s) for this service, but to expose it as a TCP service through the NGINX Ingress Controller. For example, you can make this SQL service available on the ingress IP on port 1433:
The ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing ConfigMap where the key is the external port to use and the value indicates the service to expose, using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
To set it up you can follow the steps provided in the official NGINX Ingress documentation, but there are also some more detailed instructions on StackOverflow, for example this one.
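A minimal sketch of such a tcp-services ConfigMap, assuming the SQL Server service is named mssql-linux in the default namespace (adjust the namespace, names and ports to your setup):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> <namespace/service name>:<service port>
  "1433": "default/mssql-linux:1433"
The controller must be started with --tcp-services-configmap=ingress-nginx/tcp-services, and the ingress controller's LoadBalancer Service also needs a matching port 1433 entry so the traffic is actually forwarded.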

Can not connect to SQL Server database hosted on localhost from Kubernetes, how can I debug this?

I am trying to deploy an asp.net core 2.2 application in Kubernetes. This application is a simple web page that need an access to an SQL Server database to display some information. This database is hosted on my local development computer (localhost) and the web application is deployed in a minikube cluster to simulate the production environment where my web application could be deployed in a cloud and access a remote database.
I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.
I assume that my connection string is correct, since my web application can connect to the SQL Server database when I deploy it on a local IIS server, in a docker container (docker run) or as a docker service (docker create service), but not when it is deployed in a Kubernetes cluster. I understand that the cluster is on a different network, so I tried to create a service without a selector as described in this question, but no luck... I even tried to change the connection string IP address to match the one of the created service, but that failed too.
My firewall is setup to accept inbound connection to 1433 port.
My SQL Server database is configured to allow remote access.
Here is the connection string I use:
"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
And here is the file I use to deploy my web application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: <private_repo_url>/webapp:db
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 1433
      imagePullSecrets:
      - name: gitlab-auth
      volumes:
      - name: secrets
        secret:
          secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - name: port-80
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: port-443
    port: 443
    targetPort: 443
    nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
  - name: port-1433
    port: 1433
    targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sql-server
  labels:
    app: webapp
subsets:
- addresses:
  - ip: 172.24.144.1 # IP of my local computer where SQL Server is running
  ports:
  - port: 1433
So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:
> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I am new with Kubernetes and I am not very comfortable with networking so any help is welcome.
The best help would be to give me some advices/tools to debug this since I don't even know where or when the connection attempt is blocked...
Thank you!
What you consider the IP address of your host is a private IP on an internal network. It is possible that this IP address is the one that your machine uses on the "real" network you are using, but the Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not accessible.
subsets:
- addresses:
  - ip: 172.24.144.1 # IP of my local computer where SQL Server is running
  ports:
  - port: 1433
You can connect via the DNS entry host.docker.internal
Read more here and here for windows
I am not certain if that works in minikube - there used to be a different DNS name for the host between the Linux and Windows implementations.
If you want to use the IP (bear in mind it could change eventually), you can probably track it down and ensure it is the one "visible" from within the virtual subnet.
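If host.docker.internal does resolve from inside your cluster, one way to wire it in is a sketch like this, replacing the selector-less Service and Endpoints pair with an ExternalName Service (the name sql-server is kept so the connection string can simply target Server=sql-server,1433 or a similar value, which is an assumption about your setup):
apiVersion: v1
kind: Service
metadata:
  name: sql-server
spec:
  # Returns a CNAME to the Docker host instead of fixed Endpoints,
  # so the app no longer depends on the hard-coded 172.24.144.1
  type: ExternalName
  externalName: host.docker.internal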
PS: I am using the Kubernetes that ships with Docker now; it seems easier to work with.
