Kubernetes Host and Service Ingress Mapping using TCP

Having worked with Kubernetes for some months now, I found a nice way to use one single existing domain name: expose the cluster IP through a sub-domain, and also expose most of the microservices through different sub-sub-domains, using the ingress controller.
My ingress example code:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: cluster-ingress-basic
  namespace: ingress-basic
  selfLink: >-
    /apis/networking.k8s.io/v1beta1/namespaces/ingress-basic/ingresses/cluster-ingress-basic
  uid: 5d14e959-db5f-413f-8263-858bacc62fa6
  resourceVersion: '42220492'
  generation: 29
  creationTimestamp: '2021-06-23T12:00:16Z'
  annotations:
    kubernetes.io/ingress.class: nginx
  managedFields:
    - manager: Mozilla
      operation: Update
      apiVersion: networking.k8s.io/v1beta1
      time: '2021-06-23T12:00:16Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubernetes.io/ingress.class': {}
        'f:spec':
          'f:rules': {}
    - manager: nginx-ingress-controller
      operation: Update
      apiVersion: networking.k8s.io/v1beta1
      time: '2021-06-23T12:00:45Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:loadBalancer':
            'f:ingress': {}
spec:
  rules:
    - host: microname1.subdomain.domain.com
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              serviceName: kylin-job-svc
              servicePort: 7070
    - host: microname2.subdomain.domain.com
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              serviceName: superset
              servicePort: 80
    - {}
status:
  loadBalancer:
    ingress:
      - ip: xx.xx.xx.xx
With this configuration:
microname1.subdomain.domain.com points to Apache Kylin
microname2.subdomain.domain.com points to Apache Superset
This way, all microservices can be exposed through the same cluster load balancer (IP), each under its own sub-sub-domain.
I tried to do the same for SQL Server, but it is not working. I am not sure why, but I suspect the reason is that SQL Server communicates over raw TCP rather than HTTP.
- host: microname3.subdomain.domain.com
  http:
    paths:
      - pathType: ImplementationSpecific
        backend:
          serviceName: mssql-linux
          servicePort: 1433
Any ideas on how I can do the same for TCP services?

Your understanding is correct: by default the NGINX Ingress Controller only routes HTTP and HTTPS traffic (Layer 7), which is most likely why your SQL Server is not working.
Your SQL service operates over plain TCP connections, so the custom sub-domains you are trying to set up are irrelevant to it; host-based routing only exists at the HTTP layer, and the sub-domains all resolve to the same IP address anyway.
The solution is not to use a custom sub-domain for this service, but to expose it as a TCP service through the NGINX Ingress Controller. For example, you can make this SQL service available on the ingress IP on port 1433:
The Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing ConfigMap in which each key is the external port to use and each value indicates the service to expose, using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
To set it up you can follow the steps in the official NGINX Ingress documentation; there are also more detailed walkthroughs on Stack Overflow, for example this one.
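A minimal sketch of such a ConfigMap, assuming the SQL Server Service is named mssql-linux in the default namespace and the ingress controller runs in ingress-basic (adjust the names and namespaces to your setup):
# ConfigMap referenced by the controller's --tcp-services-configmap flag.
# Key = port opened on the ingress controller, value = <namespace>/<service>:<service port>.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-basic
data:
  "1433": "default/mssql-linux:1433"
In addition, the controller Deployment has to be started with --tcp-services-configmap=ingress-basic/tcp-services, and port 1433 has to be added to the controller's LoadBalancer Service so the traffic actually reaches it. Clients then connect to the ingress IP on port 1433 directly, without any host-based routing.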

Related

Istio istio-ingressgateway throwing "no cluster match for URL '/'"

I have Istio installed on docker-desktop. In general it works fine. I'm attempting to set up an http-based match on a very simple virtual service, but I'm only able to get 404s. Here are the technical details.
My endpoint image is hashi http-echo which uses the net/http library to create a trivial http server that returns a message you supply. It works just fine and couldn't be more trivial.
Here is my pod and service configuration:
kind: Pod
apiVersion: v1
metadata:
  name: a
  labels:
    app: a
    version: v1
spec:
  containers:
    - name: a
      image: hashicorp/http-echo
      args:
        - "-text='this is service a: v1'"
        - "-listen=:6789"
---
kind: Service
apiVersion: v1
metadata:
  name: a-service
spec:
  selector:
    app: a
    version: v1
  ports:
    # Default port used by the image
    - port: 6789
      targetPort: 6789
      name: http-echo
And here is an example of the service working when I curl it from another pod in the same namespace:
/ # curl 10.1.0.29:6789
'this is service a: v1'
And here's the pod running in the docker-desktop cluster:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
a 2/2 Running 0 45h 10.1.0.29 docker-desktop <none> <none>
And here is the service that selects and fronts the pod:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
a-service ClusterIP 10.101.113.9 <none> 6789/TCP 45h app=a,version=v1
Here is my istio-ingressgateway specification via Helm (it seems to work fine). I list it because it is the only part of the installation I have changed, and the change itself is trivial: a single new port block, which does seem to work since the gateway is indeed listening on that port:
gateways:
  istio-ingressgateway:
    name: istio-ingressgateway
    labels:
      app: istio-ingressgateway
      istio: ingressgateway
    ports:
      - port: 15021
        targetPort: 15021
        name: status-port
        protocol: TCP
      - port: 80
        targetPort: 8080
        name: http2
        protocol: TCP
      - port: 443
        targetPort: 8443
        name: https
        protocol: TCP
      - port: 6789
        targetPort: 6789
        name: http-echo
        protocol: TCP
And here is the kubectl get svc on the istio-ingressgateway just to show that indeed I have an external-ip and things look normal:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
istio-ingressgateway LoadBalancer 10.109.63.15 localhost 15021:30095/TCP,80:32454/TCP,443:31644/TCP,6789:30209/TCP 2d16h app=istio-ingressgateway,istio=ingressgateway
istiod ClusterIP 10.96.155.154 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d16h app=istiod,istio=pilot
Here's my virtualservice:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
    - 'a-service.default.svc.cluster.local'
  gateways:
    - gateway
  http:
    - match:
        - port: 6789
      route:
        - destination:
            host: 'a-service.default.svc.cluster.local'
            port:
              number: 6789
Here's my gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
    - port:
        number: 6789
        name: http-echo
        protocol: http
      hosts:
        - 'a-service.default.svc.cluster.local'
And then finally here's a debug log from the istio-ingressgateway showing that, despite all these seemingly correct pod, service, gateway, virtualservice and ingressgateway configs, the ingressgateway is only returning 404s:
2021-09-27T15:34:41.001773Z debug envoy connection [C367] closing data_to_write=143 type=2
2021-09-27T15:34:41.001779Z debug envoy connection [C367] setting delayed close timer with timeout 1000 ms
2021-09-27T15:34:41.001786Z debug envoy pool [C7] response complete
2021-09-27T15:34:41.001791Z debug envoy pool [C7] destroying stream: 0 remaining
2021-09-27T15:34:41.001925Z debug envoy connection [C367] write flush complete
2021-09-27T15:34:41.002215Z debug envoy connection [C367] remote early close
2021-09-27T15:34:41.002279Z debug envoy connection [C367] closing socket: 0
2021-09-27T15:34:41.002348Z debug envoy conn_handler [C367] adding to cleanup list
2021-09-27T15:34:41.179213Z debug envoy conn_handler [C368] new connection from 192.168.65.3:62904
2021-09-27T15:34:41.179594Z debug envoy http [C368] new stream
2021-09-27T15:34:41.179690Z debug envoy http [C368][S14851390862777765658] request headers complete (end_stream=true):
':authority', '0:6789'
':path', '/'
':method', 'GET'
'user-agent', 'curl/7.64.1'
'accept', '*/*'
'version', 'TESTING'
2021-09-27T15:34:41.179708Z debug envoy http [C368][S14851390862777765658] request end stream
2021-09-27T15:34:41.179828Z debug envoy router [C368][S14851390862777765658] no cluster match for URL '/'
2021-09-27T15:34:41.179903Z debug envoy http [C368][S14851390862777765658] Sending local reply with details route_not_found
2021-09-27T15:34:41.179949Z debug envoy http [C368][S14851390862777765658] encoding headers via codec (end_stream=true):
':status', '404'
'date', 'Mon, 27 Sep 2021 15:34:41 GMT'
'server', 'istio-envoy'
Here's istioctl proxy-status:
istioctl proxy-status ⎈ docker-desktop/istio-system
NAME CDS LDS EDS RDS ISTIOD VERSION
a.default SYNCED SYNCED SYNCED SYNCED istiod-b9c8c9487-clkkt 1.11.3
istio-ingressgateway-5797689568-x47ck.istio-system SYNCED SYNCED SYNCED SYNCED istiod-b9c8c9487-clkkt 1.11.3
And here's istioctl pc cluster $ingressgateway:
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
a-service.default.svc.cluster.local 6789 - outbound EDS
agent - - - STATIC
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 6789 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
And here's istioctl pc listeners on the same ingress:
ADDRESS PORT MATCH DESTINATION
0.0.0.0 6789 ALL Route: http.6789
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
And finally here's istioctl routes:
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
http.6789 a-service.default.svc.cluster.local /* a-service.default
* /stats/prometheus*
* /healthz/ready*
I've tried numerous different configurations from changing selectors, to making sure port names match to trying different ports. If I change my virtualservice from http to tcp the port match works great. But because my ultimate goal with this is to do more advanced header-based matching I need to be matching on http. Any insight would be greatly appreciated!
It turned out the problem was that I had specified my Kubernetes service in the hosts directive of both my gateway and my virtualservice. Specifying a service as a hosts entry is almost certainly never correct, though one can work around it by adding a Host header to curl, i.e. curl ... -H 'Host: kubernetes.docker.internal' .... The correct solution is to add real host entries, e.g. - mysite.mycompany.com. Hosts in this case are like vhosts in Apache: an FQDN that resolves to something the mesh and cluster can use to route requests to. The host field inside a virtualservice destination, however, is the service, which is a bit convoluted and is what threw me.
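A sketch of what the corrected resources could look like, using a hypothetical external host name (mysite.mycompany.com stands in for whatever FQDN actually points at the ingress gateway):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
    - port:
        number: 6789
        name: http-echo
        protocol: HTTP
      hosts:
        - 'mysite.mycompany.com'   # an external host name, not the Kubernetes service name
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
    - 'mysite.mycompany.com'       # must match a host exposed by the Gateway
  gateways:
    - gateway
  http:
    - match:
        - port: 6789
      route:
        - destination:
            host: a-service.default.svc.cluster.local   # here "host" really is the service
            port:
              number: 6789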

Configuring k8s nginx ingress to route React SPA and backend apis

My backend services are working great with ingress-nginx.
I'm trying, without success, to add a frontend SPA React app to my ingress.
I did manage to get it to work, but I can't get both my backend AND my frontend to work at the same time.
My ingress yml is
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    #nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
    - host: accounting.easydeal.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-srv
                port:
                  number: 3000
    - host: api.easydeal.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-hello-world-svc
                port:
                  number: 8088
          - path: /accounting(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: accounting-srv
                port:
                  number: 80
          - path: /company(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: dealers-srv
                port:
                  number: 80
With the YAML above I'm able to reach my backend, e.g. api.easydeal.dev/helloworld or api.easydeal.dev/company/*, and it works.
However, my React app (accounting.easydeal.dev) ends up as a white page with this error in the console log:
Uncaught SyntaxError: Unexpected token '<'
The only way I'm able to make my React app work is to change rewrite-target: /$2 to /. However, doing so breaks the routing of my other APIs.
I did set the homepage of the React app to "." but I still get the error, and I also tried setting the path to /?(*) for my frontend.
Here is my Dockerfile:
# pull the base image
FROM node:alpine
# set the working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["npm", "start"]
As pointed out in the comments by the original poster:
Doing 2 ingress services solved this issue.
The solution to this issue was to create 2 separate Ingress resources.
The underlying issue was that the workload required 2 different nginx.ingress.kubernetes.io/rewrite-target: parameters.
The annotation above can be set per Ingress resource, not per path.
You can create 2 Ingress resources that will be separate entities (with different annotations) and they will work "together".
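A sketch of how that split could look here, keeping the regex rewrite only on the API Ingress (hosts and service names are taken from the question, so adjust them to your setup):
# Ingress 1: frontend SPA, no path rewriting
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: accounting.easydeal.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-srv
                port:
                  number: 3000
---
# Ingress 2: backend APIs, keeping the regex rewrite-target
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: api.easydeal.dev
      http:
        paths:
          - path: /accounting(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: accounting-srv
                port:
                  number: 80
          - path: /company(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: dealers-srv
                port:
                  number: 80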
More reference can be found in the links below:
Stackoverflow.com: Answer: Apply nginx-ingress annotations at path level
Kubernetes.github.io: Ingress nginx: User guide: Basic usage
Being specific to nginx-ingress:
By default, when you provision/deploy the NGINX Ingress controller you are telling your Kubernetes cluster to create a Service of type LoadBalancer. This Service requests an IP address from the cloud provider (GKE, EKS, AKS) and routes the traffic from this IP to your Ingress controller, where the requests are evaluated and sent on according to your Ingress resource definitions.
A side note!
"By default" is used deliberately, as there are other methods to expose your Ingress controller to external traffic. You can read more about them by following the link below:
Kubernetes.github.io: Ingress nginx: Deploy: Baremetal
Your Ingress controller will have a single IP address to expose your workload:
$ kubectl get service -n ingress-nginx ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.32.6.63 AA.BB.CC.DD 80:30828/TCP,443:30664/TCP 19m
Ingress resources that use kubernetes.io/ingress.class: "nginx" will use that address.
Ingress resources created in this way will look like the following when issuing:
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
goodbye-ingress goodbye.domain.name AA.BB.CC.DD 80 19m
hello-ingress hello.domain.name AA.BB.CC.DD 80 19m
A second side note!
If you are using a managed Kubernetes cluster, please refer to its documentation for more details on using Ingress resources, as there could be major differences.

Frontend can't resolve backend name within k8s cluster

I'm trying to deploy a simple Angular/Express app on GKE and the http requests from the frontend can't find the name of the express app.
Here's an example of one GET request. I changed the request from 'localhost' to 'express' because that is the name of the ClusterIP service set up in the cluster. Also, I'm able to curl this URL from the Angular pod and get JSON returned as expected.
getPups(){
  this.http.get<{message:string, pups: any}>("http://express:3000/pups")
    .pipe(map((pupData)=>{
      return pupData.pups.map(pup=>{
        return{
          name: pup.name,
          breed: pup.breed,
          quote: pup.quote,
          id: pup._id,
          imagePath: pup.imagePath,
          rates: pup.rates
        }
      });
    }))
    .subscribe((transformedPups)=>{
      this.pups = transformedPups
      this.pupsUpdated.next([...this.pups])
    });
}
Here's the angular deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: puprate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: puprate
          image: brandonjones085/puprate
          ports:
            - containerPort: 4200
---
apiVersion: v1
kind: Service
metadata:
  name: puprate-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: web
  ports:
    - port: 4200
      targetPort: 4200
And the express deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: brandonjones085/puprate-express
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: express
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 3000
      targetPort: 3000
Your frontend app is making the call from outside your cluster (from the browser), and therefore needs a way to reach it. Because you are serving HTTP, the best way to set that up is with an Ingress.
First, make sure you have an ingress controller set up in your cluster ( e.g. nginx ingress controller) https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke
Then, set up your Express app with a Service (from your question, I see you already have that set up on port 3000, which is good, though in the Service I recommend changing the port to 80 - not critical).
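A minimal sketch of that port-80 variant, assuming the names from the question (the Service listens on 80 inside the cluster and forwards to the container's 3000):
apiVersion: v1
kind: Service
metadata:
  name: express
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 80          # port other pods / the Ingress talk to
      targetPort: 3000  # port the Express container actually listens on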
With that, set up your ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: express
spec:
  rules:
    - host: <a domain you own>
      http:
        paths:
          # NOTICE!! have your express app listen for that path, or set up nginx rewrite rules
          # (I recommend the former, it's much easier to understand)
          - path: /api
            backend:
              serviceName: express
              servicePort: 3000 # or 80 if you decide to change that
Do the same for your web deployment, so you can serve your frontend directly:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: <a domain you own>
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 4200 # or 80 if you decide to change that
Notice that both ingresses are using the same host but different paths - that's important for what's coming next
in your angular app, change that:
this.http.get<{message:string, pups: any}>("http://express:3000/pups")
to that:
this.http.get<{message:string, pups: any}>("/api/pups")
Browsers will parse that to <domain in your address bar>/api/pups
Since you are using GKE, setting up the ingress controller will create a load balancer in Google Cloud - make sure your domain has a DNS entry pointing at it.
I'm assuming you already own a domain, but if you don't yet, just add the IP you got to your personal hosts file until you do, like so:
<ip of load balancer> <domain you want>
# for example
45.210.10.15 awesome-domain.com
So now, use the browser to go to the domain you own - you should get the frontend served to you - and since you are calling your API with an address that starts with /, the API call will go to the same host and be redirected by your ingress to your Express app this time, instead of to the frontend server.
Angular is running in your browser, not in the pod inside the cluster.
The requests therefore originate externally, and the URL must point to the Ingress or LoadBalancer of your backend service.

Kubernetes(minikube) + React Frontend + .netcore api + Cluster IP service + ingress + net::ERR_NAME_NOT_RESOLVED

Not able to resolve an API hosted as a ClusterIP service on Minikube when calling from the React JS frontend.
The basic architecture of my application is as follows
React --> .NET core API
Both these components are hosted as ClusterIP services. I have created an ingress service with http paths pointing to React component and the .NET core API.
However, when I try calling it from the browser, the React application renders, but the call to the API fails with
net::ERR_NAME_NOT_RESOLVED
Below are the .yml files for
1. React application
apiVersion: v1
kind: Service
metadata:
  name: frontend-clusterip
spec:
  type: ClusterIP
  ports:
    - port: 59000
      targetPort: 3000
  selector:
    app: frontend
2. .NET core API
apiVersion: v1
kind: Service
metadata:
  name: backend-svc-nodeport
spec:
  type: ClusterIP
  selector:
    app: backend-svc
  ports:
    - port: 5901
      targetPort: 59001
3. ingress service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: frontend-clusterip
              servicePort: 59000
          - path: /api/?(.*)
            backend:
              serviceName: backend-svc-nodeport
              servicePort: 5901
4. frontend deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: upendra409/tasks_tasks.frontend
          ports:
            - containerPort: 3000
          env:
            - name: "REACT_APP_ENVIRONMENT"
              value: "Kubernetes"
            - name: "REACT_APP_BACKEND"
              value: "http://backend-svc-nodeport"
            - name: "REACT_APP_BACKENDPORT"
              value: "5901"
This is the error I get in the browser:
xhr.js:166 GET
http://backend-svc-nodeport:5901/api/tasks net::ERR_NAME_NOT_RESOLVED
I installed curl in the frontend container and tried to connect to the backend API from inside the frontend pod using the above URL, but the command doesn't work:
C:\test\tasks [develop ≡ +1 ~6 -0 !]> kubectl exec -it frontend-655776bc6d-nlj7z --curl http://backend-svc-nodeport:5901/api/tasks
Error: unknown flag: --curl
You are getting this error from your local machine because a ClusterIP Service is the wrong type for access from outside the cluster. As mentioned in the Kubernetes documentation, ClusterIP is only reachable from within the cluster.
Publishing Services (ServiceTypes)
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Note: You need CoreDNS version 1.7 or higher to use the ExternalName type.
I suggest using NodePort or LoadBalancer service type instead.
Refer to above documentation links for examples.
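A sketch of what the backend Service could look like as a NodePort (the name, selector and ports are taken from the question; the nodePort value itself is an arbitrary choice in the allowed 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: backend-svc-nodeport
spec:
  type: NodePort
  selector:
    app: backend-svc
  ports:
    - port: 5901        # port inside the cluster
      targetPort: 59001 # port the API container listens on
      nodePort: 30901   # static port exposed on every node
On minikube, minikube service backend-svc-nodeport --url would then print a URL that the browser (and therefore the React app) can actually reach.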

Can not connect to SQL Server database hosted on localhost from Kubernetes, how can I debug this?

I am trying to deploy an asp.net core 2.2 application in Kubernetes. This application is a simple web page that need an access to an SQL Server database to display some information. This database is hosted on my local development computer (localhost) and the web application is deployed in a minikube cluster to simulate the production environment where my web application could be deployed in a cloud and access a remote database.
I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.
I assume that my connection string is correct since my web application can connect to the SQL Server database when I deploy it on an IIS local server, in a docker container (docker run) or a docker service (docker create service) but not when it is deployed in a Kubernetes cluster. I understand that the cluster is in a different network so I tried to create a service without selector as described in this question, but no luck... I even tried to change the connection string IP address to match the one of the created service but it failed too.
My firewall is setup to accept inbound connection to 1433 port.
My SQL Server database is configured to allow remote access.
Here is the connection string I use:
"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
And here is the file I use to deploy my web application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: <private_repo_url>/webapp:db
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 443
            - containerPort: 1433
      imagePullSecrets:
        - name: gitlab-auth
      volumes:
        - name: secrets
          secret:
            secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - name: port-80
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: port-443
      port: 443
      targetPort: 443
      nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
    - name: port-1433
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sql-server
  labels:
    app: webapp
subsets:
  - addresses:
      - ip: 172.24.144.1 # <-- IP of my local computer where SQL Server is running
    ports:
      - port: 1433
So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:
> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I am new to Kubernetes and I am not very comfortable with networking, so any help is welcome.
The best help would be some advice and tools to debug this, since I don't even know where or when the connection attempt is blocked...
Thank you!
What you consider the IP address of your host is a private IP on an internal network. It is possible that this IP address is the one your machine uses on the "real" network you are connected to, but the Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not reachable from inside the cluster.
subsets:
  - addresses:
      - ip: 172.24.144.1 # <-- IP of my local computer where SQL Server is running
    ports:
      - port: 1433
You can connect via the DNS entry host.docker.internal
Read more here and here for Windows.
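As a sketch, the connection string from the question would then look something like this (assuming host.docker.internal resolves from inside the pod; the instance name, database and credentials are the question's own placeholders):
"Server=host.docker.internal\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"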
I am not certain whether that works in minikube - there used to be different DNS names for the host in the Linux/Windows implementations.
If you want to use the IP (bear in mind it may change eventually), you can probably track it down and make sure it is the one "visible" from within the virtual subnet.
PS: I am using the Kubernetes that ships with Docker now; it seems easier to work with.
