Using ExternalName with an Azure SQL DB connection string in Kubernetes - sql-server

Recently, I have been facing a lot of issues with environment variables; more details in this topic here.
I have 3 different Kubernetes clusters, each pointing at a different SQL database hosted on Azure SQL. I want to deploy from one repo into all 3 clusters.
Since I could not overwrite values using environment variables, I resorted to using a Kubernetes ExternalName service and a Service/Endpoints pair. However, this does not seem to work for Azure SQL DB. Below is the configuration that works for me (against a locally hosted database):
kind: "Service"
apiVersion: "v1"
metadata:
name: "mssql"
namespace: dev
spec:
ports:
-
name: "mssql"
protocol: "TCP"
port: 1433
targetPort: 1433
nodePort: 0
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: "mssql"
namespace: dev
subsets:
-
addresses:
-
ip: "192.168.24.10"
ports:
-
port: 1433
name: "mssql"
Here is my connection string:
{
  "ConnectionStrings": {
    "DefaultConnection": "Server= mssql; database=TransactionLogs;Persist Security Info=False;user id=sa;password=exTened;MultipleActiveResultSets=True;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "TransferPay": "input value",
  "TransferPayConn": "input value"
}
I'm able to reach my locally hosted database like this. However, Azure SQL is reached via a DNS name instead of an IP, so I resorted to using an ExternalName service; see the code below:
apiVersion: v1
kind: Service
metadata:
  name: mssqlnew
  namespace: app-prod
spec:
  externalName: dbinstance.database.windows.net
  ports:
    - port: 1433
      protocol: TCP
      targetPort: 1433
  sessionAffinity: None
  type: ExternalName
I ran nslookup from the dnsutils Pod and got this response:
root@dnsutils:/# nslookup mssql
Server: 10.0.0.10
Address: 10.0.0.10#53
mssql.a-prod.svc.cluster.local canonical name = dbinstance.database.windows.net.
dbinstance.database.windows.net canonical name = dbinstance.privatelink.database.windows.net.
dbinstance.privatelink.database.windows.net canonical name = dataslice6.eastus.database.windows.net.
dataslice6.eastus.database.windows.net canonical name = dataslice6eastus.trafficmanager.net.
dataslice6eastus.trafficmanager.net canonical name = cr3.eastus1-a.control.database.windows.net.
Name: cr3.eastus1-a.control.database.windows.net
Address: 40.79.153.12
However, when I spin up my application I get this error:
Cannot open server "mssql" requested by the login. The login failed.
Another Failed Attempt
I also tried using Secrets for this:
apiVersion: v1
kind: Secret
metadata:
  name: sql-db
  namespace: a-prod
data:
  server: **
  username: **
  password: **
  database: **
type: Opaque
deployment.yaml:
env:
  - name: Logging__LogLevel__Default
    value: Debug
  - name: Logging__LogLevel__Microsoft.AspNetCore
    value: Debug
  - name: username
    valueFrom:
      secretKeyRef:
        name: sql-db
        key: username
  - name: server
    valueFrom:
      secretKeyRef:
        name: sql-db
        key: server
  - name: password
    valueFrom:
      secretKeyRef:
        name: sql-db
        key: password
  - name: database
    valueFrom:
      secretKeyRef:
        name: sql-db
        key: database
Connection string (appsettings.json):
{
  "ConnectionStrings": {
    "DefaultConnection": "Server= $server; database=$database;Persist Security Info=False;user id=$username;password=$password;MultipleActiveResultSets=True;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "TransferPay": "input value",
  "TransferPayConn": "input value"
}
After doing all this, the connection to the Azure DB is still not working.
PS: I am not the one writing the application, but I can share information with the developer. It was built on .NET 6.
After reading this article https://thewindowsupdate.com/2020/03/09/how-to-use-different-domain-name-to-connect-to-azure-sql-server/ I found out that I need to pass the server name into the connection string as part of the user id (in the form username@servername):
Server= mssql; database=TransactionLogs;Persist Security Info=False;user id=sa@dbinstance;password=exTened;MultipleActiveResultSets=True;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;
This works. However, I still don't know how to replace this for the multiple databases.
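One way to handle the multiple databases might be to take the server and credentials out of appsettings.json entirely and inject the whole connection string per cluster. ASP.NET Core maps the environment variable ConnectionStrings__DefaultConnection onto the ConnectionStrings:DefaultConnection configuration key, so each cluster could carry its own Secret without any application change. This is only a rough sketch; the Secret name mssql-conn, the key connection-string and the placeholder values are made up for illustration:
apiVersion: v1
kind: Secret
metadata:
  name: mssql-conn      # hypothetical name; one Secret per cluster/namespace
  namespace: dev
type: Opaque
stringData:
  connection-string: "Server=dbinstance.database.windows.net;database=TransactionLogs;user id=sa@dbinstance;password=<password>;MultipleActiveResultSets=True;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"
And in the Deployment's container spec:
env:
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      secretKeyRef:
        name: mssql-conn
        key: connection-string
Each cluster then keeps its own Secret value while the Deployment manifest stays identical across the 3 clusters.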

Related

How to route mssql traffic through an Istio egress gateway

I am trying to run a .NET 6 demonstration application in an Istio service mesh (Istio 1.16.1 in an AKS cluster). This application uses a SQL Server 2019 instance located outside the cluster, and I would like to route all outgoing traffic, including mssql, through an egress gateway.
Please note this application also uses OpenID Connect and keytabs (Kerberos traffic); I have successfully managed to route those requests through the egress gateway, but not the mssql traffic.
I have created the service mesh with istioctl and the following configuration file:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
  components:
    pilot:
      k8s:
        nodeSelector:
          agentpool: svcmaster
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          nodeSelector:
            kubernetes.io/os: linux
    egressGateways:
      - name: istio-egressgateway
        enabled: true
        k8s:
          nodeSelector:
            kubernetes.io/os: linux
Here is the ServiceEntry for the database
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mssql-contoso-com
  namespace: linux
spec:
  hosts:
    - mssql.contoso.com
  addresses:
    - 10.1.0.5
  ports:
    - number: 1433
      name: mssql
      protocol: TLS
    - number: 443
      name: tls
      protocol: TLS
  location: MESH_EXTERNAL
  resolution: DNS
Here is the Gateway (it also includes the hosts for the other egress traffic):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: egress-gateway
  namespace: linux
spec:
  selector:
    istio: egressgateway
  servers:
    - port:
        number: 443
        name: tls
        protocol: TLS
      hosts:
        - "adfs.contoso.com"
        - "mssql.contoso.com"
      tls:
        mode: "PASSTHROUGH"
    - port:
        number: 80
        name: tcp
        protocol: TCP
      hosts:
        - "controller.contoso.com"
And finally, the VirtualService. I have not defined a DestinationRule because it appears to be unnecessary: the OIDC and Kerberos traffic is routed correctly without one, and I tried adding one out of desperation without it solving the issue.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "outgoing-mssql-traffic"
  namespace: linux
spec:
  hosts:
    - mssql.contoso.com
  gateways:
    - egress-gateway
    - mesh
  tls:
    - match:
        - gateways:
            - mesh
          port: 1433
          sniHosts:
            - mssql.contoso.com
      route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            port:
              number: 443
          weight: 100
    - match:
        - gateways:
            - egress-gateway
          port: 443
          sniHosts:
            - mssql.contoso.com
      route:
        - destination:
            host: mssql.contoso.com
            port:
              number: 1433
          weight: 100
Regarding the details of the application call to the SQL Server, I am using a regular SQLConnection with the following connection string:
Server=mssql.contoso.com;Initial Catalog=Demonstration;Integrated Security=true;TrustServerCertificate=true
As a result, I get the following error in the application log:
Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught)
---> System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer.
---> System.Net.Sockets.SocketException (104): Connection reset by peer
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
Somehow the TLS handshake fails. When consulting the logs of the sidecar container and of the egress gateway, I cannot see the traffic to the database. I have also monitored the traffic on the SQLServer machine with Wireshark and I cannot see TCP traffic on port 1433.
The application works fine when the virtual service is deleted so the issue is really related to the routing through the egress gateway.
Any help or insight would be appreciated.
So I found out the reason a bit by accident while trying different configurations and checking the Envoy logs. The Istio Egress gateway service exposes only two ports (at least by default): 80 and 443. It seems that each port can only serve one VirtualService and it is first come first served. I had declared the VirtualService for mssql last, so it was not served.
Furthermore, I had to switch to a TCP route. I failed to get it working with a TLS route.
Here is the ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mssql-contoso-com
  namespace: linux
spec:
  hosts:
    - mssql.contoso.com
  addresses:
    - 10.1.0.5
  ports:
    - number: 1433
      name: mssql
      protocol: TCP
    - number: 80
      name: tls
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS
And here is the VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "outgoing-mssql-traffic"
  namespace: linux
spec:
  hosts:
    - mssql.contoso.com
  gateways:
    - egress-gateway
    - mesh
  tcp:
    - match:
        - gateways:
            - mesh
          port: 1433
      route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            port:
              number: 80
          weight: 100
    - match:
        - gateways:
            - egress-gateway
          port: 80
      route:
        - destination:
            host: mssql.contoso.com
            port:
              number: 1433
          weight: 100
I guess that I will have to either deploy several egress gateways or find a way to add ports to the existing one.
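For what it's worth, the IstioOperator API does let you add ports to the egress gateway Service through the k8s.service overlay, which might avoid deploying a second gateway. A rough, untested sketch; the port name tcp-mssql is made up, and depending on the Istio version the default 80/443 ports may need to be restated alongside the new one:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    egressGateways:
      - name: istio-egressgateway
        enabled: true
        k8s:
          nodeSelector:
            kubernetes.io/os: linux
          service:
            ports:
              # dedicated port reserved for the mssql route
              - name: tcp-mssql
                port: 1433
                targetPort: 1433
                protocol: TCP
The Gateway server and the egress-gateway match in the VirtualService would then target that port instead of sharing port 80.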

Deploying SQL Server in Kubernetes: context deadline exceeded while pulling mssql image attempt

When I try to run SQL Server in Kubernetes with the mcr.microsoft.com/mssql/server image in a minikube cluster, after several seconds I get the following in the logs:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 38h (x77 over 47h) kubelet Pulling image "mcr.microsoft.com/mssql/server"
Normal BackOff 38h (x1658 over 47h) kubelet Back-off pulling image "mcr.microsoft.com/mssql/server"
Warning Failed 38h (x79 over 47h) kubelet Failed to pull image "mcr.microsoft.com/mssql/server": rpc error: code = Unknown desc = context deadline exceeded
Pulling and running the image in Docker Desktop works fine.
What I've already tried:
Specifying a tag like :2019-latest;
Specifying an imagePullPolicy like IfNotPresent or Never. It seems that even after pulling the image directly via PowerShell, Kubernetes doesn't see it locally (but Docker does).
I suspect the reason is that the image is too large and Kubernetes has too-short timeout settings by default. But I'm a newbie with Kubernetes and haven't checked this yet. At least, I don't see anything about it in the SQL Server examples.
Here's the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - image: mcr.microsoft.com/mssql/server
          name: mssql
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          ports:
            - containerPort: 1433
              name: mssql
          securityContext:
            privileged: true
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mysql-pv-claim
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
  namespace: mynamespace
spec:
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  selector:
    app: mssql
  type: LoadBalancer
pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Could you, please, help me figure out what I'm doing wrong? Let me know if you need more details.
Thank you!

ImagePullBackOff pod status in Kubernetes when pulling public image (MS SQL Server Express)

I'm following Les Jackson's tutorial on microservices and got stuck at 05:30:00 while creating a deployment for a MS SQL server. I've written the deployment file just as shown in the YouTube video:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2017-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Express"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          volumeMounts:
            - mountPath: /var/opt/mssql/data
              name: mssqldb
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: mssql
  ports:
    - name: mssql
      protocol: TCP
      port: 1433 # this is default port for mssql
      targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433 # this is default port for mssql
      targetPort: 1433
The persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
But when I apply this deployment, the pod ends up with ImagePullBackOff status:
commands-depl-688f77b9c6-vln5v 1/1 Running 0 2d21h
mssql-depl-5cd6d7d486-m8nw6 0/1 ImagePullBackOff 0 4m54s
platforms-depl-6b6cf9b478-ktlhf 1/1 Running 0 2d21h
kubectl describe pod
Name: mssql-depl-5cd6d7d486-nrrkn
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Thu, 28 Jul 2022 12:09:34 +0200
Labels: app=mssql
pod-template-hash=5cd6d7d486
Annotations: <none>
Status: Pending
IP: 10.1.0.27
IPs:
IP: 10.1.0.27
Controlled By: ReplicaSet/mssql-depl-5cd6d7d486
Containers:
mssql:
Container ID:
Image: mcr.microsoft.com/mssql/server:2017-latest
Image ID:
Port: 1433/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MSSQL_PID: Express
ACCEPT_EULA: Y
SA_PASSWORD: <set to the key 'SA_PASSWORD' in secret 'mssql'> Optional: false
Mounts:
/var/opt/mssql/data from mssqldb (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xqzks (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mssqldb:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mssql-claim
ReadOnly: false
kube-api-access-xqzks:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m42s default-scheduler Successfully assigned default/mssql-depl-5cd6d7d486-nrrkn to docker-desktop
Warning Failed 102s kubelet Failed to pull image "mcr.microsoft.com/mssql/server:2017-latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 102s kubelet Error: ErrImagePull
Normal BackOff 102s kubelet Back-off pulling image "mcr.microsoft.com/mssql/server:2017-latest"
Warning Failed 102s kubelet Error: ImagePullBackOff
Normal Pulling 87s (x2 over 3m41s) kubelet Pulling image "mcr.microsoft.com/mssql/server:2017-latest"
In the events it shows
"rpc error: code = Unknown desc = context deadline exceeded"
But that doesn't tell me anything, and the resources on troubleshooting this error don't cover it.
I'm using the Kubernetes that comes with Docker Desktop locally.
I've researched that this issue can happen when pulling the image from a private registry, but this is a public one, right here. I copy-pasted the image path to be sure, and I tried a different MS SQL version, but to no avail.
Can someone be so kind as to point me in the right direction / tell me what I should try to get this to work? It worked just fine in the video :(
I fixed it by manually pulling the image via docker pull mcr.microsoft.com/mssql/server:2017-latest and then deleting and re-applying the deployment.
In my case, I needed to pull the image "to minikube" using minikube ssh docker pull <the_image>
Then I can apply my deployment without errors.
Source: https://github.com/kubernetes/minikube/issues/14806
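If the image has already been pulled onto the node this way, pinning an explicit tag and an imagePullPolicy of IfNotPresent in the container spec should make kubelet use the local copy instead of attempting the slow pull again. A small sketch (not from the tutorial; IfNotPresent is already the default for any tag other than :latest, it is only spelled out here for clarity):
containers:
  - name: mssql
    image: mcr.microsoft.com/mssql/server:2017-latest
    # reuse the locally pre-pulled image rather than pulling from mcr again
    imagePullPolicy: IfNotPresent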

Connect to local SQL Server Express database from inside minikube cluster

I'm trying to access my SQL Server Express database hosted on my local machine from inside a minikube pod. I tried to follow the idea described in the official Kubernetes docs. While inspecting my container, I found out that my application crashes every time I create the Pod, because the application is unable to connect to the local database.
This is my config:
apiVersion: v1
kind: Service
metadata:
  name: sqlserver-svc
spec:
  ports:
    - protocol: TCP
      port: 1443
      targetPort: 1433
===========================
apiVersion: v1
kind: Endpoints
metadata:
  name: sqlserver-svc
subsets:
  - addresses:
      - ip: 192.168.0.101
    ports:
      - port: 1433
======== Application container ==========
apiVersion: v1
kind: Pod # object type that will reside in the Kubernetes cluster
metadata:
  name: api-pod
  labels:
    component: web
spec:
  containers:
    - name: api
      image: nayan2/simptekapi
      ports:
        - containerPort: 5000
      env:
        - name: DATABASE_HOST
          value: "sqlserver-svc:1443\\SQLEXPRESS"
        - name: DATABASE_PORT
          value: '1433'
        - name: DATABASE_USER
          value: sa
        - name: DATABASE_PW
          value: 1234
        - name: JWT_SECRET
          value: sec
        - name: NODE_ENV
          value: production
================================
apiVersion: v1
kind: Service
metadata:
  name: api-node-port
spec:
  type: NodePort
  ports:
    - port: 4200
      targetPort: 5000
      nodePort: 31515
  selector:
    component: web
It is obvious that I am doing something wrong. I am relatively new to Docker/containers and Kubernetes technology and still learning. Can anybody help me with this?
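For comparison, here is a minimal version of the Service-without-selector plus Endpoints pattern from the Kubernetes docs, with the same port used on both objects. This is only a generic sketch of the pattern, not a verified fix for the setup above; the Service name and host IP are copied from the question, and 1433 is used throughout simply because that is the port SQL Server Express listens on:
apiVersion: v1
kind: Service
metadata:
  name: sqlserver-svc
spec:
  ports:
    - protocol: TCP
      port: 1433        # port exposed to pods inside the cluster
      targetPort: 1433  # port the endpoint address listens on
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sqlserver-svc   # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.0.101
    ports:
      - port: 1433
A pod in the same namespace would then reach the database through the hostname sqlserver-svc on port 1433.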

Can not connect to SQL Server database hosted on localhost from Kubernetes, how can I debug this?

I am trying to deploy an ASP.NET Core 2.2 application in Kubernetes. This application is a simple web page that needs access to a SQL Server database to display some information. This database is hosted on my local development computer (localhost), and the web application is deployed in a minikube cluster to simulate the production environment, where my web application could be deployed in a cloud and access a remote database.
I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.
I assume that my connection string is correct, since my web application can connect to the SQL Server database when I deploy it on a local IIS server, in a docker container (docker run) or as a docker service (docker create service), but not when it is deployed in a Kubernetes cluster. I understand that the cluster is on a different network, so I tried to create a service without a selector as described in this question, but no luck... I even tried to change the connection string IP address to match that of the created service, but it failed too.
My firewall is setup to accept inbound connection to 1433 port.
My SQL Server database is configured to allow remote access.
Here is the connection string I use:
"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
And here is the file I use to deploy my web application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: <private_repo_url>/webapp:db
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 443
            - containerPort: 1433
      imagePullSecrets:
        - name: gitlab-auth
      volumes:
        - name: secrets
          secret:
            secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - name: port-80
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: port-443
      port: 443
      targetPort: 443
      nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
    - name: port-1433
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sql-server
  labels:
    app: webapp
subsets:
  - addresses:
      - ip: 172.24.144.1 # <-- IP of my local computer where SQL Server is running
    ports:
      - port: 1433
So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:
> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I am new to Kubernetes and I am not very comfortable with networking, so any help is welcome.
The best help would be some advice/tools to debug this, since I don't even know where or when the connection attempt is blocked...
Thank you!
What you consider the IP address of your host is a private IP on an internal network. It is possible that this IP address is the one that your machine uses on the "real" network you are using. The Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not accessible.
subsets:
  - addresses:
      - ip: 172.24.144.1 # <-- IP of my local computer where SQL Server is running
    ports:
      - port: 1433
You can connect via the DNS entry host.docker.internal.
Read more here and here for Windows.
I am not certain whether that works in minikube - there used to be a different DNS name for the host in the Linux/Windows implementations.
If you want to use the IP (bear in mind it would change eventually), you can probably track it down and ensure it is the one "visible" from within the virtual subnet.
PS: I am using the Kubernetes that comes with Docker now; it seems easier to work with.
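If the application would rather use a stable in-cluster name than host.docker.internal directly, an ExternalName Service can wrap it. This is only a sketch: host.docker.internal is the Docker Desktop name, and recent minikube versions expose the host to the VM as host.minikube.internal instead (which may need extra DNS configuration to resolve from inside pods):
apiVersion: v1
kind: Service
metadata:
  name: sql-server
spec:
  type: ExternalName
  # Docker Desktop host name; on minikube try host.minikube.internal instead
  externalName: host.docker.internal
The application can then point its connection string at sql-server (port 1433) regardless of which host name sits behind it.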
