How to connect a Kubernetes cluster to an external SQL Server

I have a Kubernetes cluster running multiple nodes. I want to connect my application (built on .NET) to an external SQL Server.
My connection string looks like this:
"ConnectionStrings": {
"DefaultConnection": "Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;"
},
I was able to connect to the DB environment by whitelisting each node IP so connections to the external SQL Server are allowed.
I can successfully telnet to the DB IP and port.
I added this Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True; to a ConfigMap on K8s.
In my service.yaml deployment I added the ConfigMap as an environment variable, and I was able to connect.
Instead of using a ConfigMap I'd love to use a Secret; how can I achieve this?
However, I know this connection isn't secured enough. What best practices can I use to achieve this connectivity?
How can I ensure that I do not have to whitelist each node IP whenever I add more k8s nodes?
apiVersion: v1
kind: ConfigMap
metadata:
  name: mssql-config
  namespace: dev
data:
  database_uri: Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;
Environment variables:
env:
  - name: Logging__LogLevel__Default
    value: Debug
  - name: Logging__LogLevel__Microsoft.AspNetCore
    value: Debug
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      configMapKeyRef:
        name: mssql-config
        key: database_uri
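For the Secret question: a minimal sketch of the same connection string stored in a Secret instead of a ConfigMap (the Secret name mssql-secret is an assumption; stringData lets you supply the value without base64-encoding it):
apiVersion: v1
kind: Secret
metadata:
  name: mssql-secret
  namespace: dev
type: Opaque
stringData:
  database_uri: Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;
The environment variable would then reference secretKeyRef instead of configMapKeyRef:
env:
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      secretKeyRef:
        name: mssql-secret
        key: database_uri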

Related

What are the correct /etc/exports settings for Kubernetes NFS Storage?

I have a simple NFS server (followed instructions here) connected to a Kubernetes (v1.24.2) cluster as a storage class. When a new PVC is created, it creates a PV as expected with a new directory on the NFS server.
The NFS provider was deployed as instructed here.
My issue is that containers don't seem to be able to perform all the functions they expect to when interacting with the NFS server. For example:
A PVC and PV are created with the following yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
This creates a directory on the NFS server as expected.
Then this deployment is created to use the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              value: "Password123"
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
The server comes up and responds to requests but does so with the error:
[S0002][823] com.microsoft.sqlserver.jdbc.SQLServerException: The operating system returned error 1117(The request could not be performed because of an I/O device error.) to SQL Server during a read at offset 0x0000000009a000 in file '/var/opt/mssql/data/master.mdf'. Additional messages in the SQL Server error log and operating system error log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
My /etc/exports file has the following contents:
/srv *(rw,no_subtree_check,no_root_squash)
When the SQL container starts, it doesn't undergo any container restarts, but the SQL service within the container appears to get into some sort of restart loop until a connection is attempted; it then throws the error and appears to stop.
Is there something I'm missing in the /etc/exports file? I tried variations with sync, async, and insecure but can't seem to get past the SQL error.
I gather from the error that this has something to do with the container's ability to read/write from/to the disk. Am I in the right ballpark?
The config that ended up working was:
/srv *(rw,no_root_squash,insecure,sync,no_subtree_check)
This was after a reinstall of the cluster. There were no significant changes elsewhere, but it still seems like there may have been more to the issue than this one setting.
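A quick sanity check (a sketch; the pod name is a placeholder) is to exec into the pod and write a test file on the mounted NFS volume, to confirm the container's user (fsGroup 10001 in the deployment above) can actually read and write it:
# find the pod created by the deployment
kubectl get pods -l app=mssql
# try writing to the mounted NFS volume from inside the container
kubectl exec -it <mssql-pod-name> -- touch /var/opt/mssql/data/write-test
kubectl exec -it <mssql-pod-name> -- ls -l /var/opt/mssql/data/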

Minikube Pod Connect to External Database

I have created a sample application which needs to run inside a Kubernetes cluster. For now I am trying to replicate the same environment on my local machine by using Minikube.
Here I have a .NET Core Web API service which needs to connect to an MSSQL database. The service is running inside the cluster but the database is on my local machine. I have also created a Service to reach my MSSQL database engine, which is outside the cluster but on my local machine.
Here is my configuration file.
My local IP address is 192.168.8.100.
apiVersion: v1
kind: Service
metadata:
  name: mssql
spec:
  ports:
    - protocol: TCP
      port: 3050
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
subsets:
  - addresses:
      - ip: "192.168.8.100"
    ports:
      - port: 1433
Pod Connection String
Server=mssql\MSSQLSERVER2017,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;
But with the above configuration it doesn't seem to work and throws a connection error.
Can someone please tell me what I am doing wrong?
Thanks in advance.
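A possible point to check (a sketch, not a confirmed fix): with a selector-less Service plus Endpoints like the one above, pods would normally address the database through the Service name and the Service port (mssql and 3050 here) rather than through a named instance, e.g.:
Server=mssql,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;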

Using NodePort to connect to Database

Situation:
I work at a big company and we have a k8s cluster.
We also have a database that is hosted somewhere else, outside of the cluster.
The database IP address and the cluster have bi-directional firewall clearance.
So applications that are hosted inside the cluster can connect to the database.
My machine does not have any clearance to the database, so I cannot test my app, write queries, and so on. That slows me down and forces me to go to the office whenever database operations are required.
Question:
Since I can connect to and deploy on the cluster, could I deploy a NodePort/Service/etc. that forwards traffic directly to the database?
That way it would "fool" the database into thinking the request comes from the cluster, when it actually comes from my machine at home.
Has anybody tried something like that?
Thanks in advance
You won't be able to set up a plain proxy that way; you would need an application in the cluster that receives requests and forwards them to your database.
An easier solution would be to deploy your app to the cluster (if possible) or deploy a test pod to the cluster (using a base image like Debian, BusyBox, or Alpine Linux; whichever image serves your needs). You can then connect to the pod (kubectl exec -it <podname>), for example as sketched below.
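A minimal sketch of such a test pod (the pod name db-test and the image are arbitrary choices, and the netcat install assumes the pod has internet access):
# start a disposable pod and open a shell in it
kubectl run db-test --rm -it --restart=Never --image=debian:bookworm-slim -- bash
# inside the pod: install a simple TCP client and check that the database port is reachable
apt-get update && apt-get install -y netcat-openbsd
nc -vz <database-ip> 1433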
You could try to use a NodePort Service without a selector and define your own Endpoints object pointing to the database IP address.
For instance:
vagrant#k8sMaster:~$ nslookup google.fr
Name: google.fr
Address: 216.58.207.131
echo '
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
' >> svc-no-sel.yaml
echo '
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 216.58.207.131
    ports:
      - port: 443
' >> ep-no-sel.yaml
k apply -f svc-no-sel.yaml
k apply -f ep-no-sel.yaml
Replace the Google IP/port with your database IP/port.
Then, in the given example, you can target the service with:
curl -k https://<node-ip>:<node-port>
Documentation on service without selector here: https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors

How do I connect a kubernetes cluster to an external SQL Server database using docker desktop?

I need to know how to connect my Kubernetes cluster to an external SQL Server database running in a docker image outside of the Kubernetes cluster.
I currently have two pods in my cluster that are running, each has a different image in it created from ASP.NET Core applications. There is a completely separate (outside of Kubernetes but running locally on my machine at localhost,1433) docker container that hosts a SQL Server database. I need the applications in my Kubernetes pods to be able to reach and manipulate that database. I have tried creating a YAML file and configuring different ports, but I do not know how to get this working, or how to test that it actually works after setting it up. I need the exact steps/commands to create a service capable of routing a connection from the images in my cluster to the DB and back.
Docker SQL Server creation (elevated powershell/docker desktop):
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest
definitions.yaml
#Pods in the cluster
apiVersion: v1
kind: Pod
metadata:
name: pod-1
labels:
app: podnet
type: module
spec:
containers:
- name: container1
image: username/image1
---
apiVersion: v1
kind: Pod
metadata:
name: pod-2
labels:
app: podnet
type: module
spec:
containers:
- name: container2
image: username/image2
---
#Service created in an attempt to contact external SQL Server DB
apiVersion: v1
kind: Service
metadata:
name: ext-sql-service
spec:
ports:
- port: 1433
targetPort: 1433
type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
name: ext-sql-service
subsets:
- addresses:
- ip: (Docker IP for DB Instance)
ports:
- port: 1433
Ideally I would like applications in my kubernetes cluster to be able to manipulate the SQL Server I already have set up (running outside of the cluster but locally on my machine).
When running from local docker, your connection string is NOT your local machine.
It is the local docker "world" that happens to be running on your machine.
host.docker.internal:1433
The above is the docker container talking to your local machine. Obviously, the port could be different based on how you exposed it.
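As a sketch of what that looks like in an ASP.NET Core connection string (the database name and credentials here are placeholders, not taken from the question):
"ConnectionStrings": {
  "DefaultConnection": "Server=host.docker.internal,1433;Database=MyDb;User Id=sa;Password=<password>;MultipleActiveResultSets=True;"
}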
......
If you're trying to get your running container to talk to sql-server which is ALSO running inside of the docker world, that connection string looks like:
ServerName:
my-mssql-service-deployment-name.$_CUSTOMNAMESPACENAME.svc.cluster.local
Where $_CUSTOMNAMESPACENAME is probably "default", but you may be running a different namespace.
my-mssql-service-deployment-name is a stub for the name of YOUR SQL Server Service (the cluster DNS entry is created for the Service, not the Deployment).
Note there is no port number here.
This is documented here:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
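For instance, a sketch of a connection string using such an in-cluster DNS name (the Service name my-mssql-service and the default namespace are assumptions, and the Service is assumed to listen on the default SQL Server port 1433):
Server=my-mssql-service.default.svc.cluster.local;Database=MyDb;User Id=sa;Password=<password>;MultipleActiveResultSets=True;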
The problem may be the kind of Service you used. ClusterIP only enables you to connect among pods inside the cluster.
To connect to an external service, you should change the Service type to NodePort.
Try changing the service definition:
#Service created in an attempt to contact external SQL Server DB
#Service created in an attempt to contact external SQL Server DB
apiVersion: v1
kind: Service
metadata:
  name: ext-sql-service
spec:
  type: NodePort
  ports:
    - port: 1433
      targetPort: 1433
and execute the command:
$ kubectl apply -f your_service_definition_file_name.yaml
Remember to run this command in the proper namespace, where your deployment is configured.
It is bad practice to overlay an environment variable onto the container and pass that environment variable's VALUE to the container with "docker run",
as in the docker command being executed here:
$ docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest
Putting the DB password in plain view is insecure. Use Kubernetes Secrets.
You can find more information here: kubernetes-secret.
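A minimal sketch of that (the secret name mssql-credentials is arbitrary):
kubectl create secret generic mssql-credentials --from-literal=SA_PASSWORD='aPasswordPassword'
The deployment can then read the password through an env entry with valueFrom / secretKeyRef instead of a literal value.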

Can not connect to SQL Server database hosted on localhost from Kubernetes, how can I debug this?

I am trying to deploy an ASP.NET Core 2.2 application in Kubernetes. This application is a simple web page that needs access to a SQL Server database to display some information. This database is hosted on my local development computer (localhost) and the web application is deployed in a Minikube cluster to simulate the production environment, where my web application could be deployed in a cloud and access a remote database.
I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.
I assume that my connection string is correct since my web application can connect to the SQL Server database when I deploy it on a local IIS server, in a docker container (docker run) or a docker service (docker service create), but not when it is deployed in a Kubernetes cluster. I understand that the cluster is on a different network, so I tried to create a service without a selector as described in this question, but no luck... I even tried to change the connection string IP address to match that of the created service, but it failed too.
My firewall is setup to accept inbound connection to 1433 port.
My SQL Server database is configured to allow remote access.
Here is the connection string I use:
"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
And here is the file I use to deploy my web application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: <private_repo_url>/webapp:db
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 443
            - containerPort: 1433
      imagePullSecrets:
        - name: gitlab-auth
      volumes:
        - name: secrets
          secret:
            secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - name: port-80
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: port-443
      port: 443
      targetPort: 443
      nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
    - name: port-1433
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sql-server
  labels:
    app: webapp
subsets:
  - addresses:
      - ip: 172.24.144.1  # IP of my local computer where SQL Server is running
    ports:
      - port: 1433
So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:
> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I am new to Kubernetes and I am not very comfortable with networking, so any help is welcome.
The most helpful thing would be some advice/tools to debug this, since I don't even know where or when the connection attempt is blocked...
Thank you!
What you consider the IP address of your host is a private IP on an internal network. It is possible that this IP address is the one that your machine uses on the "real" network you are on. The Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not accessible.
subsets:
  - addresses:
      - ip: 172.24.144.1  # IP of my local computer where SQL Server is running
    ports:
      - port: 1433
You can connect via the DNS entry host.docker.internal
Read more here and here for windows
I am not certain whether that works in Minikube - there used to be different DNS names for the Linux/Windows implementations of the host.
If you want to use the IP (bear in mind it would change eventually), you can probably track it down and ensure it is the one "visible" from within the virtual subnet.
PS: I am using the Kubernetes that ships with Docker Desktop now; it seems easier to work with.
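As a sketch, using the question's own database and credentials, that would mean a connection string along the lines of the following (whether host.docker.internal resolves depends on the Docker/Minikube setup, as noted above):
"Server=host.docker.internal,1433;Database=TestWebapp;User Id=user_name;Password=********;"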
