Minikube Pod Connect to External Database - sql-server

I have created a sample application which needs to be run inside a Kubernetes cluster. For now I am trying to replicate the same environment on my local machine using Minikube.
I have a .NET Core Web API service which needs to connect to an MSSQL database. The service is running inside the cluster, but the database is on my local machine. I have also created a Kubernetes Service to access the MSSQL database engine, which is outside the cluster but on my local machine.
Here is my configuration file.
My local IP address is 192.168.8.100.
apiVersion: v1
kind: Service
metadata:
  name: mssql
spec:
  ports:
    - protocol: TCP
      port: 3050
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
subsets:
  - addresses:
      - ip: "192.168.8.100"
    ports:
      - port: 1433
Pod Connection String
Server=mssql\MSSQLSERVER2017,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;
But with the above configuration it doesn't seem to work and throws a connection error.
Can someone please tell me where I am going wrong.
Thanks in advance.
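One detail worth double-checking, noted here as an observation rather than a confirmed fix: when a TCP port is given in a SqlClient connection string, the instance name is not used for the lookup, so connections through the Service above are usually addressed just by service name and service port. A minimal sketch of such a string (values taken from the question):

Server=mssql,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;

Whether that alone resolves the error also depends on SQL Server on the host listening on TCP 1433 on 192.168.8.100 and the local firewall allowing that traffic.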

Related

How to connect a Kubernetes cluster to an external SQL Server

I have a Kubernetes cluster running multiple nodes. I want to connect my application (built on .NET) to an external SQL Server.
My connection string looks like this:
"ConnectionStrings": {
"DefaultConnection": "Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;"
},
I was able to connect to the DB environment by whitelisting each node IP to allow connections to the external SQL Server.
I can successfully telnet to the DB IP and port.
I added this Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True; to a ConfigMap on K8s.
In my service.yaml deployment I added the ConfigMap as an environment variable, and was able to connect.
Instead of using a ConfigMap I'd love to use Secrets; how can I achieve this?
However, I know this connection isn't secured enough. What best practice can I use to achieve this connectivity?
How can I ensure that I don't have to whitelist each node IP whenever I add more k8s nodes?
apiVersion: v1
kind: ConfigMap
metadata:
  name: mssql-config
  namespace: dev
data:
  database_uri: Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;
Environment variables
env:
  - name: Logging__LogLevel__Default
    value: Debug
  - name: Logging__LogLevel__Microsoft.AspNetCore
    value: Debug
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      configMapKeyRef:
        name: mssql-config
        key: database_uri
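To sketch the Secrets variant asked about above (the Secret name and layout are illustrative, not from the original post): the same connection string can live in a Secret, and the env entry switches from configMapKeyRef to secretKeyRef.

apiVersion: v1
kind: Secret
metadata:
  name: mssql-secret
  namespace: dev
type: Opaque
stringData:
  database_uri: Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;

and in the container spec:

env:
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      secretKeyRef:
        name: mssql-secret
        key: database_uri

Keep in mind that Secrets are only base64-encoded by default, so for real credentials you would still want encryption at rest or an external secret store.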

Unable to see my react application after deploying into Kubernetes

I've followed this reference to deploy my simple react application into Kubernetes.
https://medium.com/bb-tutorials-and-thoughts/aws-deploying-react-app-with-nodejs-backend-on-eks-e5663cb5017f
But after deploying, I can't see my application in the browser.
So I tried to set an external IP address using this command:
kubectl patch svc XXX -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
Reference is here
Assign External IP to a Kubernetes Service
But I can't see my application deployed in the browser.
http://10.2.8.192:3000
Here is my deployment.yml file
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test-app
  name: test-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: test-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test-app
    spec:
      containers:
        - image: XXX.dkr.ecr.XXX.amazonaws.com/XXX/XXX:v1
          name: test-app
          imagePullPolicy: Always
          resources: {}
          ports:
            - containerPort: 3000
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    run: test-app
spec:
  ports:
    - port: 3000
      protocol: TCP
  selector:
    app: test-app
  type: NodePort
Please give me any advice. Thank you...
You are mixing two ways to expose a service externally: you want to use NodePort, but you are setting an ExternalIP.
Issue root cause
In your setup you are using NodePort, so you need to use the ExternalIP of the node together with the NodePort, which is 31300 (more details below).
Setting an ExternalIP on a NodePort service in this setup is pointless (also, 10.2.8.19 is an InternalIP, which only allows connections from inside the cluster).
In your example you are trying to reach the application using its application port number, but you should use the service's nodePort, which is 31300.
Note
A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
Exposing application
Generally, you have 3 main ways to expose your application:
NodePort
I don't have access to the Medium article, but I guess NodePort was used in this tutorial.
In this configuration you have to use type: NodePort. To connect using a NodePort, you have to use ExternalHostIP:NodePort. As you are using a cloud environment, your VM should already have an ExternalIP.
ExternalHostIP is the IP address of the node where the application pod was deployed. To get the ExternalIP of a node you can use this command:
$ kubectl get node -o wide
To find out which node a specific pod was deployed on, you can execute:
$ kubectl get po -o wide
The NodePort number is assigned from the range 30000-32767.
IMPORTANT
Please remember to configure the firewall to allow traffic on this specific port, or if it's just for testing you can allow the whole range 30000-32767.
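For EKS, one hedged way to do that is to open the NodePort range on the worker nodes' security group with the AWS CLI; the security group ID below is a placeholder you would look up yourself, and 0.0.0.0/0 is only sensible for a quick test:

$ aws ec2 authorize-security-group-ingress \
    --group-id <node-security-group-id> \
    --protocol tcp \
    --port 30000-32767 \
    --cidr 0.0.0.0/0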
Example
Let's say your application pod was deployed on Node with ExternalIP: 35.228.76.198 and your service NodePort is 31300.
If you configured the firewall rules correctly and the right containerPort was set (your application must listen on this port), then when you use 35.228.76.198:31300 in the browser you should reach your application.
LoadBalancer
In this option the service is of type LoadBalancer, which means the cloud provider creates a load balancer with an ExternalIP. You just need to enter this IP in your browser to reach your application. However, please remember that a LoadBalancer usually costs extra.
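A minimal sketch of that variant for the test-app Service from the question (exposing the app's port 3000 behind port 80 is an assumption about the desired URL, not something the question states):

apiVersion: v1
kind: Service
metadata:
  name: test-app
spec:
  type: LoadBalancer
  selector:
    app: test-app
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP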
Ingress
In this option you have to use some kind of Ingress controller; the most common is the NGINX Ingress Controller. Depending on your needs, you can expose the Ingress controller itself as a NodePort or LoadBalancer service. A rough Ingress sketch follows below.
Useful links
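A rough sketch of an Ingress for the same test-app Service, assuming an NGINX Ingress controller is already installed in the cluster (the ingressClassName and path are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-app
                port:
                  number: 3000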
Service
Exposing application in AWS
Nginx Ingress on AWS
Please let me know if you were able to reach your application or if you have further questions.
If you want to expose your application with a NodePort you can have a look at How do I expose the Kubernetes services running on my Amazon EKS cluster?:
It looks like your Service is missing an explicit targetPort.
kubectl get nodes -o wide should return the NodeIP.
NodeIP:NodePort should be reachable if you enable the security group of the nodes to allow incoming traffic through port 31300.
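A minimal sketch of that Service with the targetPort and nodePort written out explicitly (31300 is the NodePort discussed above; treat it as illustrative):

apiVersion: v1
kind: Service
metadata:
  name: test-app
spec:
  type: NodePort
  selector:
    app: test-app
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 31300
      protocol: TCP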

Using NodePort to connect to Database

Situation:
I work at big company and we have a k8s cluster.
We also have a database that is hosted somewhere else outside of the cluster.
The database IP address and the cluster have bi-directional FW clearance.
So applications that are hosted inside the cluster can connect to the database.
My machine does not have any clearance to the database. I cannot test my app, write queries, and so on. That slows me down and forces me to go into the office whenever any database operations are required.
Question:
Since I can connect to and deploy on the cluster, could I deploy a NodePort Service (or similar) which forwards traffic directly to the database?
That way it would "fool" the database into thinking the request comes from the cluster, when it actually comes from my machine at home.
Has anybody tried something like that?
Thanks in advance
You won't be able to set up a proxy that way unless you have an application that receives requests and forwards them to your database.
An easier solution would be to deploy your app to the cluster (if possible) or deploy a test pod to the cluster (using a base image like debian, busybox, or alpine linux; whichever image serves your needs). You can then connect to the pod (kubectl exec -it podname -- sh), as sketched below.
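A rough sketch of that test-pod approach, with the database host and port left as placeholders (note that nc flag support varies between busybox/alpine builds; telnet is an alternative):

$ kubectl run dbtest --rm -it --image=alpine --restart=Never -- sh
/ # nc -vz <database-host> <database-port>
/ # exit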
You could try to use a NodePort Service without a selector and define your own Endpoints object pointing to the database IP address.
For instance:
vagrant#k8sMaster:~$ nslookup google.fr
Name: google.fr
Address: 216.58.207.131
echo '
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
' >> svc-no-sel.yaml
echo '
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 216.58.207.131
    ports:
      - port: 443
' >> ep-no-sel.yaml
k apply -f svc-no-sel.yaml
k apply -f ep-no-sel.yaml
Where you replace the Google IP/port with your database IP/port.
Then in the given example you can target the service by doing
curl -k https://<node-ip>:<node-port>
Documentation on service without selector here: https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors

How do I connect a kubernetes cluster to an external SQL Server database using docker desktop?

I need to know how to connect my Kubernetes cluster to an external SQL Server database running in a docker image outside of the Kubernetes cluster.
I currently have two pods in my cluster that are running, each has a different image in it created from asp.net core applications. There is a completely separate (outside of Kubernetes but running locally on my machine localhost,1433) docker image that hosts a SQL Server database. I need the applications in my Kubernetes pods to be able to reach and manipulate that database. I have tried creating a YAML file and configuring different ports but I do not know how to get this working, or how to test that it actually is working after setting it up. I need the exact steps/commands to create a service capable of routing a connection from the images in my cluster to the DB and back.
Docker SQL Server creation (elevated powershell/docker desktop):
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest
definitions.yaml
# Pods in the cluster
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  labels:
    app: podnet
    type: module
spec:
  containers:
    - name: container1
      image: username/image1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
  labels:
    app: podnet
    type: module
spec:
  containers:
    - name: container2
      image: username/image2
---
# Service created in an attempt to contact external SQL Server DB
apiVersion: v1
kind: Service
metadata:
  name: ext-sql-service
spec:
  ports:
    - port: 1433
      targetPort: 1433
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: ext-sql-service
subsets:
  - addresses:
      - ip: (Docker IP for DB Instance)
    ports:
      - port: 1433
Ideally I would like applications in my kubernetes cluster to be able to manipulate the SQL Server I already have set up (running outside of the cluster but locally on my machine).
When running from local Docker, your connection string is NOT your local machine.
It is the local Docker "world" that happens to be running on your machine.
host.docker.internal:1433
The above is a Docker container talking to your local machine. Obviously, the port could be different based on how you exposed it.
......
If you're trying to get your running container to talk to sql-server which is ALSO running inside of the docker world, that connection string looks like:
ServerName:
my-mssql-service-deployment-name.$_CUSTOMNAMESPACENAME.svc.cluster.local
Where $_CUSTOMNAMESPACENAME is probably "default", but you may be running a different namespace.
my-mssql-service-deployment-name is the name of YOUR mssql Service (I have it stubbed here)
Note there is no port number here.
This is documented here:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
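Hedged sketches of what those two connection strings could look like for the setup in this question (the database name is a placeholder, the SA password comes from the docker run command above, and the service name is the stub used above):

Pod to SQL Server running on the host (Docker Desktop):
Server=host.docker.internal,1433;Database=<your-db>;User Id=sa;Password=aPasswordPassword;

Pod to SQL Server running as a Service inside the cluster:
Server=my-mssql-service-deployment-name.default.svc.cluster.local;Database=<your-db>;User Id=sa;Password=aPasswordPassword;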
The problem may be in the kind of Service you used. ClusterIP only enables connections among pods inside the cluster.
To connect to an external service, you should just change the Service type to NodePort.
Try changing the service definition:
# Service created in an attempt to contact external SQL Server DB
apiVersion: v1
kind: Service
metadata:
  name: ext-sql-service
spec:
  type: NodePort
  ports:
    - port: 1433
      targetPort: 1433
and execute the command:
$ kubectl apply -f your_service_definition_file_name.yaml
Remember to run this command in the proper namespace, where your deployment is configured.
It is bad practice to overlay an environment variable onto the container and pass that environment variable's VALUE to the container with "docker run".
Of course, this is in the context of executing the docker command:
$ docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest
Putting the DB password in plain sight is insecure. Use Kubernetes Secrets.
You can find more information here: kubernetes-secret.
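A minimal sketch of that suggestion, assuming the SQL Server container were run as a Kubernetes workload (the secret name is illustrative): create the Secret, then reference it from the pod spec instead of passing the password on the command line.

$ kubectl create secret generic mssql-sa --from-literal=SA_PASSWORD='aPasswordPassword'

env:
  - name: ACCEPT_EULA
    value: "Y"
  - name: SA_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mssql-sa
        key: SA_PASSWORD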

Can not connect to SQL Server database hosted on localhost from Kubernetes, how can I debug this?

I am trying to deploy an asp.net core 2.2 application in Kubernetes. This application is a simple web page that need an access to an SQL Server database to display some information. This database is hosted on my local development computer (localhost) and the web application is deployed in a minikube cluster to simulate the production environment where my web application could be deployed in a cloud and access a remote database.
I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.
I assume that my connection string is correct since my web application can connect to the SQL Server database when I deploy it on an IIS local server, in a docker container (docker run) or a docker service (docker create service) but not when it is deployed in a Kubernetes cluster. I understand that the cluster is in a different network so I tried to create a service without selector as described in this question, but no luck... I even tried to change the connection string IP address to match the one of the created service but it failed too.
My firewall is setup to accept inbound connection to 1433 port.
My SQL Server database is configured to allow remote access.
Here is the connection string I use:
"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
And here is the file I use to deploy my web application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: <private_repo_url>/webapp:db
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 443
            - containerPort: 1433
      imagePullSecrets:
        - name: gitlab-auth
      volumes:
        - name: secrets
          secret:
            secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - name: port-80
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: port-443
      port: 443
      targetPort: 443
      nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
    - name: port-1433
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sql-server
  labels:
    app: webapp
subsets:
  - addresses:
      - ip: 172.24.144.1 # <-- IP of my local computer where SQL Server is running
    ports:
      - port: 1433
So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:
> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I am new to Kubernetes and I am not very comfortable with networking, so any help is welcome.
The best help would be some advice/tools to debug this, since I don't even know where or when the connection attempt is blocked...
Thank you!
What you consider the IP address of your host is a private IP on an internal network. It is possible that this IP address is the one your machine uses on the "real" network you are connected to, but the Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not accessible from inside the cluster.
subsets:
  - addresses:
      - ip: 172.24.144.1 # <-- IP of my local computer where SQL Server is running
    ports:
      - port: 1433
You can connect via the DNS entry host.docker.internal
Read more here and here for Windows.
I am not certain if that works in Minikube; there used to be a different DNS name for the host between the Linux and Windows implementations.
If you want to use the IP (bear in mind it will change eventually), you can probably track it down and ensure it is the one "visible" from within the virtual subnet.
PS: I am using the Kubernetes that comes with Docker now; it seems easier to work with.
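As for debugging tools: a rough sketch of how to narrow down where the connection is blocked, using the host IP from the question (nc availability and flag support inside the Minikube VM and the busybox image can vary):

# from the Minikube node itself
$ minikube ssh
$ nc -vz 172.24.144.1 1433
$ exit

# from a throwaway pod inside the cluster, via the IP and via the selector-less Service
$ kubectl run nettest --rm -it --image=busybox --restart=Never -- nc -vz 172.24.144.1 1433
$ kubectl run nettest --rm -it --image=busybox --restart=Never -- nc -vz sql-server 1433

If the node can reach the port but a pod cannot, the issue is in the cluster networking or the Service/Endpoints wiring; if neither can, look at the host firewall and the SQL Server TCP/IP listener configuration.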
