helm postgres connection - unable to connect - database

I have uploaded some code on github: https://github.com/darkcloudi/helm-camunda-postgres
Running the following commands deploys the two charts. (Note: the --set is required to allow the Postgres DB to be deployed; I've disabled it by default since Camunda ships with its own DB, and I'm trying to configure it to use Postgres.)
helm install dev ./camunda-install --set tags.postgres=true
You will see that it's all looking good:
NAME READY STATUS RESTARTS AGE
pod/dev-camunda-67f487dcd-wjdfr 1/1 Running 0 36m
pod/dev-camunda-test-connection 0/1 Completed 0 45h
pod/postgres-86c565898d-h5tf2 1/1 Running 0 36m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dev-camunda NodePort 10.106.239.96 <none> 8080:30000/TCP 36m
service/dev-postgres NodePort 10.108.235.106 <none> 5432:30001/TCP 36m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d19h
If I use either the 10.108.x.x IP or the minikube IP 192.168.64.2 I get the same error below. I can connect to Tomcat using http://camunda.minikube.local/ or http://192.168.64.2:30000/, so I was wondering where I might be going wrong when attempting to connect to Postgres.
kubectl exec -it postgres-86c565898d-h5tf2 -- psql -h 10.108.235.106 -U admin --password -p 30001 camunda
Password:
psql: error: could not connect to server: could not connect to server: Connection timed out
Is the server running on host "10.108.235.106" and accepting
TCP/IP connections on port 30001?
kubectl describe svc dev-postgres
Name: dev-postgres
Namespace: default
Labels: app.kubernetes.io/managed-by=Helm
name=dev-postgres
Annotations: meta.helm.sh/release-name: dev
meta.helm.sh/release-namespace: default
Selector: app=dev-postgres,name=dev-postgres
Type: NodePort
IP: 10.108.235.106
Port: postgres-http 5432/TCP
TargetPort: 5432/TCP
NodePort: postgres-http 30001/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
https://github.com/darkcloudi/helm-camunda-postgres/blob/master/camunda-install/charts/postgres/templates/postgres.yaml

Since you are accessing it from within the cluster, you should use the ClusterIP 10.108.235.106 and port 5432.
If you want to access it from outside the cluster, then you can use the node IP 192.168.64.2 and NodePort 30001.
Port 30001 is listening on the node VM and the container is listening on port 5432, so you cannot access it via port 30001 from within the cluster.
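As a quick illustration (a sketch, assuming the service endpoints are populated; the psql flags and the admin/camunda credentials are taken from your own command above):
# from inside the cluster (e.g. from the postgres pod), target the service name or ClusterIP on 5432
kubectl exec -it postgres-86c565898d-h5tf2 -- psql -h dev-postgres -p 5432 -U admin --password camunda
# from outside the cluster (your host), target the node IP and the NodePort
psql -h 192.168.64.2 -p 30001 -U admin --password camunda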
Edit:
The Endpoints list on the service is empty. This is because the service's label selector selects pods with the labels app=dev-postgres,name=dev-postgres, but the pods don't carry those labels.
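To confirm and fix this, one option (a sketch; adjust to whatever labels your chart actually renders) is to compare the pod labels with the service selector and make them match:
# see what labels the postgres pod actually has, and what the service selects
kubectl get pods --show-labels
kubectl describe svc dev-postgres
# then either change the service selector in postgres.yaml to match the pod labels,
# or add the expected labels to the pod template, e.g.:
# template:
#   metadata:
#     labels:
#       app: dev-postgres
#       name: dev-postgres
Once the labels match, kubectl describe svc dev-postgres should show the pod IP under Endpoints, and the psql commands above should work.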

Related

How to connect remotely to SQL Server Instance Running in Minikube k8s cluster from SSMS?

I have a Windows 10 bare-metal machine running an Ubuntu 20 virtual machine with VirtualBox.
The Ubuntu VM runs a minikube cluster (v1.25.2 with the podman driver) on which a SQL Server Linux instance is deployed with the following resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: mcr.microsoft.com/mssql/server:2019-latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1433
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: MSSQL_SA_PASSWORD
---
apiVersion: v1
kind: Secret
metadata:
  name: mssql
type: Opaque
data:
  MSSQL_SA_PASSWORD: PFlvdXJTdHJvbmchUGFzc3cwcmQ+
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
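As an aside, the MSSQL_SA_PASSWORD value in the Secret above is simply the base64 encoding of the plain-text password used later with sqlcmd.exe; it can be reproduced with:
echo -n '<YourStrong!Passw0rd>' | base64
# PFlvdXJTdHJvbmchUGFzc3cwcmQ+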
By using minikube tunnel, I am able to expose a LoadBalancer service with an external IP inside the VM, and I can connect successfully with sqlcmd to the SQL Server instance from inside the Linux VM using the LoadBalancer external IP.
The Ubuntu VM is configured with a NAT network interface, with port 1433 on the VM mapped to port 1433 on the Windows host.
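To make the path explicit, my reading of the setup above is that a connection has to traverse these hops (names and addresses are illustrative):
SSMS (Windows) -> 127.0.0.1:1433 -> VirtualBox NAT port forward -> Ubuntu VM:1433 -> minikube tunnel -> LoadBalancer external IP:1433 -> pod:1433
A quick sanity check inside the VM is to confirm that something is actually listening on 0.0.0.0:1433 (and not only on the tunnel's address), for example:
ss -tlnp | grep 1433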
Whenever I try to connect with SSMS from the Windows host machine I get the following error:
TITLE: Connect to Server
------------------------------
Cannot connect to 127.0.0.1.
------------------------------
ADDITIONAL INFORMATION:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (Microsoft SQL Server, Error: 64)
For help, click: https://learn.microsoft.com/sql/relational-databases/errors-events/mssqlserver-64-database-engine-error
------------------------------
The specified network name is no longer available
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (.Net SqlClient Data Provider)
In addition, I get the same error with sqlcmd.exe from the Windows host:
SQLCMD.EXE -S 127.0.0.1 -U sa -P "<YourStrong!Passw0rd>"
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection because an error was encountered during handshakes before login. Common causes include client attempting to connect to an unsupported version of SQL Server, server too busy to accept new connections or a resource limitation (memory or maximum allowed connections) on the server..
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : TCP Provider: An existing connection was forcibly closed by the remote host.
.
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection.
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection due to prelogin failure.
The connection does not time out; rather, it looks like it is interrupted by something.
A lot of resources on the internet related to error 64 seem to point to firewall misconfigurations or DNS issues.
Note that I tried the following:
I am connecting to the instance via 127.0.0.1 from the Windows host (so DNS issues are irrelevant).
Ensured that port 1433 is free on the host machine.
Created a firewall rule (Windows Firewall) to allow outbound connections to port 1433.
Port-forwarded to the pod with kubectl port-forward, but got the same issue.
Tried to set the session timeout for LanManWorkstation as suggested here, without success.
What am I missing?

Minikube Pod Connect to External Database

I have created a sample application which needs to be run inside a Kubernetes cluster. For now I am trying to replicate the same environment on my local machine by using minikube.
Here I have a .NET Core WebAPI service which needs to connect to an MSSQL database. The service is running inside the cluster but the database is on my local machine. I have also created a Kubernetes Service to reach my MSSQL database engine, which is outside the cluster but on my local machine.
Here is my configuration file.
My local IP address is 192.168.8.100.
apiVersion: v1
kind: Service
metadata:
  name: mssql
spec:
  ports:
    - protocol: TCP
      port: 3050
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
subsets:
  - addresses:
      - ip: "192.168.8.100"
    ports:
      - port: 1433
Pod Connection String
Server=mssql\MSSQLSERVER2017,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;
But with the above configuration it doesn't seem to work and throws a connection error.
Can someone please tell me where I am going wrong?
Thanks in advance.

Using NodePort to connect to Database

Situation:
I work at a big company and we have a k8s cluster.
We also have a database that is hosted somewhere else, outside of the cluster.
The database IP address and the cluster have bi-directional firewall clearance,
so applications that are hosted inside the cluster can connect to the database.
My machine does not have any clearance to the database. I cannot test my app, write queries and so on. That slows me down and forces me to go to the office if any database operations are required.
Question:
Since I can connect and deploy to the cluster, could I deploy a NodePort/Service/etc. which forwards traffic directly to the database?
That way it would "fool" the database into thinking the request comes from the cluster, when it actually comes from my machine at home.
Has anybody tried something like that?
Thanks in advance
You won't be able to set up a proxy that way on its own; you would need an application that receives requests and forwards them to your database.
An easier solution would be to deploy your app to the cluster (if possible) or deploy a test pod to the cluster (using a base image like Debian, BusyBox, or Alpine Linux, whichever image serves your needs). You can then connect to the pod (kubectl exec -it podname).
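For example (a sketch, assuming a Postgres database and standard kubectl/psql tooling; adjust the image and client to your actual database):
# start a throwaway pod inside the cluster and open a shell in it
kubectl run dbtest --rm -it --image=alpine -- sh
# inside the pod: install a client and connect to the database from within the cluster's network
apk add --no-cache postgresql-client
psql -h <database-ip> -p 5432 -U <user> <dbname>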
You could try to use a NodePort service without a selector and define your own Endpoints object pointing to the database IP address.
For instance:
vagrant#k8sMaster:~$ nslookup google.fr
Name: google.fr
Address: 216.58.207.131
echo '
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
' >> svc-no-sel.yaml
echo '
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 216.58.207.131
    ports:
      - port: 443
' >> ep-no-sel.yaml
k apply -f svc-no-sel.yaml
k apply -f ep-no-sel.yaml
Where you replace the Google IP/port with your database IP/port.
Then, in the given example, you can target the service with:
curl -k https://<node-ip>:<node-port>
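For a database instead of HTTPS, the same pattern would look something like this (hypothetical values; use your database's port and client, e.g. 5432 for Postgres or 1433 for SQL Server in the Service/Endpoints above):
psql -h <node-ip> -p <node-port> -U <user> <dbname>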
Documentation on service without selector here: https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors

How do I connect a kubernetes cluster to an external SQL Server database using docker desktop?

I need to know how to connect my Kubernetes cluster to an external SQL Server database running in a docker image outside of the Kubernetes cluster.
I currently have two pods running in my cluster, each with a different image created from an ASP.NET Core application. There is a completely separate docker image (outside of Kubernetes but running locally on my machine at localhost,1433) that hosts a SQL Server database. I need the applications in my Kubernetes pods to be able to reach and manipulate that database. I have tried creating a YAML file and configuring different ports, but I do not know how to get this working, or how to test that it is actually working after setting it up. I need the exact steps/commands to create a service capable of routing a connection from the images in my cluster to the DB and back.
Docker SQL Server creation (elevated powershell/docker desktop):
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest
definitions.yaml
#Pods in the cluster
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  labels:
    app: podnet
    type: module
spec:
  containers:
    - name: container1
      image: username/image1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
  labels:
    app: podnet
    type: module
spec:
  containers:
    - name: container2
      image: username/image2
---
#Service created in an attempt to contact external SQL Server DB
apiVersion: v1
kind: Service
metadata:
  name: ext-sql-service
spec:
  ports:
    - port: 1433
      targetPort: 1433
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: ext-sql-service
subsets:
  - addresses:
      - ip: (Docker IP for DB Instance)
    ports:
      - port: 1433
Ideally I would like the applications in my Kubernetes cluster to be able to manipulate the SQL Server instance I already have set up (running outside of the cluster but locally on my machine).
When running from local Docker, your connection string is NOT your local machine.
It is the local Docker "world" that happens to be running on your machine.
host.docker.internal:1433
The above is the docker container talking to your local machine. Obviously, the port could be different based on how you exposed it.
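So a connection string from a container to the SQL Server on your machine would look something like this (a sketch; the database name and credentials are placeholders):
Server=host.docker.internal,1433;Database=<your-db>;User Id=sa;Password=<your-password>;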
......
If you're trying to get your running container to talk to sql-server which is ALSO running inside of the docker world, that connection string looks like:
ServerName:
my-mssql-service-deployment-name.$_CUSTOMNAMESPACENAME.svc.cluster.local
Where $_CUSTOMNAMESPACENAME is probably "default", but you may be running a different namespace.
my-mssql-service-deployment-name is the name of YOUR deployment (I have it stubbed here)
Note there is no port number here.
This is documented here:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
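Putting that together, a connection string using the in-cluster DNS name would look roughly like this (again a sketch, with the stubbed service name, the "default" namespace, and no port, as noted above):
Server=my-mssql-service-deployment-name.default.svc.cluster.local;Database=<your-db>;User Id=sa;Password=<your-password>;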
The problem may be in the kind of Service you used. ClusterIP only enables you to connect among pods inside the cluster.
To connect to an external service, you should change the Service kind to NodePort.
Try changing the service definition:
#Service created in an attempt to contact external SQL Server DB
apiVersion: v1
kind: Service
metadata:
  name: ext-sql-service
spec:
  type: NodePort
  ports:
    - port: 1433
      targetPort: 1433
and execute the command:
$ kubectl apply -f your_service_definition_file_name.yaml
Remember to run this command in the proper namespace, where your deployment is configured.
It is also bad practice to overlay an environment variable onto the container and pass that environment variable's VALUE with docker run, as in the command you are executing:
$ docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest
Leaving the DB password visible like this is insecure. Use Kubernetes Secrets instead.
You can find more information here: kubernetes-secret.
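A minimal sketch of that approach (names are illustrative), following the same Secret-plus-secretKeyRef pattern shown in the minikube question earlier in this page:
apiVersion: v1
kind: Secret
metadata:
  name: mssql
type: Opaque
data:
  SA_PASSWORD: YVBhc3N3b3JkUGFzc3dvcmQ=   # base64 of "aPasswordPassword"
and in the pod spec:
  env:
    - name: SA_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mssql
          key: SA_PASSWORD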

Can not connect to SQL Server database hosted on localhost from Kubernetes, how can I debug this?

I am trying to deploy an ASP.NET Core 2.2 application in Kubernetes. This application is a simple web page that needs access to a SQL Server database to display some information. This database is hosted on my local development computer (localhost) and the web application is deployed in a minikube cluster to simulate the production environment, where my web application could be deployed in a cloud and access a remote database.
I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database, hosted on my local computer, from inside the cluster.
I assume that my connection string is correct, since my web application can connect to the SQL Server database when I deploy it on a local IIS server, in a docker container (docker run) or a docker service (docker create service), but not when it is deployed in a Kubernetes cluster. I understand that the cluster is on a different network, so I tried to create a service without a selector as described in this question, but no luck... I even tried to change the connection string IP address to match that of the created service, but it failed too.
My firewall is set up to accept inbound connections on port 1433.
My SQL Server database is configured to allow remote access.
Here is the connection string I use:
"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
And here is the file I use to deploy my web application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: <private_repo_url>/webapp:db
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 443
            - containerPort: 1433
      imagePullSecrets:
        - name: gitlab-auth
      volumes:
        - name: secrets
          secret:
            secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - name: port-80
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: port-443
      port: 443
      targetPort: 443
      nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
    - name: port-1433
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sql-server
  labels:
    app: webapp
subsets:
  - addresses:
      - ip: 172.24.144.1   # IP of my local computer where SQL Server is running
    ports:
      - port: 1433
So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:
> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I am new to Kubernetes and I am not very comfortable with networking, so any help is welcome.
The best help would be some advice/tools to debug this, since I don't even know where or when the connection attempt is blocked...
Thank you!
What you consider the IP address of your host is a private IP on an internal network. It is possible that this IP address is the one that your machine uses on the "real" network you are on. The Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not accessible from it.
subsets:
  - addresses:
      - ip: 172.24.144.1   # IP of my local computer where SQL Server is running
    ports:
      - port: 1433
You can connect via the DNS entry host.docker.internal.
Read more here and here for Windows.
I am not certain whether that works in minikube - there used to be a different DNS name for the Linux and Windows implementations of the host.
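If host.docker.internal does resolve from inside your cluster, a connection string along these lines (a sketch reusing your example values) would avoid hard-coding the IP:
"Server=host.docker.internal\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"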
If you want to use the IP (bear in mind it could change eventually), you can probably track it down and ensure it is the one "visible" from within the virtual subnet.
PS: I am using the Kubernetes that ships with Docker now; it seems easier to work with.
