Unable to Connect to SQL Server from Istio Envoy Proxy - sql-server

I am using Istio with Envoy as the sidecar proxy. I have deployed the bookinfo sample and it works fine, but when I deploy my own application, which calls SQL Server and other external services over HTTPS or raw TCP, it throws this exception:
A connection was successfully established with the server, but then an
error occurred during the pre-login handshake. (provider: TCP
Provider, error: 35 - An internal exception was caught)

To let Istio applications communicate with external TCP services,
check this blog post https://istio.io/latest/blog/2018/egress-tcp/.
To let Istio applications communicate with external HTTP and TLS services, check https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/.

I faced the same issue connecting to SQL Server from my application, which I have deployed in an Istio-enabled namespace. I created a ServiceEntry as shown below to allow access.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: sql-replica
spec:
  hosts:
  - SQL-DNS-NAME or IP
  addresses:
  - xxx.xx.x.xxx/32
  ports:
  - number: 5432
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
In the config above, xxx.xx.x.xxx is the IP address returned when pinging the DNS name.
$ kubectl apply -f access-sql-server-from-mesh.yaml
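To confirm that the ServiceEntry was created and that the sidecar picked up an outbound cluster for the external port, something along these lines can be used (the pod name here is a placeholder, not taken from the question):

kubectl get serviceentry sql-replica -o yaml
istioctl proxy-config cluster <your-app-pod> | grep 5432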

Related

How to connect remotely to SQL Server Instance Running in Minikube k8s cluster from SSMS?

I have a Windows 10 bare-metal machine running an Ubuntu 20 virtual machine with VirtualBox.
The Ubuntu VM runs a minikube cluster (v1.25.2 with podman driver) on which a SQL Server Linux instance is deployed with the following resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: mcr.microsoft.com/mssql/server:2019-latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: MSSQL_SA_PASSWORD
---
apiVersion: v1
kind: Secret
metadata:
  name: mssql
type: Opaque
data:
  MSSQL_SA_PASSWORD: PFlvdXJTdHJvbmchUGFzc3cwcmQ+
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
  type: LoadBalancer
Using minikube tunnel, I can expose the LoadBalancer service with an external IP inside the VM, and from inside the Linux VM I can connect successfully to the SQL Server instance with sqlcmd via that external IP.
The Ubuntu VM is configured with a NAT network interface, with port 1433 on the VM mapped to port 1433 on the Windows host.
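For reference, the same NAT rule can be created from the Windows host with VBoxManage; a minimal sketch, assuming the VM is named ubuntu-vm and uses NAT adapter 1 (modifyvm requires the VM to be powered off; controlvm natpf1 adds the rule to a running VM):

VBoxManage modifyvm "ubuntu-vm" --natpf1 "mssql,tcp,,1433,,1433"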
Whenever I try to connect with SSMS from the Windows host machine I get the following error:
TITLE: Connect to Server
------------------------------
Cannot connect to 127.0.0.1.
------------------------------
ADDITIONAL INFORMATION:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (Microsoft SQL Server, Error: 64)
For help, click: https://learn.microsoft.com/sql/relational-databases/errors-events/mssqlserver-64-database-engine-error
------------------------------
The specified network name is no longer available
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (.Net SqlClient Data Provider)
In addition, I get the same error with sqlcmd.exe from the Windows host:
SQLCMD.EXE -S 127.0.0.1 -U sa -P "<YourStrong!Passw0rd>"
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection because an error was encountered during handshakes before login. Common causes include client attempting to connect to an unsupported version of SQL Server, server too busy to accept new connections or a resource limitation (memory or maximum allowed connections) on the server..
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : TCP Provider: An existing connection was forcibly closed by the remote host.
.
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection.
Sqlcmd: Error: Microsoft ODBC Driver 17 for SQL Server : Client unable to establish connection due to prelogin failure.
The connection does not time out; rather, it looks like it is interrupted by something.
A lot of resources on the internet related to error 64 seem to point to firewall misconfigurations or DNS issues.
Note that I have already tried the following:
- I am connecting to the instance via 127.0.0.1 from the Windows host, so DNS issues are irrelevant.
- Ensured that port 1433 is free on the host machine.
- Created a Windows Firewall rule to allow outbound connections to port 1433.
- Port-forwarded to the pod with kubectl port-forward (see the sketch after this list), but hit the same issue.
- Tried to set the session timeout for LanManWorkstation as suggested here, without success.
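On the kubectl port-forward attempt: by default port-forward binds to 127.0.0.1 inside the VM, so the Windows host still cannot reach it. A minimal sketch that widens the bind address, assuming the Deployment name app from the manifest above:

kubectl port-forward --address 0.0.0.0 deployment/app 1433:1433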
What am I missing?

helm postgres connection - unable to connect

I have uploaded some code on github: https://github.com/darkcloudi/helm-camunda-postgres
Running the following command deploys the two charts (note that the --set flag is required to deploy the Postgres DB; I've disabled it by default, since Camunda comes with its own DB and I'm trying to configure it to use Postgres):
helm install dev ./camunda-install --set tags.postgres=true
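For context, a tags.postgres switch like this is normally wired up through Helm's dependency tags; a minimal sketch of how such a gate is typically declared in the parent chart (the field values below are assumptions, not taken from the linked repo):

# requirements.yaml (or the dependencies block of Chart.yaml in Helm 3)
dependencies:
- name: postgres
  version: 0.1.0
  repository: file://charts/postgres
  tags:
  - postgres

# values.yaml
tags:
  postgres: false   # off by default; enabled with --set tags.postgres=true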
You will see that it's all looking good:
NAME                              READY   STATUS      RESTARTS   AGE
pod/dev-camunda-67f487dcd-wjdfr   1/1     Running     0          36m
pod/dev-camunda-test-connection   0/1     Completed   0          45h
pod/postgres-86c565898d-h5tf2     1/1     Running     0          36m

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/dev-camunda    NodePort    10.106.239.96    <none>        8080:30000/TCP   36m
service/dev-postgres   NodePort    10.108.235.106   <none>        5432:30001/TCP   36m
service/kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP          4d19h
Whether I use the 10.108.x.x ClusterIP or the minikube IP 192.168.64.2, I get the same error below. I can connect to Tomcat using http://camunda.minikube.local/ or http://192.168.64.2:30000/, so I am wondering where I might be going wrong when attempting to connect to Postgres.
kubectl exec -it postgres-86c565898d-h5tf2 -- psql -h 10.108.235.106 -U admin --password -p 30001 camunda
Password:
psql: error: could not connect to server: could not connect to server: Connection timed out
Is the server running on host "10.108.235.106" and accepting
        TCP/IP connections on port 30001?
kubectl describe svc dev-postgres
Name: dev-postgres
Namespace: default
Labels: app.kubernetes.io/managed-by=Helm
name=dev-postgres
Annotations: meta.helm.sh/release-name: dev
meta.helm.sh/release-namespace: default
Selector: app=dev-postgres,name=dev-postgres
Type: NodePort
IP: 10.108.235.106
Port: postgres-http 5432/TCP
TargetPort: 5432/TCP
NodePort: postgres-http 30001/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
https://github.com/darkcloudi/helm-camunda-postgres/blob/master/camunda-install/charts/postgres/templates/postgres.yaml
Since you are accessing it from within the cluster, you should use the ClusterIP 10.108.235.106 and port 5432.
If you want to access it from outside the cluster, you can use the node IP 192.168.64.2 and NodePort 30001.
Port 30001 is listening on the node VM while the container listens on port 5432, so you cannot reach it via port 30001 from within the cluster.
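A minimal sketch of the two access patterns described above, reusing the names and addresses from the question (note that with the empty Endpoints discussed in the edit below, both will still fail until the labels are fixed):

# from inside the cluster: ClusterIP and the service port
kubectl exec -it postgres-86c565898d-h5tf2 -- psql -h 10.108.235.106 -p 5432 -U admin --password camunda

# from outside the cluster: node IP and the NodePort
psql -h 192.168.64.2 -p 30001 -U admin --password camunda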
Edit:
The Endpoints list on the service is empty. This is because the label selector on the service selects pods with the labels app=dev-postgres,name=dev-postgres, but the pods don't carry those labels.
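A minimal sketch of the alignment needed between the Service selector and the pod labels; the exact fields in the linked chart's templates may differ:

# Service selector (from the kubectl describe output above)
spec:
  selector:
    app: dev-postgres
    name: dev-postgres

# The postgres Deployment's pod template must carry matching labels
template:
  metadata:
    labels:
      app: dev-postgres
      name: dev-postgres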

Minikube Pod Connect to External Database

I have created a sample application which needs to run inside a Kubernetes cluster. For now I am trying to replicate the same environment on my local machine using Minikube.
Here I have a .NET Core Web API service which needs to connect to an MSSQL database. The service runs inside the cluster, but the database is on my local machine. I have also created a Kubernetes Service to access the MSSQL database engine, which sits outside the cluster on my local machine.
Here is my configuration file.
My local IP address is 192.168.8.100.
apiVersion: v1
kind: Service
metadata:
  name: mssql
spec:
  ports:
  - protocol: TCP
    port: 3050
    targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
subsets:
- addresses:
  - ip: "192.168.8.100"
  ports:
  - port: 1433
Pod Connection String
Server=mssql\MSSQLSERVER2017,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;
But with the above configuration it doesn't seem to work and it throws a connection error.
Can someone please tell me where I am going wrong?
Thanks in advance.
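For what it's worth, a selector-less Service plus Endpoints pair like the one above is normally consumed through the Service's DNS name and the service port; a sketch of a connection string along those lines, reusing the names from the question (the named-instance syntax is not needed once a fixed port is given):

Server=mssql,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;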

Can not connect to SQL Server database hosted on localhost from Kubernetes, how can I debug this?

I am trying to deploy an asp.net core 2.2 application in Kubernetes. This application is a simple web page that need an access to an SQL Server database to display some information. This database is hosted on my local development computer (localhost) and the web application is deployed in a minikube cluster to simulate the production environment where my web application could be deployed in a cloud and access a remote database.
I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.
I assume that my connection string is correct, since my web application can connect to the SQL Server database when I deploy it on a local IIS server, in a Docker container (docker run) or as a Docker service (docker service create), but not when it is deployed in a Kubernetes cluster. I understand that the cluster is on a different network, so I tried to create a Service without a selector as described in this question, but no luck... I even tried to change the connection string's IP address to match that of the created Service, but that failed too.
My firewall is setup to accept inbound connection to 1433 port.
My SQL Server database is configured to allow remote access.
Here is the connection string I use:
"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
And here is the file I use to deploy my web application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: <private_repo_url>/webapp:db
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 1433
      imagePullSecrets:
      - name: gitlab-auth
      volumes:
      - name: secrets
        secret:
          secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - name: port-80
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: port-443
    port: 443
    targetPort: 443
    nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
  - name: port-1433
    port: 1433
    targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sql-server
  labels:
    app: webapp
subsets:
- addresses:
  - ip: 172.24.144.1   # IP of my local computer where SQL Server is running
  ports:
  - port: 1433
So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:
> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I am new to Kubernetes and I am not very comfortable with networking, so any help is welcome.
The best help would be some advice or tools to debug this, since I don't even know where or when the connection attempt is blocked...
Thank you!
What you consider the IP address of your host is a private IP on an internal network. It is possible that this IP address is the one that your machine uses on the "real" network you are on. The Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not accessible.
subsets:
- addresses:
  - ip: 172.24.144.1   # IP of my local computer where SQL Server is running
  ports:
  - port: 1433
You can connect via the DNS entry host.docker.internal.
Read more here and here (for Windows).
I am not certain whether that works in minikube - there used to be different DNS names for the host in the Linux and Windows implementations.
If you want to use the IP (bear in mind it could change eventually), you can probably track it down and ensure it is the one "visible" from within the virtual subnet.
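A sketch of the question's connection string with the host IP swapped for that DNS name (whether the name resolves depends on the minikube/Docker setup, as noted above; the named instance is not required when an explicit port is given):

"Server=host.docker.internal,1433;Database=TestWebapp;User Id=user_name;Password=********;"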
PS: I am using the Kubernetes that ships with Docker now; it seems easier to work with.
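As a debugging aid for the tools the question asks for: raw reachability of the sql-server Service can be checked from a throwaway pod that ships sqlcmd; a sketch, reusing the service name and the masked credentials from the question:

kubectl run -it --rm sqltest --image=mcr.microsoft.com/mssql-tools --restart=Never -- \
  /opt/mssql-tools/bin/sqlcmd -S sql-server,1433 -U user_name -P "********" -Q "SELECT 1"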

dotnet core pod in Kubernetes connect to local SQL Server

I have a dotnet core pod in Kubernetes (minikube) that needs access to a local SQL Server (a testing server).
It works in a plain container, but when I put it into a pod it cannot find the SQL Server on my machine,
even though I can ping the SQL Server from the pod.
Here is the error from the log:
An error occurred using the connection to database
> 'ArcadiaAuthenServiceDB' on server '192.168.2.68'.
> System.Data.SqlClient.SqlException (0x80131904): A network-related or
> instance-specific error occurred while establishing a connection to
> SQL Server. The server was not found or was not accessible. Verify
> that the instance name is correct and that SQL Server is configured to
> allow remote connections. (provider: TCP Provider, error: 40 - Could
> not open a connection to SQL Server)
ping
root@authenservice-dpm-57455f59cf-7rqvz:/app# ping 192.168.2.68
PING 192.168.2.68 (192.168.2.68) 56(84) bytes of data.
64 bytes from 192.168.2.68: icmp_seq=1 ttl=127 time=0.449 ms
64 bytes from 192.168.2.68: icmp_seq=2 ttl=127 time=0.361 ms
64 bytes from 192.168.2.68: icmp_seq=3 ttl=127 time=0.323 ms
64 bytes from 192.168.2.68: icmp_seq=4 ttl=127 time=0.342 ms
^C
--- 192.168.2.68 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3064ms
rtt min/avg/max/mdev = 0.323/0.368/0.449/0.053 ms
root@authenservice-dpm-57455f59cf-7rqvz:/app#
My connection string in the container:
"DefaultConnection": "Server=mssql-s; Database=ArcadiaAuthenServiceDB; MultipleActiveResultSets=true;User Id=pbts;Password=pbts"
I tried to create a Service and Endpoints in Kubernetes, but no luck.
Thank you.
EDIT
Here is the Service.yml file:
apiVersion: v1
kind: Service
metadata:
  name: mssql-s
  namespace: default
spec:
  ports:
  - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql-s
  namespace: default
subsets:
- addresses:
  - ip: 192.168.2.68
  ports:
  - port: 1433
EDIT
I checked that SQL Server is listening on port 1433 as well:
PS C:\Windows\system32> netstat -aon | findstr 1433
TCP 0.0.0.0:1433 0.0.0.0:0 LISTENING 5028
TCP [::]:1433 [::]:0 LISTENING 5028
Is there anything I can do to solve this problem?
Thank you for all your replies.
Today I found the solution. It is not about Kubernetes but about a firewall setting.
I added an inbound rule (Windows Firewall) to allow port 1433, and that was it.
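For reference, such a rule can be added from an elevated command prompt; a minimal sketch (the rule name is arbitrary):

netsh advfirewall firewall add rule name="SQL Server 1433" dir=in action=allow protocol=TCP localport=1433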
