Situation:
I work at a big company and we have a k8s cluster.
We also have a database that is hosted somewhere else outside of the cluster.
The database IP address and the cluster have bi-directional firewall clearance.
So applications that are hosted inside the cluster can connect to the database.
My machine does not have any clearance to the database, so I cannot test my app, write queries, and so on. That slows me down and forces me to go to the office whenever database operations are required.
Question:
Since I can connect to and deploy on the cluster, could I deploy a NodePort/Service/etc. that forwards traffic directly to the database?
That way it would "fool" the database into thinking the request comes from the cluster, when it actually comes from my machine at home.
Has anybody tried something like that?
Thanks in advance
You won't be able to set up a proxy that way on its own; you would need an application in the cluster that receives requests and forwards them to your database.
An easier solution would be to deploy your app to the cluster (if possible) or deploy a test pod to the cluster (using a base image like Debian, BusyBox, or Alpine Linux, whichever image serves your needs). You can then connect to the pod (kubectl exec -it podname).
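A minimal sketch of that approach (the pod name db-test and the Alpine image are just placeholders):

# start a throwaway debug pod that idles, then open a shell in it
kubectl run db-test --image=alpine --restart=Never -- sleep 3600
kubectl exec -it db-test -- sh
# inside the shell, install a client and test connectivity, e.g. for Postgres:
# apk add postgresql-client && psql -h <database-ip> -U <user>

Since the shell runs inside the cluster, its traffic falls under the existing firewall clearance.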
You could try using a NodePort service without a selector and defining your own Endpoints object pointing to the database IP address.
For instance:
vagrant@k8sMaster:~$ nslookup google.fr
Name: google.fr
Address: 216.58.207.131
echo '
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
' >> svc-no-sel.yaml
echo '
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 216.58.207.131
    ports:
      - port: 443
' >> ep-no-sel.yaml
kubectl apply -f svc-no-sel.yaml
kubectl apply -f ep-no-sel.yaml
Replace the Google IP/port with your database IP/port.
Then in the given example you can target the service by doing
curl -k https://<node-ip>:<node-port>
Documentation on service without selector here: https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
Related
I have a Kubernetes cluster running multiple nodes. I want to connect my application (built on .NET) to an external SQL Server.
My connection string looks like this:
"ConnectionStrings": {
"DefaultConnection": "Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;"
},
I was able to connect to the DB environment by whitelisting each node IP to allow connections to the external SQL Server.
I can successfully telnet DB IP and port.
I added Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True; to a ConfigMap on K8s.
In my service.yaml deployment I added the ConfigMap value as an environment variable, and I was able to connect.
Instead of using a ConfigMap I'd love to use Secrets. How can I achieve this?
However, I know this connection isn't secured well enough. What best practices can I use to achieve this connectivity?
How can I ensure that I do not have to whitelist each node IP again if I choose to add more k8s nodes?
apiVersion: v1
kind: ConfigMap
metadata:
  name: mssql-config
  namespace: dev
data:
  database_uri: Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;
Environment variables:
env:
  - name: Logging__LogLevel__Default
    value: Debug
  - name: Logging__LogLevel__Microsoft.AspNetCore
    value: Debug
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      configMapKeyRef:
        name: mssql-config
        key: database_uri
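The Secret part of the question is not answered above, but a minimal sketch of the equivalent Secret-based setup could look like this (the name mssql-secret is a placeholder; stringData lets you supply the value without base64-encoding it yourself):

apiVersion: v1
kind: Secret
metadata:
  name: mssql-secret
  namespace: dev
type: Opaque
stringData:
  database_uri: Server= 192.168.2.68; database=ArcadiaAuthenServiceDB;user id=pbts;password=exTened;MultipleActiveResultSets=True;

and in the deployment, swap configMapKeyRef for secretKeyRef:

env:
  - name: ConnectionStrings__DefaultConnection
    valueFrom:
      secretKeyRef:
        name: mssql-secret
        key: database_uri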
I've followed this reference to deploy my simple React application to Kubernetes.
https://medium.com/bb-tutorials-and-thoughts/aws-deploying-react-app-with-nodejs-backend-on-eks-e5663cb5017f
But after deploying, I can't see my application in the browser.
So I tried to set an external IP address using this command:
kubectl patch svc XXX -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
Reference is here
Assign External IP to a Kubernetes Service
But I still can't see my application in the browser at:
http://10.2.8.192:3000
Here is my deployment.yml file
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test-app
  name: test-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: test-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test-app
    spec:
      containers:
        - image: XXX.dkr.ecr.XXX.amazonaws.com/XXX/XXX:v1
          name: test-app
          imagePullPolicy: Always
          resources: {}
          ports:
            - containerPort: 3000
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    run: test-app
spec:
  ports:
    - port: 3000
      protocol: TCP
  selector:
    app: test-app
  type: NodePort
Please give me any advice. Thank you...
You are mixing two ways of exposing a service externally: you want to use NodePort, but you are setting an ExternalIP.
Issue root cause
In your setup you are using NodePort, so you need to use the ExternalIP of the node together with the service's NodePort, which is 31300 (more details below).
Setting an ExternalIP on a NodePort service in this setup is pointless (also, 10.2.8.19 is an InternalIP, which only allows connections from within the cluster).
In your example you are trying to reach the application using the application's port number, but you should use the service's nodePort number, which is 31300.
Note
A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
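To make the three port fields concrete, here is an illustrative fragment with the values from this question (the explicit nodePort line is an assumption; if omitted, Kubernetes picks one from 30000-32767):

spec:
  type: NodePort
  ports:
    - port: 3000        # the Service's own port, reachable at ClusterIP:3000
      targetPort: 3000  # the container port traffic is forwarded to
      nodePort: 31300   # the port opened on every node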
Exposing application
Generally, you have 3 main ways to expose your application:
NodePort
I don't have access to the Medium article, but I guess NodePort was used in this tutorial.
In this configuration you have to use serviceType: NodePort. To connect using a NodePort, you have to use ExternalHostIP:NodePort. As you are using a cloud environment, your VM should already have an ExternalIP.
ExternalHostIP is the IP address of the node where your application's pod was deployed. To get the ExternalIP of a node you can use the command:
$ kubectl get node -o wide
To find out which node a specific pod was deployed on, you can execute the command:
$ kubectl get po -o wide
NodePort number is assigned from range 30000-32767.
IMPORTANT
Please remember to configure the firewall to allow traffic on this specific port, or, if it's just for testing, you can allow the whole range 30000-32767.
Example
Let's say your application pod was deployed on a node with ExternalIP 35.228.76.198 and your service NodePort is 31300.
If you configured the firewall rules correctly and set the right containerPort (your application must listen on this port), then entering 35.228.76.198:31300 in the browser should reach your application.
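The same check from a terminal, using the example values above:

curl http://35.228.76.198:31300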
LoadBalancer
In this option the service is of LoadBalancer type, which means the cloud provider creates an LB with an ExternalIP. You just need to enter this IP in your browser to reach your application. However, please remember that a LoadBalancer costs extra.
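A minimal sketch of that variant for this app (exposing port 80 externally is an assumption; any port works):

apiVersion: v1
kind: Service
metadata:
  name: test-app
spec:
  type: LoadBalancer
  ports:
    - port: 80          # external port on the load balancer
      targetPort: 3000  # the app's containerPort
  selector:
    app: test-app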
Ingress
In this option you have to use some kind of Ingress Controller; the most common is the Nginx Ingress Controller. Depending on your needs, you can expose the Ingress Controller itself using either the NodePort or the LoadBalancer option.
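For illustration, a minimal Ingress that routes everything to the service above (this assumes an Ingress Controller is already installed in the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-app
                port:
                  number: 3000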
Useful links
Service
Exposing application in AWS
Nginx Ingress on AWS
Please let me know if you were able to reach your application or if you have further questions.
If you want to expose your application with a NodePort, you can have a look at How do I expose the Kubernetes services running on my Amazon EKS cluster?:
It looks like your Service is missing a targetPort.
kubectl get nodes -o wide should return the NodeIP.
NodeIP:NodePort should be reachable if you configure the security group of the nodes to allow incoming traffic through port 31300.
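For example, with the AWS CLI (the security group ID is a placeholder; narrow the CIDR for anything beyond testing):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 31300 --cidr 0.0.0.0/0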
I have uploaded some code on github: https://github.com/darkcloudi/helm-camunda-postgres
Running the following command deploys the two charts. (Note: the --set is required to allow the Postgres DB to be deployed; I've disabled it by default, as Camunda comes with its own DB, and I'm trying to configure it to use Postgres.)
helm install dev ./camunda-install --set tags.postgres=true
You will see that it's all looking good:
NAME READY STATUS RESTARTS AGE
pod/dev-camunda-67f487dcd-wjdfr 1/1 Running 0 36m
pod/dev-camunda-test-connection 0/1 Completed 0 45h
pod/postgres-86c565898d-h5tf2 1/1 Running 0 36m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dev-camunda NodePort 10.106.239.96 <none> 8080:30000/TCP 36m
service/dev-postgres NodePort 10.108.235.106 <none> 5432:30001/TCP 36m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d19h
If I use either the 10.108.x.x IP or the minikube IP 192.168.64.2, I get the same error below. I can connect to Tomcat using http://camunda.minikube.local/ or http://192.168.64.2:30000/, so I was wondering where I might be going wrong when attempting to connect to Postgres.
kubectl exec -it postgres-86c565898d-h5tf2 -- psql -h 10.108.235.106 -U admin --password -p 30001 camunda
Password:
psql: error: could not connect to server: could not connect to server: Connection timed out
        Is the server running on host "10.108.235.106" and accepting
        TCP/IP connections on port 30001?
kubectl describe svc dev-postgres
Name: dev-postgres
Namespace: default
Labels: app.kubernetes.io/managed-by=Helm
name=dev-postgres
Annotations: meta.helm.sh/release-name: dev
meta.helm.sh/release-namespace: default
Selector: app=dev-postgres,name=dev-postgres
Type: NodePort
IP: 10.108.235.106
Port: postgres-http 5432/TCP
TargetPort: 5432/TCP
NodePort: postgres-http 30001/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
https://github.com/darkcloudi/helm-camunda-postgres/blob/master/camunda-install/charts/postgres/templates/postgres.yaml
Since you are accessing it from within the cluster, you should use the ClusterIP 10.108.235.106 and port 5432.
If you want to access it from outside the cluster, you can use the node IP 192.168.64.2 and NodePort 30001.
Port 30001 is listening on the node VM and the container is listening on port 5432, so you cannot access it via port 30001 from within the cluster.
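Concretely, the two working combinations would be (using the credentials from the question; they will only work once the Endpoints issue from the edit below is fixed):

# from inside the cluster: ClusterIP + container port
kubectl exec -it postgres-86c565898d-h5tf2 -- psql -h 10.108.235.106 -p 5432 -U admin --password camunda

# from outside the cluster: node IP + NodePort
psql -h 192.168.64.2 -p 30001 -U admin --password camunda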
Edit:
The Endpoints list is empty on the service. This is because the label selector on the service selects pods with the labels app=dev-postgres,name=dev-postgres, but the pods don't carry those labels.
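A sketch of the fix, assuming you keep the service selector as-is: add the matching labels to the pod template in the Postgres deployment, e.g.

# in the deployment's pod template
metadata:
  labels:
    app: dev-postgres
    name: dev-postgres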
I have created a sample application which needs to be run inside a Kubernetes cluster. For now I am trying to replicate the same environment on my local machine using Minikube.
Here I have a .NET Core WebAPI service which needs to connect to an MSSQL database. The service is running inside the cluster, but the database is on my local machine. I have also created a service to access my MSSQL database engine, which is outside the cluster but on my local machine.
Here is my configuration file.
My local IP address is 192.168.8.100.
apiVersion: v1
kind: Service
metadata:
  name: mssql
spec:
  ports:
    - protocol: TCP
      port: 3050
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
subsets:
  - addresses:
      - ip: "192.168.8.100"
    ports:
      - port: 1433
Pod Connection String
Server=mssql\MSSQLSERVER2017,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;
But with the above configuration it doesn't seem to work and throws a connection error.
Can someone please tell me where I am going wrong?
Thanks in advance.
I need to know how to connect my Kubernetes cluster to an external SQL Server database running in a docker image outside of the Kubernetes cluster.
I currently have two pods in my cluster that are running, each has a different image in it created from asp.net core applications. There is a completely separate (outside of Kubernetes but running locally on my machine localhost,1433) docker image that hosts a SQL Server database. I need the applications in my Kubernetes pods to be able to reach and manipulate that database. I have tried creating a YAML file and configuring different ports but I do not know how to get this working, or how to test that it actually is working after setting it up. I need the exact steps/commands to create a service capable of routing a connection from the images in my cluster to the DB and back.
Docker SQL Server creation (elevated powershell/docker desktop):
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest
definitions.yaml
#Pods in the cluster
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  labels:
    app: podnet
    type: module
spec:
  containers:
    - name: container1
      image: username/image1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
  labels:
    app: podnet
    type: module
spec:
  containers:
    - name: container2
      image: username/image2
---
#Service created in an attempt to contact external SQL Server DB
apiVersion: v1
kind: Service
metadata:
  name: ext-sql-service
spec:
  ports:
    - port: 1433
      targetPort: 1433
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: ext-sql-service
subsets:
  - addresses:
      - ip: (Docker IP for DB Instance)
    ports:
      - port: 1433
Ideally I would like applications in my kubernetes cluster to be able to manipulate the SQL Server I already have set up (running outside of the cluster but locally on my machine).
When running from local Docker, your connection string is NOT your local machine.
It is the local Docker "world", which happens to be running on your machine.
host.docker.internal:1433
The above is the Docker container talking to your local machine. Obviously, the port could be different based on how you exposed it.
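Applied to a SQL Server connection string, that host:port pair uses the comma syntax (the database name master is illustrative; the credentials are from the docker run above):

Server=host.docker.internal,1433;Database=master;User Id=sa;Password=aPasswordPassword;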
......
If you're trying to get your running container to talk to SQL Server which is ALSO running inside of the Kubernetes world, that connection string looks like:
ServerName:
my-mssql-service-deployment-name.$_CUSTOMNAMESPACENAME.svc.cluster.local
Where $_CUSTOMNAMESPACENAME is probably "default", but you may be running a different namespace.
my-mssql-service-deployment-name is the name of YOUR deployment (I have it stubbed here)
Note there is no port number here.
This is documented here:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
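As an illustration, assuming a Service named mssql in the default namespace (names assumed, not from the question), the connection string would become:

Server=mssql.default.svc.cluster.local;Database=master;User Id=sa;Password=aPasswordPassword;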
The problem may be the kind of Service you used. ClusterIP only enables you to connect among pods inside the cluster.
To connect to an external service, you should change the service kind to NodePort.
Try changing the service definition:
#Service created in an attempt to contact external SQL Server DB
apiVersion: v1
kind: Service
metadata:
  name: ext-sql-service
spec:
  type: NodePort
  ports:
    - port: 1433
      targetPort: 1433
and execute the command:
$ kubectl apply -f your_service_definition_file_name.yaml
Remember to run this command in the proper namespace, where your deployment is configured.
It is bad practice to overlay an environment variable onto the container and pass that variable's value with "docker run", as in this docker command:
$ docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest
Putting the DB password in plain view is insecure. Use Kubernetes Secrets.
You can find more information here: kubernetes-secret.
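A quick sketch of that for the SA_PASSWORD above (the secret name mssql is a placeholder; reference it from a pod via secretKeyRef, as in the earlier Secret sketch):

kubectl create secret generic mssql --from-literal=SA_PASSWORD='aPasswordPassword'
kubectl get secret mssql -o yaml   # the value is stored base64-encoded, not encrypted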