Cannot connect to SQL Server database hosted on localhost from Kubernetes, how can I debug this? - sql-server

I am trying to deploy an ASP.NET Core 2.2 application in Kubernetes. This application is a simple web page that needs access to a SQL Server database to display some information. The database is hosted on my local development computer (localhost) and the web application is deployed in a minikube cluster to simulate the production environment, where my web application could be deployed in a cloud and access a remote database.
I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.
I assume that my connection string is correct since my web application can connect to the SQL Server database when I deploy it on a local IIS server, in a Docker container (docker run) or as a Docker service (docker service create), but not when it is deployed in a Kubernetes cluster. I understand that the cluster is on a different network, so I tried to create a service without a selector as described in this question, but no luck... I even tried to change the connection string IP address to match that of the created service, but it failed too.
My firewall is set up to accept inbound connections on port 1433.
My SQL Server database is configured to allow remote access.
Here is the connection string I use:
"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
And here is the file I use to deploy my web application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: <private_repo_url>/webapp:db
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 1433
      imagePullSecrets:
      - name: gitlab-auth
      volumes:
      - name: secrets
        secret:
          secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - name: port-80
    port: 80
    targetPort: 80
    nodePort: 30080
  - name: port-443
    port: 443
    targetPort: 443
    nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
  name: sql-server
  labels:
    app: webapp
spec:
  ports:
  - name: port-1433
    port: 1433
    targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sql-server
  labels:
    app: webapp
subsets:
- addresses:
  - ip: 172.24.144.1  # IP of my local computer where SQL Server is running
  ports:
  - port: 1433
So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:
> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I am new to Kubernetes and I am not very comfortable with networking, so any help is welcome.
The best help would be some advice/tools to debug this, since I don't even know where or when the connection attempt is blocked...
Thank you!

What you consider the IP address of your host is a private IP on an internal network. It is possible that this IP address is the one that your machine uses on the "real" network you are on. The Kubernetes virtual network is on a different subnet, and thus the IP that you use internally is not accessible.
subsets:
- addresses:
  - ip: 172.24.144.1  # IP of my local computer where SQL Server is running
  ports:
  - port: 1433
You can connect via the DNS entry host.docker.internal.
Read more here and here for Windows.
I am not certain whether that works in minikube - there used to be a different DNS name for the host in the Linux/Windows implementations.
If you want to use the IP (bear in mind it will change eventually), you can probably track it down and make sure it is the one "visible" from within the virtual subnet.
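For example, with the DNS-name approach the connection string from the question would become something like this (a sketch only - on recent minikube versions the equivalent name is host.minikube.internal, and the instance name can be dropped since an explicit port is given):
"Server=host.docker.internal,1433;Database=TestWebapp;User Id=user_name;Password=********;"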
PS: I am using the Kubernetes that ships with Docker now; it seems easier to work with.
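To narrow down where the connection attempt is blocked, a quick check (a sketch, assuming the stock busybox image and its nslookup/telnet applets) is to start a throwaway pod inside the cluster and test name resolution and raw TCP reachability from there:
kubectl run -it --rm dbcheck --image=busybox --restart=Never -- sh
# inside the pod:
nslookup sql-server
telnet 172.24.144.1 1433
If telnet reports that it connected, the network path is fine and the problem is in the connection string; if it hangs or is refused, the host IP, the firewall or the SQL Server network configuration is the place to look.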

Related

How to route mssql traffic through an Istio egress gateway

I am trying to run a .NET 6 demonstration application in an Istio service mesh (Istio 1.16.1 in an AKS cluster). This application uses a SQL Server 2019 instance located outside the cluster, and I would like to route all outgoing traffic, including mssql, through an egress gateway.
Please note this application also uses OpenID Connect and keytabs (Kerberos traffic); I have successfully managed to route those requests through the egress gateway, but not the mssql traffic.
I have created the service mesh with istioctl and the following configuration file:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
  components:
    pilot:
      k8s:
        nodeSelector:
          agentpool: svcmaster
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        nodeSelector:
          kubernetes.io/os: linux
    egressGateways:
    - name: istio-egressgateway
      enabled: true
      k8s:
        nodeSelector:
          kubernetes.io/os: linux
Here is the ServiceEntry for the database
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: mssql-contoso-com
namespace: linux
spec:
hosts:
- mssql.contoso.com
addresses:
- 10.1.0.5
ports:
- number: 1433
name: mssql
protocol: TLS
- number: 443
name: tls
protocol: TLS
location: MESH_EXTERNAL
resolution: DNS
Here is the gateway (it also includes the hosts used by the other egress traffic):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: egress-gateway
  namespace: linux
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - "adfs.contoso.com"
    - "mssql.contoso.com"
    tls:
      mode: "PASSTHROUGH"
  - port:
      number: 80
      name: tcp
      protocol: TCP
    hosts:
    - "controller.contoso.com"
And finally, the VirtualService. I have not defined a DestinationRule because it does not seem necessary: the OIDC and Kerberos traffic is routed correctly without one, and I tried adding one out of desperation without it solving the issue.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "outgoing-mssql-traffic"
  namespace: linux
spec:
  hosts:
  - mssql.contoso.com
  gateways:
  - egress-gateway
  - mesh
  tls:
  - match:
    - gateways:
      - mesh
      port: 1433
      sniHosts:
      - mssql.contoso.com
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - egress-gateway
      port: 443
      sniHosts:
      - mssql.contoso.com
    route:
    - destination:
        host: mssql.contoso.com
        port:
          number: 1433
      weight: 100
Regarding the application's call to SQL Server, I am using a regular SqlConnection with the following connection string:
Server=mssql.contoso.com;Initial Catalog=Demonstration;Integrated Security=true;TrustServerCertificate=true
As a result, I get the following error in the application log:
Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught)
---> System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer.
---> System.Net.Sockets.SocketException (104): Connection reset by peer
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
Somehow the TLS handshake fails. When consulting the logs of the sidecar container and of the egress gateway, I cannot see the traffic to the database. I have also monitored the traffic on the SQL Server machine with Wireshark and I cannot see any TCP traffic on port 1433.
The application works fine when the virtual service is deleted so the issue is really related to the routing through the egress gateway.
Any help or insight would be appreciated.
So I found the reason a bit by accident while trying different configurations and checking the Envoy logs. The Istio egress gateway service exposes only two ports (at least by default): 80 and 443. It seems that each port can only serve one VirtualService, and it is first come, first served. I had declared the VirtualService for mssql last, so it was not served.
Furthermore, I had to switch to a TCP route. I failed to get it working with a TLS route.
Here is the ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mssql-contoso-com
  namespace: linux
spec:
  hosts:
  - mssql.contoso.com
  addresses:
  - 10.1.0.5
  ports:
  - number: 1433
    name: mssql
    protocol: TCP
  - number: 80
    name: tls
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS
And here is the VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "outgoing-mssql-traffic"
  namespace: linux
spec:
  hosts:
  - mssql.contoso.com
  gateways:
  - egress-gateway
  - mesh
  tcp:
  - match:
    - gateways:
      - mesh
      port: 1433
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - egress-gateway
      port: 80
    route:
    - destination:
        host: mssql.contoso.com
        port:
          number: 1433
      weight: 100
I guess that I will have to either deploy several egress gateways or find a way to add ports to the existing one.
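One untested possibility for adding a port to the existing gateway (a sketch only: the IstioOperator API allows overriding the gateway Service definition via k8s.service, but the extra port name and number here are assumptions, and it is not verified whether the override merges with or replaces the default port list, which is why the default 80/443 entries are repeated):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
      k8s:
        nodeSelector:
          kubernetes.io/os: linux
        service:
          ports:
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https
          - port: 1433
            targetPort: 1433
            name: tcp-mssql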

Istio istio-ingressgateway throwing "no cluster match for URL '/'"

I have Istio installed on Docker Desktop. In general it works fine. I'm attempting to set up an HTTP-based match on a very simple virtual service, but I'm only able to get 404s. Here are the technical details.
My endpoint image is HashiCorp's http-echo, which uses the net/http library to create a trivial HTTP server that returns a message you supply. It works just fine and couldn't be more trivial.
Here is my pod and service configuration:
kind: Pod
apiVersion: v1
metadata:
  name: a
  labels:
    app: a
    version: v1
spec:
  containers:
  - name: a
    image: hashicorp/http-echo
    args:
    - "-text='this is service a: v1'"
    - "-listen=:6789"
---
kind: Service
apiVersion: v1
metadata:
  name: a-service
spec:
  selector:
    app: a
    version: v1
  ports:
  # Default port used by the image
  - port: 6789
    targetPort: 6789
    name: http-echo
And here is an example of the service working by my curling it from another pod in the same namespace:
/ # curl 10.1.0.29:6789
'this is service a: v1'
And here's the pod running in the docker-desktop cluster:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
a 2/2 Running 0 45h 10.1.0.29 docker-desktop <none> <none>
And here is the service registering and administrating the pod:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
a-service ClusterIP 10.101.113.9 <none> 6789/TCP 45h app=a,version=v1
Here is my Istio istio-ingressgateway pod specification via Helm (it seems to work fine). I list it because it is the only part of the installation I've changed, and the change itself is utterly trivial: just a single new port block, which appears to work since listening is indeed occurring:
gateways:
  istio-ingressgateway:
    name: istio-ingressgateway
    labels:
      app: istio-ingressgateway
      istio: ingressgateway
    ports:
    - port: 15021
      targetPort: 15021
      name: status-port
      protocol: TCP
    - port: 80
      targetPort: 8080
      name: http2
      protocol: TCP
    - port: 443
      targetPort: 8443
      name: https
      protocol: TCP
    - port: 6789
      targetPort: 6789
      name: http-echo
      protocol: TCP
And here is kubectl get svc on the istio-ingressgateway, just to show that I indeed have an external IP and things look normal:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
istio-ingressgateway LoadBalancer 10.109.63.15 localhost 15021:30095/TCP,80:32454/TCP,443:31644/TCP,6789:30209/TCP 2d16h app=istio-ingressgateway,istio=ingressgateway
istiod ClusterIP 10.96.155.154 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d16h app=istiod,istio=pilot
Here's my virtualservice:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  - 'a-service.default.svc.cluster.local'
  gateways:
  - gateway
  http:
  - match:
    - port: 6789
    route:
    - destination:
        host: 'a-service.default.svc.cluster.local'
        port:
          number: 6789
Here's my gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 6789
      name: http-echo
      protocol: http
    hosts:
    - 'a-service.default.svc.cluster.local'
And then finally here's a debug log from the istio-ingressgateway showing that, despite all these seemingly correct pod, service, gateway, virtualservice and ingressgateway configs, the ingressgateway is only returning 404s:
2021-09-27T15:34:41.001773Z debug envoy connection [C367] closing data_to_write=143 type=2
2021-09-27T15:34:41.001779Z debug envoy connection [C367] setting delayed close timer with timeout 1000 ms
2021-09-27T15:34:41.001786Z debug envoy pool [C7] response complete
2021-09-27T15:34:41.001791Z debug envoy pool [C7] destroying stream: 0 remaining
2021-09-27T15:34:41.001925Z debug envoy connection [C367] write flush complete
2021-09-27T15:34:41.002215Z debug envoy connection [C367] remote early close
2021-09-27T15:34:41.002279Z debug envoy connection [C367] closing socket: 0
2021-09-27T15:34:41.002348Z debug envoy conn_handler [C367] adding to cleanup list
2021-09-27T15:34:41.179213Z debug envoy conn_handler [C368] new connection from 192.168.65.3:62904
2021-09-27T15:34:41.179594Z debug envoy http [C368] new stream
2021-09-27T15:34:41.179690Z debug envoy http [C368][S14851390862777765658] request headers complete (end_stream=true):
':authority', '0:6789'
':path', '/'
':method', 'GET'
'user-agent', 'curl/7.64.1'
'accept', '*/*'
'version', 'TESTING'
2021-09-27T15:34:41.179708Z debug envoy http [C368][S14851390862777765658] request end stream
2021-09-27T15:34:41.179828Z debug envoy router [C368][S14851390862777765658] no cluster match for URL '/'
2021-09-27T15:34:41.179903Z debug envoy http [C368][S14851390862777765658] Sending local reply with details route_not_found
2021-09-27T15:34:41.179949Z debug envoy http [C368][S14851390862777765658] encoding headers via codec (end_stream=true):
':status', '404'
'date', 'Mon, 27 Sep 2021 15:34:41 GMT'
'server', 'istio-envoy'
Here's istioctl proxy-status:
istioctl proxy-status ⎈ docker-desktop/istio-system
NAME CDS LDS EDS RDS ISTIOD VERSION
a.default SYNCED SYNCED SYNCED SYNCED istiod-b9c8c9487-clkkt 1.11.3
istio-ingressgateway-5797689568-x47ck.istio-system SYNCED SYNCED SYNCED SYNCED istiod-b9c8c9487-clkkt 1.11.3
And here's istioctl pc cluster $ingressgateway:
SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE
BlackHoleCluster - - - STATIC
a-service.default.svc.cluster.local 6789 - outbound EDS
agent - - - STATIC
istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 6789 - outbound EDS
istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS
istiod.istio-system.svc.cluster.local 443 - outbound EDS
istiod.istio-system.svc.cluster.local 15010 - outbound EDS
istiod.istio-system.svc.cluster.local 15012 - outbound EDS
istiod.istio-system.svc.cluster.local 15014 - outbound EDS
kube-dns.kube-system.svc.cluster.local 53 - outbound EDS
kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS
kubernetes.default.svc.cluster.local 443 - outbound EDS
prometheus_stats - - - STATIC
sds-grpc - - - STATIC
xds-grpc - - - STATIC
zipkin - - - STRICT_DNS
And here's istioctl pc listeners on the same ingress:
ADDRESS PORT MATCH DESTINATION
0.0.0.0 6789 ALL Route: http.6789
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
And finally here's istioctl routes:
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
http.6789 a-service.default.svc.cluster.local /* a-service.default
* /stats/prometheus*
* /healthz/ready*
I've tried numerous different configurations, from changing selectors, to making sure port names match, to trying different ports. If I change my virtualservice from http to tcp, the port match works great. But because my ultimate goal is to do more advanced header-based matching, I need to be matching on http. Any insight would be greatly appreciated!
It turned out the problem was that I had specified my service in my hosts directive in both my gateway and virtualservice. Specifying a service as a hosts entry is almost certainly never correct, though one can "work around" this by adding a host header to curl, i.e. curl ... -H 'Host: kubernetes.docker.internal' .... But the correct solution is to simply add proper host entries, i.e. - mysite.mycompany.com etc. Hosts in this case are like vhosts in Apache; they're an FQDN that resolves to something the mesh and cluster can use to send requests to. The host in the virtualservice destination, however, is the service, which is a bit convoluted and is what threw me.
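As an illustration of that fix (a sketch only; echo.example.com is a hypothetical hostname standing in for a real FQDN you control), the gateway and virtual service carry the FQDN in hosts while the destination keeps pointing at the in-cluster service:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 6789
      name: http-echo
      protocol: HTTP
    hosts:
    - "echo.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  - "echo.example.com"
  gateways:
  - gateway
  http:
  - match:
    - port: 6789
    route:
    - destination:
        host: a-service.default.svc.cluster.local
        port:
          number: 6789
With the docker-desktop LoadBalancer on localhost, a request like curl -H 'Host: echo.example.com' http://localhost:6789/ should then match the route instead of falling through to route_not_found.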

Minikube Pod Connect to External Database

I have created a sample application which needs to run inside a Kubernetes cluster. For now I am trying to replicate the same environment on my local machine by using Minikube.
Here I have a .NET Core Web API service which needs to connect to a MSSQL database. The service is running inside the cluster but the database is on my local machine. I have also created a Service to access my MSSQL database engine, which is outside the cluster but on my local machine.
Here is my configuration file.
My local ip address is 192.168.8.100.
apiVersion: v1
kind: Service
metadata:
  name: mssql
spec:
  ports:
  - protocol: TCP
    port: 3050
    targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
subsets:
- addresses:
  - ip: "192.168.8.100"
  ports:
  - port: 1433
Pod Connection String
Server=mssql\MSSQLSERVER2017,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;
But with the above configuration it doesn't seem to work and throws a connection error.
Can someone please tell me where I am going wrong?
Thanks in advance.
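For reference, a connection string that goes through the Service above would address the Service name and the Service port; a rough sketch (assuming the Service is in the same namespace as the pod, and noting that when an explicit port is supplied the client connects straight to it and the instance name is not used):
Server=mssql,3050;Database=product-db;User Id=sa;Password=pwd#123;MultipleActiveResultSets=true;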

Unable to connect to api server of react js app using kubernetes?

I am basically trying to run a React JS app which is mainly composed of 3 services, namely a Postgres DB, an API server and a UI frontend (served using nginx). Currently the app works as expected in development mode using docker-compose, but when I tried to run this in production using Kubernetes, I was not able to access the API server of the app (CONNECTION REFUSED).
Since I want to run this in production using Kubernetes, I created YAML files for each of the services and then tried running them using kubectl apply. I have also tried this with and without using a persistent volume for the API server. But none of this helped.
Docker Compose file (this works and I am able to connect to the API server on port 8000):
version: "3"
services:
pg_db:
image: postgres
networks:
- wootzinternal
ports:
- 5432
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=wootz
volumes:
- wootz-db:/var/lib/postgresql/data
apiserver:
image: wootz-backend
volumes:
- ./api:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./api
dockerfile: Dockerfile
networks:
- wootzinternal
depends_on:
- pg_db
ports:
- '8000:8000'
ui:
image: wootz-frontend
volumes:
- ./client:/usr/src/app
- /usr/src/app/build
- /usr/src/app/node_modules
build:
context: ./client
dockerfile: Dockerfile
networks:
- wootzinternal
ports:
- '80:3000'
volumes:
wootz-db:
networks:
wootzinternal:
driver: bridge
My API server YAML for running in Kubernetes (this doesn't work and I can't connect to the API server on port 8000):
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  ports:
  - name: apiport
    port: 8000
    targetPort: 8000
  selector:
    app: apiserver
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  selector:
    matchLabels:
      app: apiserver
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiserver
        tier: backend
    spec:
      containers:
      - image: suji165475/devops-sample:corspackedapi
        name: apiserver
        env:
        - name: POSTGRES_DB_USER
          value: postgres
        - name: POSTGRES_DB_PASSWORD
          value: password
        - name: POSTGRES_DB_HOST
          value: postgres
        - name: POSTGRES_DB_PORT
          value: "5432"
        ports:
        - containerPort: 8000
          name: myport
What changes should I make to my API server YAML (Kubernetes) so that I can access it on port 8000? Currently I am getting a connection refused error.
The default Service type on Kubernetes is ClusterIP, which makes the service reachable inside the cluster but does not expose it to the outside.
Here is your service using the LoadBalancer type:
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  type: LoadBalancer
  ports:
  - name: apiport
    port: 8000
    targetPort: 8000
  selector:
    app: apiserver
    tier: backend
With that, you can see how the service expects to have an external IP address by running kubectl describe service apiserver
In case you want to have more control of how your requests are routed to that service you can add an Ingress in front of that same service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: apiserver
  name: apiserver
spec:
  rules:
  - host: apiserver.example.com
    http:
      paths:
      - backend:
          serviceName: apiserver
          servicePort: 8000
        path: /*
Your service is only exposed over the internal Kubernetes network.
This is because if you do not specify a Service type (spec.type), the default is ClusterIP.
To expose your application you can follow at least 3 ways:
LoadBalancer: you can specify spec.type: LoadBalancer. A LoadBalancer service exposes your application on the (public) network. This works great if your cluster is a cloud service (GKE, DigitalOcean, AKS, Azure, ...): the cloud will take care of providing you the public IP and routing the network traffic to all your nodes. Usually this is not the best method because a cloud load balancer has a cost (depending on the cloud), and if you need to expose a lot of services the situation could become difficult to maintain.
NodePort: you can specify spec.type: NodePort. This exposes the Service on each Node's IP at a static port (the NodePort); see the sketch after this list.
You'll be able to contact the service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Ingress: an Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. This is the most common scenario for a simple HTTP/HTTPS application. It allows you to easily manage SSL termination and routing.
You need to deploy an ingress controller to make this work, like a simple nginx. All the main clouds can do this for you with a simple setting when you create the cluster.
Read here for more information about services
Read here for more information about ingress
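A minimal sketch of the NodePort variant mentioned above, applied to the apiserver Service (the nodePort value 30800 is just an example; omit the field to let Kubernetes pick one in the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  type: NodePort
  ports:
  - name: apiport
    port: 8000
    targetPort: 8000
    nodePort: 30800
  selector:
    app: apiserver
    tier: backend
The API server would then be reachable from outside the cluster at http://<NodeIP>:30800.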

Cannot access exposed Dockerized React app on Kubernetes

I'm trying to deploy my Dockerized React app to Kubernetes. I believe I've dockerized it correctly, but I'm having trouble accessing the exposed pod.
I don't have experience in Docker or Kubernetes, so any help would be appreciated.
My React app is just static files (from npm run build) being served from Tomcat.
My Dockerfile is below. In summary, I put my app in the Tomcat folder and expose port 8080.
FROM private-docker-registry.com/repo/tomcat:latest
EXPOSE 8080:8080
# Copy build directory to Tomcat webapps directory
RUN mkdir -p /tomcat/webapps/app
COPY /build/sample-app /tomcat/webapps/app
# Create a symbolic link to ROOT -- this way the app starts at the root path (localhost:8080)
RUN ln -s /tomcat/webapps/app /tomcat/webapps/ROOT
# Start Tomcat
ENTRYPOINT ["catalina.sh", "run"]
I built and pushed the Docker image to the private Docker registry.
I verified that the container runs correctly by running it like this:
docker run -p 8080:8080 private-docker-registry.com/repo/sample-app:latest
Then, if I go to localhost:8080, I see the homepage of my React app.
Now, the trouble I'm having is deploying to Kubernetes and accessing the app externally.
Here's my deployment.yaml file:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: sample-app
  namespace: dev
  labels:
    app: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: private-docker-registry.com/repo/sample-app:latest
        ports:
        - containerPort: 8080
          protocol: TCP
      nodeSelector:
        TNTRole: luxkube
---
kind: Service
apiVersion: v1
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  selector:
    app: sample-app
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
I created the deployment and service by running
kubectl --namespace=dev create -f deployment.yaml
Output of 'describe deployment'
Name: sample-app
Namespace: dev
CreationTimestamp: Sat, 21 Jul 2018 12:27:30 -0400
Labels: app=sample-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sample-app
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sample-app
Containers:
sample-app:
Image: private-docker-registry.com/repo/sample-app:latest
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: sample-app-bb6f59b9 (1/1 replicas created)
Events: <none>
Output of 'describe service'
Name: sample-app
Namespace: fab-dev
Labels: app=sample-app
Annotations: <none>
Selector: app=sample-app
Type: NodePort
IP: 10.96.29.199
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 34604/TCP
Endpoints: 192.168.138.145:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now I don't know which IP and port I should be using to access the app.
I have tried every combination but none has loaded my app. I believe the port should be 80, so if I just have the IP, I should be able to go to the browser and access the React app by going to http://<IP>.
Does anyone have suggestions?
The short version is that the Service is listening on the same TCP/IP port on every Node in your cluster (34604) as is shown in the output of describe service:
NodePort: <unset> 34604
If you wish to access the application through a "nice" URL, you'll want a load balancer that can translate the hostname into the in-cluster IP and port combination. That's what an Ingress controller is designed to do, but it isn't the only way -- changing the Service to be type: LoadBalancer will do that for you, if you're running in a cloud environment where Kubernetes knows how to programmatically create load balancers for you.
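For example, with the NodePort shown in the describe output above, a request like this from outside the cluster should reach the app (a sketch; <NodeIP> is the address of any node in the cluster):
curl http://<NodeIP>:34604/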
I believe you found the answer by now :) - I landed here as I was facing this issue. Solved it for myself; hope this helps everyone.
Here's what can help:
Deploy your app (say: react-app).
Run the command below:
kubectl expose deployment <workload> --namespace=app-dev --name=react-app --type=NodePort --port=3000
Output: service/notesui-app exposed
This publishes the service with port 3000, target port 3000, and an auto-selected node port (32250).
kubectl get svc react-app --namespace=notesui-dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
react-app NodePort 10.23.22.55 <none> 3000:32250/TCP 48m
Yaml: (sample)
apiVersion: v1
kind: Service
metadata:
  name: react-app
  namespace: app-dev
spec:
  selector:
    app: <workload>  # label selector matching your deployment's pods
  ports:
  - nodePort: 32250
    port: 3000
    protocol: TCP
    targetPort: 3000
  type: NodePort
status: {}
Access the app in a browser:
http://<Host>:32250/index
<Host> is the node IP where the pod is running.
If you have the app running on multiple nodes (scaled), the NodePort is open on every node. The app can be accessed at:
http://<Host1>:32250/index
http://<Host2>:32250/index
