I have a frontend React application and a Go backend service that serves as its API. Both run as Kubernetes services in the same namespace. How can the frontend communicate with the Go backend without using an external IP? I got it working with an external IP, but I can't get the FQDN to resolve correctly like it should. The frontend service is built from the nginx:1.15.2-alpine Docker image. How can I get the frontend React app to communicate with the backend Go server?
Frontend service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: ui
  namespace: client
  labels:
    app: ui
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
  selector:
    app: ui
Frontend deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ui
  namespace: client
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 2
  template:
    metadata:
      labels:
        app: ui
    spec:
      containers:
      - name: ui
        image: #######
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
Backend service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: client
  labels:
    app: api
spec:
  type: NodePort
  ports:
  - port: 8001
    protocol: TCP
    targetPort: http
    name: http
  selector:
    app: api
Backend deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  namespace: client
  labels:
    name: api
spec:
  replicas: 1
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: ####
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8001
The React application does not run in Kubernetes. Maybe you have a dev server running in Kubernetes, but it just serves up HTML and Javascript files to a browser running outside the cluster. The application in the browser has no idea about this "Kubernetes" thing and can't resolve the Kubernetes-internal ...svc.cluster.local hostnames; it needs a way to talk back to the cluster.
Since you have the backend configured as a NodePort type service, you can look up the backend's externally visible port, then configure the backend URL in the served browser application to be that port number on some node in your cluster. This is a little bit messy and manual.
A better path is to configure an ingress so that, for example, https://.../ serves your browser application and https://.../api goes to your back-end. Then the backend URL can just be the bare path /api, and it will be interpreted with the same host name and scheme as the UI.
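A minimal sketch of such an ingress, reusing the ui and api Service names from the question's manifests. The path layout and the nginx ingress class are assumptions to adjust for your controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-ingress
  namespace: client
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      # /api goes to the Go backend service
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 8001
      # everything else serves the React frontend
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ui
            port:
              number: 80
```

With this in place the browser app can call the bare path /api, and the same host and scheme it was served from will be reused.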
There are a few issues with the YAMLs. In the Service manifests, a string targetPort refers to a named port, and that name must exist on the container (here, the backend containerPort has no name at all, and the frontend Deployment only names http, not https). The simplest fix is to use port numbers (integers) instead. So the updated config will be:
apiVersion: v1
kind: Service
metadata:
  name: ui
  namespace: client
  labels:
    app: ui
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: ui
and
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: client
  labels:
    app: api
spec:
  type: NodePort
  ports:
  - port: 8001
    protocol: TCP
    targetPort: 8001
    name: http
  selector:
    app: api
After changing the targetPort in service yamls, I created a pod to do nslookup and it works as expected.
kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml
kubectl exec -ti busybox -- nslookup api.client
produces the output
Defaulting container name to busybox.
Use 'kubectl describe pod/busybox -n default' to see all of the containers in this pod.
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: api.client
Address 1: 10.101.84.21 api.client.svc.cluster.local
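As an alternative to switching to integer ports: a string targetPort resolves against a named containerPort on the pod, so naming the port in the backend Deployment would also work. A sketch of only the relevant fragment of the container spec:

```yaml
# In the backend Deployment's container spec:
ports:
- name: http          # this name is what `targetPort: http` resolves to
  containerPort: 8001
```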
Related
I have 2 deployments + services running on Azure: react client and nodejs auth.
I have registered a public IP on Azure, which I added to my Windows hosts file (= myexample.com).
Typing the URL in the browser, the client opens and requests go to auth service as expected.
Now I want to run the client locally (with npm start) but connect to auth service still running on Azure.
I removed the client from the cloud deployment (the Deployment and the Service) and use the domain (myexample.cloud) as the base URL in the axios client of my React app. To confirm: on Azure my ingress-nginx-controller of type LoadBalancer shows the aforementioned public IP as its external IP, plus ports 80:30819/TCP,443:31077/TCP.
When I run the client locally, it shows the correct request URL (http://myexample.cloud/api/users/signin) but I get a 403 Forbidden response.
What am I missing? I should be able to connect to my cloud service via the public IP, correct? The error must be caused by my client, since Azure wouldn't be putting roadblocks in front of a public IP.
Update 1
Just to clarify, the 403 Forbidden is not caused by trying to sign in with incorrect credentials. I have another api/users/health-check route that gives me the same error.
My cloud ingress deployment. I have also tried removing the client part (the last seven lines) to no effect.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myexample.cloud
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
      - path: /
        pathType: Prefix
        backend:
          service:
            name: client-srv
            port:
              number: 3000
my client cloud deployment+service that worked when client was running in cloud
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: client
---
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  selector:
    app: client
  ports:
  - name: client
    protocol: TCP
    port: 3000
    targetPort: 3000
my auth deployment + service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
The problem was actually a CORS error: "Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header", but my browser did not tell me. After switching from Chrome to Firefox, the problem became apparent. I had to add annotations to my ingress controller as described here: express + socket.io + kubernetes Access-Control-Allow-Origin' header
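For reference, a sketch of the kind of CORS annotations involved, using ingress-nginx's enable-cors family; the allowed origin shown here is a hypothetical local dev origin, not taken from the question:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    # hypothetical local dev origin; set this to wherever the client runs
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:3000"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
```

With these set, the controller answers the preflight OPTIONS request with the Access-Control-Allow-Origin header the browser was missing.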
I have a simple HelloWorld ReactJS application Docker image, and I created the deployment as:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: minikube-react-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: minikube-react-app
  template:
    metadata:
      labels:
        app: minikube-react-app
    spec:
      containers:
      - name: minikube-react-app
        image: hello-react:1.0.1
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "100Mi"
            cpu: "300m"
          limits:
            memory: "200Mi"
            cpu: "600m"
      restartPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: minikube-react-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 31000
  selector:
    app: minikube-react-app
I ran kubectl apply -f deployment.yaml, but when I access http://localhost:31000 it's not working (This site can't be reached). Can someone help me with this, please?
Run the following command to get the actual address for connecting to your app from the host machine.
minikube service --url <service-name>
Ref: https://minikube.sigs.k8s.io/docs/handbook/accessing/
Use the node IP instead of 'localhost' to access a NodePort service.
Run minikube ip to obtain the IP of the minikube node.
Check the service type: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
I'm trying to deploy a simple Angular/Express app on GKE, and the HTTP requests from the frontend can't resolve the name of the Express app.
Here's an example of one GET request. I changed the request from 'localhost' to 'express' because that is the name of the ClusterIP service set up in the cluster. Also, I'm able to curl this URL from the Angular pod and get JSON back as expected.
getPups(){
  this.http.get<{message:string, pups: any}>("http://express:3000/pups")
    .pipe(map((pupData)=>{
      return pupData.pups.map(pup=>{
        return {
          name: pup.name,
          breed: pup.breed,
          quote: pup.quote,
          id: pup._id,
          imagePath: pup.imagePath,
          rates: pup.rates
        }
      });
    }))
    .subscribe((transformedPups)=>{
      this.pups = transformedPups
      this.pupsUpdated.next([...this.pups])
    });
}
Here's the angular deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: puprate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: puprate
        image: brandonjones085/puprate
        ports:
        - containerPort: 4200
---
apiVersion: v1
kind: Service
metadata:
  name: puprate-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: web
  ports:
  - port: 4200
    targetPort: 4200
And the express deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
      - name: server
        image: brandonjones085/puprate-express
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: express
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
  - port: 3000
    targetPort: 3000
Your frontend app makes its calls from the browser, outside your cluster, and therefore needs a way to reach it. Because you are serving HTTP, the best way to set that up is with an ingress.
First, make sure you have an ingress controller set up in your cluster ( e.g. nginx ingress controller) https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke
Then, set up a service for your express app (from your question, I see you already have that set up on port 3000; that's good, though in the service I recommend changing the port to 80, although that's not critical).
With that, set up your ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: express
spec:
  rules:
  - host: <a domain you own>
    http:
      paths:
      # NOTICE!! have your express app listen on that path, or set up nginx rewrite rules (I recommend the former, it's much easier to understand)
      - path: /api
        backend:
          serviceName: express
          servicePort: 3000 # or 80 if you decide to change that
Do the same for your web deployment, so you can serve your frontend directly:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: <a domain you own>
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 4200 # or 80 if you decide to change that
Notice that both ingresses are using the same host but different paths - that's important for what's coming next
in your angular app, change that:
this.http.get<{message:string, pups: any}>("http://express:3000/pups")
to that:
this.http.get<{message:string, pups: any}>("/api/pups")
Browsers will parse that to <domain in your address bar>/api/pups
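A quick sketch of that resolution using the WHATWG URL parser, which mirrors what the browser does; the domain here is a hypothetical placeholder:

```javascript
// The browser resolves a root-relative path against the page's origin.
// "awesome-domain.com" is a hypothetical placeholder, not a real deployment.
const pageOrigin = "https://awesome-domain.com";

// Equivalent of what the browser does with a request to "/api/pups"
// issued from a page served at pageOrigin:
const resolved = new URL("/api/pups", pageOrigin).href;

console.log(resolved); // https://awesome-domain.com/api/pups
```

Because the resolved host is the one in the address bar, the ingress rules above decide whether the request lands on the frontend or the express backend.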
Since you are using GKE, setting up the ingress controller will create a load balancer in Google Cloud; make sure you have a DNS entry pointing your domain at it.
I'm assuming you already own a domain, but if you don't yet, just add the ip you got to your personal hosts file until you get one like so:
<ip of load balancer> <domain you want>
# for example
45.210.10.15 awesome-domain.com
So now, use the browser to go to the domain you own. You should get the frontend served to you, and since you call your API with an address that starts with /, the API call goes to the same host and is redirected by your ingress to your express app this time, instead of to the frontend server.
Angular is running in your browser, not in the pod inside the cluster.
The requests will originate therefore externally and the URL must point to the Ingress or LoadBalancer of your backend service.
I get an error while trying to connect a React frontend to a Node.js Express API server inside a Kubernetes cluster.
I can navigate in the browser to http://localhost:3000 and the web site is OK.
But I can't navigate to http://localhost:3008 (as expected, since it should not be exposed).
My goal is to pass a REACT_APP_API_URL environment variable to the frontend in order to set the axios baseURL, so the frontend and its API server can communicate.
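As a sketch of that wiring, assuming a CRA-style build where REACT_APP_* variables are inlined at build time; the /api fallback path is hypothetical, useful if an ingress routes relative paths to the backend:

```javascript
// Minimal sketch: pick the API base URL from the build-time env var,
// falling back to a relative path (hypothetical) that an ingress can route.
function apiBaseUrl(env) {
  return env.REACT_APP_API_URL || "/api";
}

// e.g. axios.create({ baseURL: apiBaseUrl(process.env) })
console.log(apiBaseUrl({ REACT_APP_API_URL: "http://gbpd-api:3008" })); // http://gbpd-api:3008
console.log(apiBaseUrl({})); // /api
```

Note that with a CRA build the variable is baked in when the image is built, so a plain Deployment env var only helps if the server code (or a server-rendered page) reads it at runtime.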
deploy-front.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
      - name: react
        image: binomio/gbpd-front:k8s-3
        ports:
        - containerPort: 3000
        resources:
          limits:
            memory: "150Mi"
          requests:
            memory: "100Mi"
        imagePullPolicy: Always
service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 3000
    targetPort: 3000
  type: LoadBalancer
Deploy-back.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
      - name: gbpd-api
        image: binomio/gbpd-back:dev
        ports:
        - name: http
          containerPort: 3008
service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
  - protocol: "TCP"
    port: 3008
    targetPort: http
I tried many combinations, and also tried making the backend service a LoadBalancer, but nothing worked.
I can connect fine to localhost:3000 and use the frontend, but the frontend can't connect to the backend service.
Question 1: What IP/name should I pass in REACT_APP_API_URL so the frontend can reach the backend?
Question 2: Why is curl localhost:3008 not answering?
After two days of trying almost everything in the Kubernetes official docs, I can't figure out what's happening here, so any help will be much appreciated.
kubectl describe svc gbpd-api
Response:
Name: gbpd-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"gbpd-api","namespace":"default"},"spec":{"ports":[{"port":3008,"p...
Selector: app=gbpd-api,tier=backend
Type: LoadBalancer
IP: 10.107.145.227
LoadBalancer Ingress: localhost
Port: <unset> 3008/TCP
TargetPort: http/TCP
NodePort: <unset> 31464/TCP
Endpoints: 10.1.1.48:3008,10.1.1.49:3008,10.1.1.50:3008
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I tested your environment, and it worked when using an nginx image. Let's review the environment:
The front deployment is correctly described.
The front service exposes it as a LoadBalancer, meaning your frontend is accessible from outside; perfect.
The back deployment is also correctly described.
The backend service stays as ClusterIP in order to be accessible only from inside the cluster; great.
Below I'll demonstrate the communication between frontend and backend.
I'm using the same YAMLs you provided, just changing the image to nginx for example purposes, and since it's an HTTP server I'm changing containerPort to 80.
Question 1: What's is the ip/name to use in order to pass REACT_APP_API_URL to fronten correctly?
I added the ENV variable to the front deployment as requested, and I'll use it in the demonstration as well. You must use the service name to curl. I used the short name because we are working in the same namespace; you can also use the fully qualified name: http://gbpd-api.default.svc.cluster.local:3008
Reproduction:
Create the YAMLs and apply them:
$ cat deploy-front.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
      - name: react
        image: nginx
        env:
        - name: REACT_APP_API_URL
          value: http://gbpd-api:3008
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "150Mi"
          requests:
            memory: "100Mi"
        imagePullPolicy: Always
$ cat service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 3000
    targetPort: 80
  type: LoadBalancer
$ cat deploy-back.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
      - name: gbpd-api
        image: nginx
        ports:
        - name: http
          containerPort: 80
$ cat service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
  - protocol: "TCP"
    port: 3008
    targetPort: http
$ kubectl apply -f deploy-front.yaml
deployment.apps/gbpd-front created
$ kubectl apply -f service-front.yaml
service/gbpd-front created
$ kubectl apply -f deploy-back.yaml
deployment.apps/gbpd-api created
$ kubectl apply -f service-back.yaml
service/gbpd-api created
Remember, in Kubernetes communication is designed to go through Services, because Pods are recreated whenever the Deployment changes or a Pod fails.
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/gbpd-api-dc5b4b74b-kktb9 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-mzpbg 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-t6qxh 1/1 Running 0 41m
pod/gbpd-front-66b48f8b7c-4zstv 1/1 Running 0 30m
pod/gbpd-front-66b48f8b7c-h58ds 1/1 Running 0 31m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gbpd-api ClusterIP 10.0.10.166 <none> 3008/TCP 40m
service/gbpd-front LoadBalancer 10.0.11.78 35.223.4.218 3000:32411/TCP 42m
The pods are the workers, and since they are replaceable by nature, we will connect to a frontend pod to simulate its behaviour and try to connect to the backend service (which is the network layer that will direct the traffic to one of the backend pods).
The nginx image does not come with curl preinstalled, so I will have to install it for demonstration purposes:
$ kubectl exec -it pod/gbpd-front-66b48f8b7c-4zstv -- /bin/bash
root@gbpd-front-66b48f8b7c-4zstv:/# apt update && apt install curl -y
done.
root@gbpd-front-66b48f8b7c-4zstv:/# curl gbpd-api:3008
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Now let's try using the environment variable that was defined:
root@gbpd-front-66b48f8b7c-4zstv:/# printenv | grep REACT
REACT_APP_API_URL=http://gbpd-api:3008
root@gbpd-front-66b48f8b7c-4zstv:/# curl $REACT_APP_API_URL
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Considerations:
Question 2: Why is curl localhost:3008 not answering?
Since all yamls are correctly described you must check if image: binomio/gbpd-back:dev is correctly serving on port 3008 as intended.
Since it's not a public image, I can't test it, so I'll give you troubleshooting steps:
Just like we logged into the front-end pod, you will have to log into this backend pod and test curl localhost:3008.
If it's based on a linux distro with apt-get, you can run the commands just like I did on my demo:
get the pod name from the backend deployment (example: gbpd-api-6676c7695c-6bs5n)
run kubectl exec -it pod/<POD_NAME> -- /bin/bash
then run apt update && apt install curl -y
and test curl localhost:3008
if there is no answer, run apt update && apt install net-tools -y
and test netstat -nlpt; it will show you the services running and their respective ports, for example:
root@gbpd-api-585df9cb4d-xr6nk:/# netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1/nginx: master pro
If the pod returns nothing even with this approach, you will have to check the code in the image.
Let me know if you need help after that!
I am basically trying to run a React app composed of three services: a Postgres DB, an API server, and a UI frontend (served using nginx). The app works as expected in development mode using docker-compose, but when I tried to run it in production using Kubernetes, I could not access the app's API server (connection refused).
Since I want to run this in production using Kubernetes, I created YAML files for each of the services and tried running them with kubectl apply. I have also tried this with and without a persistent volume for the API server, but none of this helped.
Docker-compose file (this works, and I am able to connect to the API server at port 8000):
version: "3"
services:
  pg_db:
    image: postgres
    networks:
      - wootzinternal
    ports:
      - 5432
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_DB=wootz
    volumes:
      - wootz-db:/var/lib/postgresql/data
  apiserver:
    image: wootz-backend
    volumes:
      - ./api:/usr/src/app
      - /usr/src/app/node_modules
    build:
      context: ./api
      dockerfile: Dockerfile
    networks:
      - wootzinternal
    depends_on:
      - pg_db
    ports:
      - '8000:8000'
  ui:
    image: wootz-frontend
    volumes:
      - ./client:/usr/src/app
      - /usr/src/app/build
      - /usr/src/app/node_modules
    build:
      context: ./client
      dockerfile: Dockerfile
    networks:
      - wootzinternal
    ports:
      - '80:3000'
volumes:
  wootz-db:
networks:
  wootzinternal:
    driver: bridge
My API server YAML for running in Kubernetes (this doesn't work, and I can't connect to the API server at port 8000):
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  ports:
  - name: apiport
    port: 8000
    targetPort: 8000
  selector:
    app: apiserver
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  selector:
    matchLabels:
      app: apiserver
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiserver
        tier: backend
    spec:
      containers:
      - image: suji165475/devops-sample:corspackedapi
        name: apiserver
        env:
        - name: POSTGRES_DB_USER
          value: postgres
        - name: POSTGRES_DB_PASSWORD
          value: password
        - name: POSTGRES_DB_HOST
          value: postgres
        - name: POSTGRES_DB_PORT
          value: "5432"
        ports:
        - containerPort: 8000
          name: myport
What changes should I make to my API server YAML (Kubernetes) so that I can access it on port 8000? Currently I am getting a connection refused error.
The default Service type on Kubernetes is ClusterIP, which makes the service reachable inside the cluster but does not expose it outside. This is your Service using the LoadBalancer type:
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  type: LoadBalancer
  ports:
  - name: apiport
    port: 8000
    targetPort: 8000
  selector:
    app: apiserver
    tier: backend
With that, you can see how the service expects to have an external IP address by running kubectl describe service apiserver
In case you want to have more control of how your requests are routed to that service you can add an Ingress in front of that same service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: apiserver
  name: apiserver
spec:
  rules:
  - host: apiserver.example.com
    http:
      paths:
      - backend:
          serviceName: apiserver
          servicePort: 8000
        path: /*
Your service is only exposed over the internal Kubernetes network.
This is because if you do not specify spec.type, the default is ClusterIP.
To expose your application you can follow at least three ways:
LoadBalancer: you can specify spec.type: LoadBalancer. A LoadBalancer service exposes your application on the (public) network. This works great if your cluster is a cloud service (GKE, DigitalOcean, AKS, Azure, ...): the cloud takes care of providing the public IP and routing the network traffic to all your nodes. Usually this is not the best method, because a cloud load balancer has a cost (depending on the cloud), and if you need to expose a lot of services the situation can become difficult to maintain.
NodePort: you can specify spec.type: NodePort. This exposes the Service on each node's IP at a static port (the NodePort).
You'll be able to contact the service from outside the cluster by requesting <NodeIP>:<NodePort>.
Ingress: an Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. This is the most common scenario for a simple http/https application; it allows you to easily manage SSL termination and routing.
You need to deploy an ingress controller to make this work, like a simple nginx. All the main clouds can do this for you with a simple setting when you create the cluster.
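As a sketch of the NodePort option, reusing the apiserver Service from the question; the nodePort value is an arbitrary example within the default 30000-32767 range, and may also be omitted to let Kubernetes pick one:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apiserver
spec:
  type: NodePort
  ports:
  - name: apiport
    port: 8000
    targetPort: 8000
    nodePort: 30080   # optional; must fall in the node-port range (default 30000-32767)
  selector:
    app: apiserver
    tier: backend
```

The API would then be reachable at <NodeIP>:30080 from outside the cluster.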
Read here for more information about services
Read here for more information about ingress