My backend services are working great with ingress-nginx.
I'm trying, without success, to add a frontend React SPA to my ingress.
I did manage to get the SPA working on its own, but I can't get both my backend AND my frontend to work at the same time.
My ingress yml is:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    #nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: accounting.easydeal.dev
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-srv
            port:
              number: 3000
  - host: api.easydeal.dev
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8088
      - path: /accounting(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: accounting-srv
            port:
              number: 80
      - path: /company(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: dealers-srv
            port:
              number: 80
With the yml above I'm able to reach my backend, e.g. api.easydeal.dev/helloword or api.easydeal.dev/company/*, and it works!
However my React app (accounting.easydeal.dev) ends up as a white page, with this error in the console:
Uncaught SyntaxError: Unexpected token '<'
The only way I'm able to make my React app work is to change rewrite-target: /$2 to /. However, doing so breaks the routing of my other APIs.
I did set the homepage of the React app to "." but I still get the error, and I also tried setting the frontend path to /?(*).
Here is my Dockerfile:
# pull the base image
FROM node:alpine
# set the working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . ./
EXPOSE 3000
CMD ["npm", "start"]
As pointed out in the comments by the original poster:
Creating 2 Ingress resources solved this issue.
The solution was to create 2 separate Ingress resources.
The underlying issue was that the workload required 2 different nginx.ingress.kubernetes.io/rewrite-target: parameters.
The annotations above can be set per Ingress resource, not per path.
You can create 2 Ingress resources that are separate entities (with different annotations) and they will work "together".
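Applied to the manifest from the question, a minimal sketch of that split could look like this (same hosts and service names as above; treat it as illustrative rather than a drop-in manifest):

# Ingress 1: frontend SPA, no rewrite
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-frontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: accounting.easydeal.dev
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-srv
            port:
              number: 3000
---
# Ingress 2: APIs, keeps the regex paths and the /$2 rewrite
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-api
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: api.easydeal.dev
    http:
      paths:
      - path: /accounting(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: accounting-srv
            port:
              number: 80
      - path: /company(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: dealers-srv
            port:
              number: 80

Any remaining paths (for example the docker-hello-world-svc catch-all) go into whichever of the two Ingresses matches the rewrite behaviour they need.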
More reference can be found in the links below:
Stackoverflow.com: Answer: Apply nginx-ingress annotations at path level
Kubernetes.github.io: Ingress nginx: User guide: Basic usage
Being specific to nginx-ingress:
By default, when you provision/deploy the NGINX Ingress controller, you are telling your Kubernetes cluster to create a Service of type LoadBalancer. This Service requests an IP address from the cloud provider (GKE, EKS, AKS) and routes the traffic from this IP to your Ingress controller, where the requests are evaluated and sent on according to your Ingress resource definitions.
A side note!
"By default" was not used without a reason, as there are other methods to expose your Ingress controller to external traffic. You can read more about it by following the link below:
Kubernetes.github.io: Ingress nginx: Deploy: Baremetal
Your Ingress controller will have a single IP address to expose your workload:
$ kubectl get service -n ingress-nginx ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.32.6.63 AA.BB.CC.DD 80:30828/TCP,443:30664/TCP 19m
Ingress resources that use kubernetes.io/ingress.class: "nginx" will use that address.
Ingress resources created this way will look like the following when you issue:
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
goodbye-ingress goodbye.domain.name AA.BB.CC.DD 80 19m
hello-ingress hello.domain.name AA.BB.CC.DD 80 19m
A second side note!
If you are using a managed Kubernetes cluster, please refer to its documentation for more details on using Ingress resources, as there could be major differences.
Related
While working with Kubernetes for some months now, I have found a nice way to use a single existing domain name: expose the cluster IP through a sub-domain, and also expose most of the microservices through different sub-sub-domains using the ingress controller.
My ingress example code:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: cluster-ingress-basic
  namespace: ingress-basic
  selfLink: >-
    /apis/networking.k8s.io/v1beta1/namespaces/ingress-basic/ingresses/cluster-ingress-basic
  uid: 5d14e959-db5f-413f-8263-858bacc62fa6
  resourceVersion: '42220492'
  generation: 29
  creationTimestamp: '2021-06-23T12:00:16Z'
  annotations:
    kubernetes.io/ingress.class: nginx
  managedFields:
    - manager: Mozilla
      operation: Update
      apiVersion: networking.k8s.io/v1beta1
      time: '2021-06-23T12:00:16Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubernetes.io/ingress.class': {}
        'f:spec':
          'f:rules': {}
    - manager: nginx-ingress-controller
      operation: Update
      apiVersion: networking.k8s.io/v1beta1
      time: '2021-06-23T12:00:45Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:loadBalancer':
            'f:ingress': {}
spec:
  rules:
    - host: microname1.subdomain.domain.com
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              serviceName: kylin-job-svc
              servicePort: 7070
    - host: microname2.subdomain.domain.com
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              serviceName: superset
              servicePort: 80
    - {}
status:
  loadBalancer:
    ingress:
      - ip: xx.xx.xx.xx
With this configuration:
microname1.subdomain.domain.com points to Apache Kylin
microname2.subdomain.domain.com points to Apache Superset
This way all microservices can be exposed using the same cluster load balancer (IP) but different sub-sub-domains.
I tried to do the same for SQL Server but it is not working. I'm not sure why, but I have the feeling that the reason is that SQL Server communicates over TCP and not HTTP.
- host: microname3.subdomain.domain.com
  http:
    paths:
      - pathType: ImplementationSpecific
        backend:
          serviceName: mssql-linux
          servicePort: 1433
Any ideas on how I can do the same for TCP services?
Your understanding is correct: by default the NGINX Ingress Controller only supports HTTP and HTTPS traffic (Layer 7), which is most likely why your SQL Server is not working.
Your SQL service operates over raw TCP connections, so it does not take into consideration the custom domains you are trying to set up; they all resolve to the same IP address anyway.
The solution is not to use a custom sub-domain for this service, but to expose it as a TCP service in the NGINX Ingress Controller. For example, you can make this SQL service available on the ingress IP on port 1433:
The Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing ConfigMap where the key is the external port to use and the value indicates the service to expose, using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
To set this up you can follow the steps in the official NGINX Ingress documentation, but there are also more detailed instructions on StackOverflow, for example this one.
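As a rough sketch only (assuming the SQL Service lives in the default namespace and the controller runs in the ingress-nginx namespace; adjust both to your setup), the ConfigMap referenced by --tcp-services-configmap could look like this. Port 1433 also has to be opened on the Ingress controller's Service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> <namespace/service name>:<service port>
  "1433": "default/mssql-linux:1433"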
I've followed this reference to deploy my simple react application into Kubernetes.
https://medium.com/bb-tutorials-and-thoughts/aws-deploying-react-app-with-nodejs-backend-on-eks-e5663cb5017f
But after deploying, I can't see my application in the browser.
So I tried to set an external IP address using this command:
kubectl patch svc XXX -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
Reference is here
Assign External IP to a Kubernetes Service
But I still can't see my deployed application in the browser at
http://10.2.8.192:3000
Here is my deployment.yml file
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test-app
  name: test-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: test-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test-app
    spec:
      containers:
      - image: XXX.dkr.ecr.XXX.amazonaws.com/XXX/XXX:v1
        name: test-app
        imagePullPolicy: Always
        resources: {}
        ports:
        - containerPort: 3000
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    run: test-app
spec:
  ports:
  - port: 3000
    protocol: TCP
  selector:
    app: test-app
  type: NodePort
Please give me any advice. Thank you...
You are mixing 2 ways of exposing a service to the outside world. You want to use NodePort but you are setting an ExternalIP.
Issue root cause
In your setup you are using NodePort, so you need to use the ExternalIP of a node together with the NodePort, which is 31300 (more details below).
Setting externalIPs on a NodePort Service in this setup is pointless (also, 10.2.8.19 is an InternalIP, which only allows connections from inside the cluster).
In your example you are trying to reach the application using the application port number, but you should use the Service's nodePort, which is 31300.
Note
A Service can map any incoming port to a targetPort. By default and for convenience, the targetPort is set to the same value as the port field.
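For illustration, a NodePort Service for the test-app from the question that makes the three port fields explicit could look like this (the nodePort value 31300 is the one referenced in this answer; leave the field out and Kubernetes picks one from the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: test-app
spec:
  type: NodePort
  selector:
    app: test-app
  ports:
  - port: 3000        # port used by other Pods inside the cluster
    targetPort: 3000  # containerPort of the application Pod
    nodePort: 31300   # static port opened on every node's IP
    protocol: TCP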
Exposing application
Generally, you have 3 main ways to expose your application:
NodePort
I don't have access to the Medium article, but I guess NodePort was used in this tutorial.
In this configuration, you have to use type: NodePort in the Service. To connect, you use ExternalHostIP:NodePort. As you are using a cloud environment, your VM should already have an ExternalIP.
ExternalHostIP is the external IP address of a cluster node (the NodePort is opened on every node). To get the ExternalIP of a node you can use:
$ kubectl get node -o wide
To find out on which node a specific Pod was deployed, you can run:
$ kubectl get po -o wide
The NodePort number is assigned from the range 30000-32767.
IMPORTANT
Please remember to configure the firewall to allow traffic on this specific port, or, if it's just for testing, you can allow the whole 30000-32767 range.
Example
Let's say your application Pod was deployed on a node with ExternalIP 35.228.76.198 and your Service NodePort is 31300.
If you configured the firewall rules correctly and set the right containerPort (your application must listen on this port), then opening 35.228.76.198:31300 in the browser should reach your application.
LoadBalancer
In this option, the Service is of type LoadBalancer, which means the cloud provider creates an LB with an ExternalIP. You just need to enter this IP in your browser to reach your application. However, please remember that a LoadBalancer incurs extra cost.
Ingress
In this option you have to use some kind of Ingress controller; the most common is the Nginx Ingress Controller. Depending on your needs, the controller itself can be exposed via the NodePort or LoadBalancer option.
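As a sketch only (the hostname is a placeholder), an Ingress routing to the test-app Service from the question could look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: test-app.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-app
            port:
              number: 3000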
Useful links
Service
Exposing application in AWS
Nginx Ingress on AWS
Please let me know if you were able to reach your application or if you have further questions.
If you want to expose your application with a NodePort, you can have a look at How do I expose the Kubernetes services running on my Amazon EKS cluster?:
It looks like your Service is missing an explicit targetPort.
kubectl get nodes -o wide should return the NodeIP.
NodeIP:NodePort should be reachable if you enable the security group of the nodes to allow incoming traffic through port 31300.
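If the nodes sit behind an AWS security group, a rule along these lines opens the NodePort (the group ID is a placeholder; tighten the CIDR if this is more than a test):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31300 \
  --cidr 0.0.0.0/0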
I have a problem with a simple React app that was created using npx create-react-app react-app. Once deployed on k8s, I get this:
Uncaught SyntaxError: Unexpected token '<'
However, if I kubectl port-forward to the pod and view the app at localhost:3000 (the container's port is 3000; the ClusterIP service listens on 3000 and forwards to 3000), there is no problem at all.
The ingress routing looks to be fine, as I can reach other services within the cluster, just not the app. Some help would be greatly appreciated.
Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app-deployment
  # namespace: gitlab-managed-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      component: react-app
  template:
    metadata:
      labels:
        component: react-app
    spec:
      imagePullSecrets:
        - name: simpleweb-token-namespace
      containers:
        - name: react-app
          image: registry.gitlab.com/mttlong/sample/react-app
          env:
            - name: "PORT"
              value: "3000"
          ports:
            - containerPort: 3000
Cluster ip service:
apiVersion: v1
kind: Service
metadata:
  name: react-app-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: react-app
  ports:
    - port: 3000
      targetPort: 3000
Dockerfile:
FROM node:10.15.3-alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
Ingress Service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: horizon.zeezum.com
    http:
      paths:
      - path: /
        backend:
          serviceName: react-app-cluster-ip-service
          servicePort: 3000
      - path: /api(/|$)(.*)
        backend:
          serviceName: simple-api-nodeport-service
          servicePort: 3050
I ran into the same issue as you have described. I solved it by splitting up the Ingress for the front-end and the API.
In your case this would look something like this:
Front-end ingress service (without rewrite target):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-ingress-frontend-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: horizon.zeezum.com
    http:
      paths:
      - path: /
        backend:
          serviceName: react-app-cluster-ip-service
          servicePort: 3000
Back-end ingress service (with the /$2 rewrite-target):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-ingress-backend-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: horizon.zeezum.com
    http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: simple-api-nodeport-service
          servicePort: 3050
The rest of your configuration should be good.
Remove the nginx.ingress.kubernetes.io/rewrite-target: /$2 line and it should work.
This is most likely the result of a 404 page or a redirect to a page that serves regular HTML instead of the expected JavaScript files. (An HTML page starts with <html> or a <!DOCTYPE...>, hence the '<' as the unexpected token.)
Make sure that you have built the image correctly and that you access the page correctly. You can verify this by manually accessing the URL with the browser, or by looking at the network tab of your browser's development tools to inspect the response.
(I assume that either you have a cached index.html and are trying to access old assets (check your cache headers), or that the paths do not match. In that case, inspect your image and access the URLs manually.)
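A quick way to check this from a terminal (the asset path below is hypothetical; copy the real one from the failing request in the network tab):

curl -I http://horizon.zeezum.com/static/js/main.js
# Content-Type: application/javascript  -> asset is served correctly
# Content-Type: text/html (200 or 404)  -> explains the "Unexpected token '<'"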
I had a similar thing. I got rid of the Ingress altogether, and in my Service yaml file changed my Service to type: NodePort and added the fields protocol: TCP and nodePort: <insert nodeport> under each port.
I got the nodePort values by running kubectl expose deployment <insert deployment name> --type=NodePort --name=example-service, which creates a service and assigns the ports. You can then run kubectl describe services example-service to display all the nodePort values and copy them into your Service yaml file.
When it's all running, you access the website from the browser using the nodePort that was generated, i.e. localhost:31077 for example, and not the actual port 3000.
I then changed any URL routes I had to use the nodePort instead of the actual ports, and boom.
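Using the Deployment name from the question above, the sequence would look roughly like this (example-service is just a throwaway name):

kubectl expose deployment react-app-deployment --type=NodePort --name=example-service
kubectl describe service example-service   # note the NodePort value, e.g. 31077
# copy the generated nodePort into your own Service yaml, then browse to <node-ip>:<nodePort>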
I am trying to get my head around ingress routing to deploy multiple ReactJS applications behind one public IP address. I'm using the SpeedyMath app available at https://github.com/pankajladhar/speedy-math.git. With the routing file below, trying to access the app as http://myapps.centralus.cloudapp.azure.com/speedymath displays a white screen. In the browser logs I see http://myapps.centralus.cloudapp.azure.com/static/js/bundle.js net::ERR_ABORTED 404 (Not Found). I notice index.html getting loaded, but it errors with "Failed to load resource" at the line <script type="text/javascript" src="/static/js/bundle.js"></script></body>
ingress-routing.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapps-ingress
  annotations:
    nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /speedymath
        backend:
          serviceName: speedymath-service
          servicePort: 80
The same application loads properly when the routing file's path is changed from "/speedymath" to "/". But this does not help me build different routing rules based on the incoming URL:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapps-ingress
  annotations:
    nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: speedymath-service
          servicePort: 80
Appreciate your help here
My issue got resolved with a couple of things:
As #mk_sta stated, setting the path to path: /speedymath(/|$)(.*) and the annotation to nginx.ingress.kubernetes.io/rewrite-target: /$2.
To handle the context issue with the ReactJS app, I updated package.json to include "homepage": ".". This makes all links and asset paths relative to the current directory.
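For reference, the package.json change is just one field; the other entries shown here are placeholders for whatever is already in the file:

{
  "name": "speedy-math",
  "homepage": ".",
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build"
  }
}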
I've managed to reproduce the issue in a similar use case with Nginx Ingress controller version 0.25.1:
$ kubectl exec -it $(kubectl get po -l component=controller|awk '{print $1}'| sed '1d') -- /nginx-ingress-controller
A few concerns that might help you resolve the problem:
FYI, since version 0.22.0 of Nginx Ingress, some significant changes to the rewrite annotations have been introduced that are not compatible with previous configurations; read more here.
If you are using a more recent Ingress controller release, you would probably need to slightly adjust the original Ingress definition, according to the documentation:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapps-ingress
  annotations:
    nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /speedymath(/|$)(.*)
        backend:
          serviceName: speedymath-service
          servicePort: 80
Afterwards, you should be able to reach the application endpoint. Note that I added a trailing slash to the target URL, instructing the Nginx engine to properly serve the static content; compare with your URL:
http://myapps.centralus.cloudapp.azure.com/speedymath/
Following this discussion, you can even redirect requests for the non-trailing-slash location by adding an appropriate configuration-snippet annotation to the source Ingress resource:
nginx.ingress.kubernetes.io/configuration-snippet: |
  rewrite ^(/speedymath)$ $1/ permanent;
The last time I did this my annotation was a bit different. That's the only difference I see between what worked for me and your setup.
annotations:
  ingress.kubernetes.io/rewrite-target: /
I also had this problem. It is caused because you specified a path in your Ingress (in your case /speedymath) but the React application will try to access the root path.
This can be solved by adding an environment variable (PUBLIC_URL), set to the same path you specified in the Ingress (which would be /speedymath/ in your case), in your Deployment (or Pod) yaml like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        env:
        - name: PUBLIC_URL
          value: "/<MY_INGRESS_PATH>/"
        image: myapp/image:latest
        ports:
        - containerPort: 80
Hope this can help others!
I am not able to resolve an API hosted as a ClusterIP service on Minikube when calling it from the React JS frontend.
The basic architecture of my application is as follows
React --> .NET core API
Both of these components are hosted as ClusterIP services. I have created an ingress service with HTTP paths pointing to the React component and the .NET Core API.
However, when I try calling it from the browser, the React application renders, but the call to the API fails with
net::ERR_NAME_NOT_RESOLVED
Below are the .yml files for
1. React application
apiVersion: v1
kind: Service
metadata:
  name: frontend-clusterip
spec:
  type: ClusterIP
  ports:
  - port: 59000
    targetPort: 3000
  selector:
    app: frontend
2. .NET core API
apiVersion: v1
kind: Service
metadata:
  name: backend-svc-nodeport
spec:
  type: ClusterIP
  selector:
    app: backend-svc
  ports:
  - port: 5901
    targetPort: 59001
3. ingress service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: frontend-clusterip
          servicePort: 59000
      - path: /api/?(.*)
        backend:
          serviceName: backend-svc-nodeport
          servicePort: 5901
4. frontend deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: upendra409/tasks_tasks.frontend
        ports:
        - containerPort: 3000
        env:
        - name: "REACT_APP_ENVIRONMENT"
          value: "Kubernetes"
        - name: "REACT_APP_BACKEND"
          value: "http://backend-svc-nodeport"
        - name: "REACT_APP_BACKENDPORT"
          value: "5901"
This is the error I get in the browser:
xhr.js:166 GET
http://backend-svc-nodeport:5901/api/tasks net::ERR_NAME_NOT_RESOLVED
I installed curl in the frontend container and tried to exec into the frontend pod to call the backend API using the above URL, but the command doesn't work:
C:\test\tasks [develop ≡ +1 ~6 -0 !]> kubectl exec -it frontend-655776bc6d-nlj7z --curl http://backend-svc-nodeport:5901/api/tasks
Error: unknown flag: --curl
You are getting this error on your local machine because ClusterIP is the wrong Service type for access from outside the cluster. As mentioned in the Kubernetes documentation, a ClusterIP Service is only reachable from within the cluster.
Publishing Services (ServiceTypes)
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Note: You need CoreDNS version 1.7 or higher to use the ExternalName type.
I suggest using the NodePort or LoadBalancer Service type instead.
Refer to the documentation links above for examples.
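As an illustration, here is a sketch of the backend Service from the question switched to NodePort (the nodePort value is arbitrary and only an example from the allowed range):

apiVersion: v1
kind: Service
metadata:
  name: backend-svc-nodeport
spec:
  type: NodePort
  selector:
    app: backend-svc
  ports:
  - port: 5901
    targetPort: 59001
    nodePort: 30901   # example value from the 30000-32767 range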