Unable to access a simple React.js app with Istio on Kubernetes (Docker Desktop)

I have a simple default React application and am trying to access it through Istio, but it is not reachable.
Below are the files used for the deployment.
Dockerfile
FROM node:latest
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Kubernetes deployment files (v1 and v2)
Deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-react-app-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-react-app
      version: v1
  template:
    metadata:
      labels:
        app: my-react-app
        version: v1
    spec:
      containers:
        - name: my-react-app
          image: xxx/react_sample:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
      restartPolicy: Always
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-react-app-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-react-app
      version: v2
  template:
    metadata:
      labels:
        app: my-react-app
        version: v2
    spec:
      containers:
        - name: my-react-app
          image: xxx/react_sample:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
      restartPolicy: Always
---
##########################
# Service
##############
kind: Service
apiVersion: v1
metadata:
  name: my-react-app
spec:
  ports:
    - port: 3000
      name: http
  selector:
    app: my-react-app
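For reference, an equivalent but more explicit form of this Service: targetPort is spelled out (it defaults to port when omitted), and the http port name is kept because Istio uses the port name prefix (or appProtocol) for protocol selection.
kind: Service
apiVersion: v1
metadata:
  name: my-react-app
spec:
  ports:
    - port: 3000
      targetPort: 3000   # matches the containerPort above; defaults to port when omitted
      name: http         # Istio infers HTTP from this name prefix; appProtocol: http also works
  selector:
    app: my-react-app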
Istio Gateway configuration, VirtualService configuration, and DestinationRule:
Istio Configuration
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: appinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appinfo
spec:
  hosts:
    - "*"
  gateways:
    - appinfo-gateway
  http:
    - route:
        - destination:
            host: my-react-app
            subset: v1
          weight: 50
        - destination:
            host: my-react-app
            subset: v2
          weight: 50
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-react-app
spec:
  host: my-react-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
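For reference, a minimal sketch of the same VirtualService with the destination port pinned explicitly; with a single-port Service this is optional, but it makes the routing target unambiguous (port 3000 matches the Service above).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appinfo
spec:
  hosts:
    - "*"
  gateways:
    - appinfo-gateway
  http:
    - route:
        - destination:
            host: my-react-app
            subset: v1
            port:
              number: 3000   # explicit target port on the my-react-app Service
          weight: 50
        - destination:
            host: my-react-app
            subset: v2
            port:
              number: 3000
          weight: 50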

Related

Ingress doesn't serve Django's static files

I have a simple app architecture:
A web app built on Django Rest Framework providing an API
An Nginx reverse proxy used to serve the web app and its static files
A Redis container used by the web app
A React app used as the frontend
I tried to deploy this using Kubernetes locally first, before deploying to the cloud (AWS or something similar).
Here is my global deployment file (I intend to split it into smaller parts later).
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
    - port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      name: http
    - port: 8081
      protocol: TCP
      targetPort: 8081
      name: daphne
  selector:
    app: webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: registry.gitlab.com/anima879/celestorbe/backend
          imagePullPolicy: Always
          volumeMounts:
            - name: static-data
              mountPath: /vol/web
          env:
            - name: SECRET_KEY
              value: secretkey1234
            - name: ALLOWED_HOSTS
              value: "127.0.0.1,localhost,proxy"
          ports:
            - containerPort: 8080
            - containerPort: 8081
      imagePullSecrets:
        - name: gitlab-registry-secret
      volumes:
        - name: static-data
          hostPath:
            path: static-data
---
apiVersion: v1
kind: Service
metadata:
  name: front
  labels:
    app: front
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: front
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
spec:
  selector:
    matchLabels:
      app: front
  template:
    metadata:
      labels:
        app: front
    spec:
      containers:
        - name: front
          image: registry.gitlab.com/anima879/celestorbe/front
          imagePullPolicy: Always
          volumeMounts:
            - name: static-data
              mountPath: /vol/web
          ports:
            - containerPort: 80
          env:
            - name: REACT_APP_API_URL
              value: webapp
      imagePullSecrets:
        - name: gitlab-registry-secret
      volumes:
        - name: static-data
          hostPath:
            path: static-data
---
apiVersion: v1
kind: Service
metadata:
  name: proxy
  labels:
    app: proxy
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
        - name: proxy
          image: registry.gitlab.com/anima879/celestorbe/proxy
          imagePullPolicy: Always
          volumeMounts:
            - name: static-data
              mountPath: /vol/web
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: gitlab-registry-secret
      volumes:
        - name: static-data
          hostPath:
            path: static-data
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      rewrite ^/api/(.*)$ /$1 break;
      rewrite_log on;
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: front
                port:
                  number: 80
          - path: /api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: proxy
                port:
                  number: 80
          - path: /ws
            pathType: Prefix
            backend:
              service:
                name: proxy
                port:
                  number: 80
As you can see, I use an Ingress as the entry point for my application. Every URL should go to the frontend by default (so the user has the UI to use the app), but requests made to the backend start with /api.
For instance:
localhost/login should display the login page from the React app
localhost/api/login should send a login request to the backend (with the POST method)
I need to rewrite the URL sent to the backend to localhost/login, because webapp doesn't understand URLs starting with /api.
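For reference, a minimal sketch of the same /api strip done with ingress-nginx's rewrite-target annotation and a capture group instead of a configuration-snippet. The resource name api-rewrite is made up; the service and port come from the manifest above. Because rewrite-target applies to every path in the Ingress it is set on, the /api rule is shown split out into its own Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-rewrite   # hypothetical name for the split-out /api rule
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # /api/login -> /login on the upstream service
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: proxy
                port:
                  number: 80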
But if I go to an API endpoint in the browser, Django displays a page that allows me to use the endpoint.
My problem is here: when I try to access the API directly from the browser, the CSS is not loaded.
The content type of the CSS files is not correct, and I don't know how to solve this.
I also suspect the Ingress tries to get the CSS from the frontend instead of Django.
I know it is a tricky issue, but if you have a better alternative or some workflow to solve it, I would appreciate it.
Thank you.
P.S. When I don't use an Ingress and instead use a LoadBalancer service (on different ports for the front and the proxy), it works, except that the front can't make requests to the back unless it knows the IP of the proxy LoadBalancer. Because I want very low coupling, I don't think that is a good idea (but I may be wrong).

Problem when making frontend and backend communicate

I'm working with Minikube to build a full-stack Kubernetes application using React as the frontend and ASP.NET Core as the backend.
Here is my configuration.
Deployments and Services
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      serviceAccountName: web-frontend
      containers:
        - name: frontend
          image: frontend
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      serviceAccountName: backend
      containers:
        - name: backend
          image: backend
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
Dockerfile for the frontend
FROM node:alpine as build-image
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm i
COPY . .
CMD ["npm", "run", "start"]
This is my Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-backend-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
          - path: /api/?(.*)
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 5000
However, when I run minikube tunnel to expose the ingress IP locally, I can reach the frontend, but when the frontend makes a fetch request to /api/something, the browser console shows GET http://localhost/api/patients/ 404 (Not Found) and the error SyntaxError: Unexpected token < in JSON at position 0.
Moreover, if I change the Ingress in this way:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-backend-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
          - path: /api/
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 5000
Then I can issue curl localhost/api/something and get the JSON result, but when the frontend tries to contact the backend I get:
GET http://localhost/api/patients/ 500 (Internal Server Error)
SyntaxError: Unexpected end of JSON input
at main.358f50ad.js:2:473736
at s (main.358f50ad.js:2:298197)
at Generator._invoke (main.358f50ad.js:2:297985)
at Generator.next (main.358f50ad.js:2:298626)
at Al (main.358f50ad.js:2:439869)
at a (main.358f50ad.js:2:440073)
This looks strange, because if I run the frontend and the backend outside Kubernetes everything works fine, and the React application correctly fetches the result from the backend (of course using the proxy setting inside package.json).
To contact or link apps together, you can use their Kubernetes-native FQDN (try to ping or telnet it if you want to test the connection). Here is how it works:
The default FQDN of any service is:
<service-name>.<namespace>.svc.cluster.local
In your example above, you should be able to contact your backend service from your frontend with:
backend.YOURNAMESPACENAME.svc.cluster.local:5000
For services in the same namespace, you don't need the FQDN to reach a service; the service name alone is enough:
backend:5000
I don't know where exactly you configure the link between the frontend and the backend, but in any case you should turn this link into a variable and define it in the Kubernetes manifest.
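For reference, a minimal sketch of that last suggestion, assuming the frontend reads a variable such as API_URL (the variable name is made up; the Service name backend and port 5000 come from the manifests above). Note that this only helps code that runs inside the cluster, such as a server-side proxy; a browser cannot resolve cluster DNS names like backend:5000.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      serviceAccountName: web-frontend
      containers:
        - name: frontend
          image: frontend
          ports:
            - containerPort: 80
          env:
            # API_URL is a hypothetical variable name; use whatever key the
            # frontend (or its server-side proxy) actually reads.
            - name: API_URL
              value: "http://backend:5000"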

Reactjs application image not working in kubernetes

I am deploying my React/Redux application to Kubernetes via a GitHub Actions pipeline. The pipeline succeeds, but after the deployment I am only seeing the default Nginx welcome screen. I suspect the issue is on the Kubernetes side. Can someone please help me with this?
My Kubernetes manifests are given below.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-dev
  namespace: myapp
spec:
  replicas: 3
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: artifactory.com:2195/myapp:latest
          resources:
            limits:
              cpu: "3"
              memory: 1Gi
            requests:
              cpu: 300m
              memory: 128Mi
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regsecret
  selector:
    matchLabels:
      app: myapp
My GitHub Actions job is given below:
publish:
  name: Upload to Artifactory
  needs:
    - build
  runs-on: self-hosted
  container:
    image: artifactory.com:2005/ubuntu-docker-kubectl:1.0
  steps:
    - name: Checkout project
      uses: actions/checkout@v2
    - name: Login to On-Prem Registry
      uses: actions/login-action@v1
      with:
        registry: artifactory.com:2195/artifactory
        username: ${{ secrets.ARTIFACTORY_USERNAME }}
        password: ${{ secrets.ARTIFACTORY_PASSWORD }}
    - name: Build and push image to Artifactory
      uses: actions/build-push-action@v2
      with:
        file: 'Dockerfile'
        push: true
        tags: "artifactory.com:2195/myapp:latest"
service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp-dev
  namespace: myapp
  labels:
    app: myapp
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: myapp
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-dev
  namespace: myapp
spec:
  rules:
    - host: dev-myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-dev
                port:
                  number: 80
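For reference, a minimal sketch of the same Ingress with an explicit class, assuming the cluster runs ingress-nginx with an IngressClass named nginx; without a class (annotation or ingressClassName), the controller may ignore the resource, depending on how it is configured.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-dev
  namespace: myapp
spec:
  ingressClassName: nginx   # assumes the installed IngressClass is named "nginx"
  rules:
    - host: dev-myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-dev
                port:
                  number: 80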
Update 1
As per the discussion with @Hans Kilian, the Docker image is fine; we are able to run it successfully on localhost. So the issue is with the Kubernetes deployment.
Can someone please help me with this?
Try using this in your Dockerfile:
WORKDIR /usr/share/nginx/html
COPY /usr/src/app/build/ .

Getting env variables to an NGINX React Kubernetes pod

I have an Nginx Kubernetes pod that I need to pass an .env file to, but I can't get it to work.
Dockerfile for the pod:
FROM node:12-alpine as build-step
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
RUN npm run build
FROM nginx:1.17.1-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
I've tried to pass the env with a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-front-app
data:
  ENV_TEST: "TEST"
And with passing the env in the Deployment, but neither worked.
Edit - the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-front-app
  name: test-front-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: test-front-app
  template:
    metadata:
      labels:
        name: test-front-app
    spec:
      imagePullSecrets:
        - name: gcr-json-key
      containers:
        - name: front-test
          image: gcr.io/PROJECT_ID/IMAGE:TAG
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          env:
            - name: TEST_ONE
              value: "test-value-one/"
          envFrom:
            - configMapRef:
                name: test-front-app
The ENV_TEST key of your test-front-app ConfigMap can be accessed as an env variable like below.
env:
  - name: TEST_ONE
    valueFrom:
      configMapKeyRef:
        name: test-front-app
        key: ENV_TEST
In this way, the TEST_ONE variable with the value TEST will be passed to the deployment.

How to access Kubernetes container environment variables from Next.js application?

In my next.config.js, I have a part that looks like this:
module.exports = {
  serverRuntimeConfig: { // Will only be available on the server side
    mySecret: 'secret'
  },
  publicRuntimeConfig: { // Will be available on both server and client
    PORT: process.env.PORT,
    GOOGLE_CLIENT_ID: process.env.GOOGLE_CLIENT_ID,
    BACKEND_URL: process.env.BACKEND_URL
  }
}
I have a .env file, and when run locally the Next.js application successfully fetches the environment variables from it.
I refer to the env variables like this, for example:
axios.get(publicRuntimeConfig.BACKOFFICE_BACKEND_URL)
However, when the application is deployed onto my Kubernetes cluster, the environment variables set in the deploy file are not picked up, so they come back as undefined.
I read that .env files cannot be read due to the differences between the frontend (browser-based) and the backend (Node-based), but there must be some way to make this work.
Does anyone know how to use environment variables saved in your pod/container deploy file in a frontend (browser-based) application?
Thanks.
EDIT 1:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "38"
  creationTimestamp: xx
  generation: 40
  labels:
    app: appname
  name: appname
  namespace: development
  resourceVersion: xx
  selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname
  uid: xxx
spec:
  progressDeadlineSeconds: xx
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: appname
      tier: sometier
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
        - env:
            - name: NODE_ENV
              value: development
            - name: PORT
              value: "3000"
            - name: SOME_VAR
              value: xxx
            - name: SOME_VAR
              value: xxxx
          image: someimage
          imagePullPolicy: Always
          name: appname
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: xxx
      lastUpdateTime: xxxx
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
  observedGeneration: 40
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
You can create a ConfigMap and then mount it as a file in your deployment with your custom environment variables.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "38"
  creationTimestamp: xx
  generation: 40
  labels:
    app: appname
  name: appname
  namespace: development
  resourceVersion: xx
  selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname
  uid: xxx
spec:
  progressDeadlineSeconds: xx
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: appname
      tier: sometier
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
        - env:
            - name: NODE_ENV
              value: development
            - name: PORT
              value: "3000"
            - name: SOME_VAR
              value: xxx
            - name: SOME_VAR
              value: xxxx
          volumeMounts:
            - name: environment-variables
              mountPath: "your/path/to/store/the/file"
              readOnly: true
          image: someimage
          imagePullPolicy: Always
          name: appname
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      volumes:
        - name: environment-variables
          configMap:
            name: environment-variables
            items:
              - key: .env
                path: .env
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: xxx
      lastUpdateTime: xxxx
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
  observedGeneration: 40
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
I added the following configuration in your deployment file:
volumeMounts:
  - name: environment-variables
    mountPath: "your/path/to/store/the/file"
    readOnly: true
volumes:
  - name: environment-variables
    configMap:
      name: environment-variables
      items:
        - key: .env
          path: .env
You can then create a ConfigMap with the key ".env" containing your environment variables on Kubernetes.
The ConfigMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: environment-variables
  namespace: your-namespace
data:
  .env: |
    variable1: value1
    variable2: value2
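If the application reads the mounted file with a standard dotenv-style loader, the usual syntax is KEY=value rather than YAML-style mappings; a minimal sketch of the same ConfigMap in that form (the variable names are placeholders).
apiVersion: v1
kind: ConfigMap
metadata:
  name: environment-variables
  namespace: your-namespace
data:
  .env: |
    # dotenv-style key=value pairs, parsed by the app at startup
    VARIABLE1=value1
    VARIABLE2=value2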
