kubectl secrets not accessible in React app Pods

I've tried to avoid setting specific env vars for a React FE, and until now I haven't needed to. But I'm working on social authentication, with Azure AD specifically, and now I do have a use case for it.
I acknowledge the AAD_TENANT_ID and AAD_CLIENT_ID aren't exactly "secret" or sensitive information and will be compiled into the JS, but I'm trying to do this for a few reasons:
I can more easily manage dev and prod keys from a Key Vault...
Having environment-independent code (i.e., process.env.AAD_TENANT_ID will work whether it is dev or prod).
But it doesn't work.
The issue I'm running into is that the env vars are not accessible at process.env despite having the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-v2-deployment-dev
  namespace: dev
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: admin-v2
  template:
    metadata:
      labels:
        component: admin-v2
    spec:
      containers:
        - name: admin-v2
          image: admin-v2
          ports:
            - containerPort: 4001
          env:
            - name: AAD_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: app-dev-secrets
                  key: AAD_CLIENT_ID
            - name: AAD_TENANT_ID
              valueFrom:
                secretKeyRef:
                  name: app-dev-secrets
                  key: AAD_TENANT_ID
---
apiVersion: v1
kind: Service
metadata:
  name: admin-v2-cluster-ip-service-dev
  namespace: dev
spec:
  type: ClusterIP
  selector:
    component: admin-v2
  ports:
    - port: 4001
      targetPort: 4001
When I do the following anywhere in the code, it comes back undefined:
console.log(process.env.AAD_CLIENT_ID);
console.log(process.env.AAD_TENANT_ID);
The values are definitely there when I check secrets in the namespace and in the Pod itself:
Environment:
AAD_CLIENT_ID: <set to the key 'AAD_CLIENT_ID' in secret 'app-dev-secrets'> Optional: false
AAD_TENANT_ID: <set to the key 'AAD_TENANT_ID' in secret 'app-dev-secrets'> Optional: false
So how should one go about getting kubectl secrets into React Pods?

I am guessing you are using create-react-app for your React FE. You have to make sure that your environment variables start with REACT_APP_, else they will be ignored inside the app.
According to the create-react-app documentation:
Note: You must create custom environment variables beginning with REACT_APP_.
Any other variables except NODE_ENV will be ignored to avoid accidentally
exposing a private key on the machine that could have the same name.
Source - https://create-react-app.dev/docs/adding-custom-environment-variables/
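Applied to the Deployment from the question, that means renaming the variables in the env section (the Secret keys themselves can stay as they are); a minimal sketch:
env:
  - name: REACT_APP_AAD_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: app-dev-secrets
        key: AAD_CLIENT_ID
  - name: REACT_APP_AAD_TENANT_ID
    valueFrom:
      secretKeyRef:
        name: app-dev-secrets
        key: AAD_TENANT_ID
The code then reads process.env.REACT_APP_AAD_CLIENT_ID. One caveat: create-react-app inlines these values when npm run build runs, so they must be present at build time; a variable that only exists in the running container will not show up in an already-compiled bundle.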

Related

How to override env variables in React JS application with Kubernetes?

I have a simple React JS application and it's using an environment variable (REACT_APP_BACKEND_SERVER_URL) defined in a .env file. Now I'm trying to deploy this application to minikube using Kubernetes.
This is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-ui
  template:
    metadata:
      name: test-ui-pod
      labels:
        app: test-ui
    spec:
      containers:
        - name: test-ui
          image: test-ui:1.0.2
          ports:
            - containerPort: 80
          env:
            - name: "REACT_APP_BACKEND_SERVER_URL"
              value: "http://127.0.0.1:59058"
When I run the application it works, but REACT_APP_BACKEND_SERVER_URL gives the value I defined in the .env file, not the one I'm overriding. Can someone help me with this please? How do I override the env variable using the Kubernetes deployment?
After starting the app with your deployment YAML and checking inside the pod, I do see the environment variable set:
REACT_APP_BACKEND_SERVER_URL=http://127.0.0.1:59058
You can check that by doing a kubectl exec -it <pod-name> -- sh and running the env command.
So REACT_APP_BACKEND_SERVER_URL is there in the environment variables, and it's available for your application to use. I suspect you may need to understand better, from the React app side, how the .env file is used.
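One thing to check, assuming this is a create-react-app build served as static files: REACT_APP_* values are read from .env and inlined into the JavaScript bundle at build time, so a value set only in the Deployment will appear in env output yet never reach the already-compiled code. A common workaround, sketched below with a hypothetical env.js filename and assuming an nginx-based image, is to render a small config file from the container's environment at startup and load it from index.html:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-ui
  template:
    metadata:
      labels:
        app: test-ui
    spec:
      containers:
        - name: test-ui
          image: test-ui:1.0.2
          ports:
            - containerPort: 80
          env:
            - name: REACT_APP_BACKEND_SERVER_URL
              value: "http://127.0.0.1:59058"
          # Hypothetical startup command: write the runtime value into a JS file
          # served by nginx, then start nginx in the foreground. index.html must
          # include <script src="/env.js"></script>, and the app reads
          # window.ENV.REACT_APP_BACKEND_SERVER_URL instead of process.env.
          command: ["/bin/sh", "-c"]
          args:
            - >
              echo "window.ENV = { REACT_APP_BACKEND_SERVER_URL: '$REACT_APP_BACKEND_SERVER_URL' };"
              > /usr/share/nginx/html/env.js
              && exec nginx -g 'daemon off;'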

How to access Kubernetes container environment variables from React.js application?

I have a create-react-app with default configurations.
I have some PORT and API values inside a .env file, configured with
REACT_APP_PORT=3000
and used inside the app with process.env.REACT_APP_PORT.
I have my server deployed on Kubernetes.
Can someone explain how to configure my create-react-app to use environment variables provided by pods/containers?
I want to access the cluster IP via the name given by kubectl get svc.
Update 1:
I have the opposite scenario: I don't want my frontend env variables to be configured in the Kubernetes pod container, but I want to use the pod's env variables,
e.g. CLUSTER_IP and CLUSTER_PORT, with their names defined by the pod's env variables, inside my React app.
For example:
NAME   TYPE        CLUSTER-IP
XYZ    ClusterIP   x.y.z.a
I want to access XYZ in the React app to point to the cluster IP (x.y.z.a).
Use Pod fields as values for environment variables
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c" ]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
            printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
            sleep 10;
          done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never
https://kubernetes.io/docs/tasks/inject-data-application/
Maybe the above example will help you.
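For the CLUSTER-IP part of the question specifically: besides the downward API, Kubernetes injects service discovery variables of the form <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT (name uppercased, dashes turned into underscores) into every pod created after the Service exists. A minimal pod to verify that, assuming your service is named xyz:
apiVersion: v1
kind: Pod
metadata:
  name: svc-env-check
spec:
  containers:
    - name: check
      image: k8s.gcr.io/busybox
      # Prints the host/port variables Kubernetes injects for a Service named "xyz".
      command: ["sh", "-c", "printenv XYZ_SERVICE_HOST XYZ_SERVICE_PORT"]
  restartPolicy: Never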
try this:
kubectl create configmap react-config --from-literal=REACT_APP_PORT=3000
and then:
spec:
  containers:
    - name: create-react-app
      image: gcr.io/google-samples/node-hello:1.0
      envFrom:
        - configMapRef:
            name: react-config
Now you have configured your env from "outside" the pod.
See also the documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
Try the following:
spec:
  containers:
    - name: create-react-app
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: REACT_APP_PORT
          value: "3000"

Frontend can't resolve backend name within k8s cluster

I'm trying to deploy a simple Angular/Express app on GKE and the http requests from the frontend can't find the name of the express app.
Here's an example of one GET request. I changed the request from 'localhost' to 'express' because that is the name of the ClusterIP service set up in the cluster. Also, I'm able to curl this URL from the Angular pod and get JSON returned as expected.
getPups() {
  this.http.get<{ message: string, pups: any }>("http://express:3000/pups")
    .pipe(map((pupData) => {
      return pupData.pups.map(pup => {
        return {
          name: pup.name,
          breed: pup.breed,
          quote: pup.quote,
          id: pup._id,
          imagePath: pup.imagePath,
          rates: pup.rates
        };
      });
    }))
    .subscribe((transformedPups) => {
      this.pups = transformedPups;
      this.pupsUpdated.next([...this.pups]);
    });
}
Here's the angular deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: puprate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: puprate
          image: brandonjones085/puprate
          ports:
            - containerPort: 4200
---
apiVersion: v1
kind: Service
metadata:
  name: puprate-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: web
  ports:
    - port: 4200
      targetPort: 4200
And the express deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: brandonjones085/puprate-express
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: express
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 3000
      targetPort: 3000
Your frontend app is making the call from outside your cluster, and therefore needs a way to reach it. Because you are serving http, the best way to set that up will be with an ingress.
First, make sure you have an ingress controller set up in your cluster (e.g. the nginx ingress controller): https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke
Then, set up your express app with a service (from your question I see you already have that set up on port 3000, that's good, though in the service I recommend changing the port to 80; not critical though).
With that, set up your ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: express
spec:
  rules:
    - host: <a domain you own>
      http:
        paths:
          # NOTICE!! Have your express app listen on that path, or set up nginx
          # rewrite rules (I recommend the former, it's much easier to understand)
          - path: /api
            backend:
              serviceName: express
              servicePort: 3000 # or 80 if you decide to change that
Do the same for your web deployment, so you can serve your frontend directly:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: <a domain you own>
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 4200 # or 80 if you decide to change that
Notice that both ingresses are using the same host but different paths - that's important for what's coming next
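A side note for newer clusters: the networking.k8s.io/v1beta1 Ingress API used above was removed in Kubernetes 1.22. On current versions the same express Ingress would be written against the stable networking.k8s.io/v1 API, roughly like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: express
spec:
  rules:
    - host: <a domain you own>
      http:
        paths:
          - path: /api
            pathType: Prefix # required in the v1 API
            backend:
              service:
                name: express
                port:
                  number: 3000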
In your Angular app, change this:
this.http.get<{message:string, pups: any}>("http://express:3000/pups")
to this:
this.http.get<{message:string, pups: any}>("/api/pups")
Browsers will parse that to <domain in your address bar>/api/pups
Since you are using GKE, when you set up the ingress controller you will get a load balancer in Google Cloud; make sure it has a DNS entry that directs there.
I'm assuming you already own a domain, but if you don't yet, just add the IP you got to your personal hosts file until you get one, like so:
<ip of load balancer> <domain you want>
# for example
45.210.10.15 awesome-domain.com
Now use the browser to go to the domain you own: you should get the frontend served to you, and since you are calling your API with an address that starts with /, the API call will go to the same host and be redirected by your ingress, this time to your express app instead of the frontend server.
Angular is running in your browser, not in the pod inside the cluster.
The requests will therefore originate externally, and the URL must point to the Ingress or LoadBalancer of your backend service.

How to connect front to back in k8s cluster internal (connection refused)

I get an error while trying to connect a React frontend web app to a Node.js Express API server inside a Kubernetes cluster.
I can navigate in the browser to http://localhost:3000 and the web site is OK.
But I can't navigate to http://localhost:3008, as expected (it should not be exposed).
My goal is to pass the REACT_APP_API_URL environment variable to the frontend in order to set the axios baseURL and be able to establish communication between the front end and its API server.
deploy-front.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: binomio/gbpd-front:k8s-3
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always
service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 3000
      targetPort: 3000
  type: LoadBalancer
Deploy-back.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: binomio/gbpd-back:dev
          ports:
            - name: http
              containerPort: 3008
service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
    - protocol: "TCP"
      port: 3008
      targetPort: http
I tried many combinations, and also tried adding "LoadBalancer" to the back service, but nothing...
I can connect perfectly to localhost:3000 and use the frontend, but the frontend can't connect to the backend service.
Question 1: What ip/name should I use in order to pass REACT_APP_API_URL to the frontend correctly?
Question 2: Why is curl localhost:3008 not answering?
After 2 days of trying almost everything in the k8s official docs... I can't figure out what's happening here, so any help will be much appreciated.
kubectl describe svc gbpd-api
Response:
Name: gbpd-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"gbpd-api","namespace":"default"},"spec":{"ports":[{"port":3008,"p...
Selector: app=gbpd-api,tier=backend
Type: LoadBalancer
IP: 10.107.145.227
LoadBalancer Ingress: localhost
Port: <unset> 3008/TCP
TargetPort: http/TCP
NodePort: <unset> 31464/TCP
Endpoints: 10.1.1.48:3008,10.1.1.49:3008,10.1.1.50:3008
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I tested your environment, and it worked when using an Nginx image. Let's review the environment:
The front deployment is correctly described.
The front service exposes it as a LoadBalancer, meaning your frontend is accessible from outside; perfect.
The back deployment is also correctly described.
The backend service stays as ClusterIP, in order to be accessible only from inside the cluster; great.
Below I'll demonstrate the communication between front end and back end.
I'm using the same yamls you provided, just with the image changed to Nginx for example purposes, and since it's an http server I'm changing containerPort to 80.
Question 1: What ip/name should I use in order to pass REACT_APP_API_URL to the frontend correctly?
I added the ENV variable to the front deploy as requested, and I'll use it to demonstrate as well. You must use the service name to curl; I used the short version because we are working in the same namespace. You can also use the full name: http://gbpd-api.default.svc.cluster.local:3008
Reproduction:
I created the yamls and applied them:
$ cat deploy-front.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: nginx
          env:
            - name: REACT_APP_API_URL
              value: http://gbpd-api:3008
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always
$ cat service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 3000
      targetPort: 80
  type: LoadBalancer
$ cat deploy-back.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: nginx
          ports:
            - name: http
              containerPort: 80
$ cat service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
    - protocol: "TCP"
      port: 3008
      targetPort: http
$ kubectl apply -f deploy-front.yaml
deployment.apps/gbpd-front created
$ kubectl apply -f service-front.yaml
service/gbpd-front created
$ kubectl apply -f deploy-back.yaml
deployment.apps/gbpd-api created
$ kubectl apply -f service-back.yaml
service/gbpd-api created
Remember, in Kubernetes communication is designed to go through services, because pods are always recreated when there is a change in the deployment or when a pod fails.
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/gbpd-api-dc5b4b74b-kktb9 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-mzpbg 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-t6qxh 1/1 Running 0 41m
pod/gbpd-front-66b48f8b7c-4zstv 1/1 Running 0 30m
pod/gbpd-front-66b48f8b7c-h58ds 1/1 Running 0 31m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gbpd-api ClusterIP 10.0.10.166 <none> 3008/TCP 40m
service/gbpd-front LoadBalancer 10.0.11.78 35.223.4.218 3000:32411/TCP 42m
The pods are the workers, and since they are replaceable by nature, we will connect to a frontend pod to simulate its behaviour and try to connect to the backend service (which is the network layer that will direct the traffic to one of the backend pods).
The nginx image does not come with curl preinstalled, so I will have to install it for demonstration purposes:
$ kubectl exec -it pod/gbpd-front-66b48f8b7c-4zstv -- /bin/bash
root@gbpd-front-66b48f8b7c-4zstv:/# apt update && apt install curl -y
done.
root@gbpd-front-66b48f8b7c-4zstv:/# curl gbpd-api:3008
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Now let's try using the environment variable that was defined:
root@gbpd-front-66b48f8b7c-4zstv:/# printenv | grep REACT
REACT_APP_API_URL=http://gbpd-api:3008
root@gbpd-front-66b48f8b7c-4zstv:/# curl $REACT_APP_API_URL
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Considerations:
Question 2: Why is curl localhost:3008 not answering?
Since all the yamls are correctly described, you must check whether image: binomio/gbpd-back:dev is correctly serving on port 3008 as intended.
Since it's not a public image, I can't test it, so I'll give you troubleshooting steps:
Just like we logged inside the front-end pod, you will have to log into this backend pod and test curl localhost:3008.
If it's based on a Linux distro with apt-get, you can run the commands just like I did in my demo:
get the pod name from the backend deploy (example: gbpd-api-6676c7695c-6bs5n)
run kubectl exec -it pod/<POD_NAME> -- /bin/bash
then run apt update && apt install curl -y
and test curl localhost:3008
if there is no answer, run apt update && apt install net-tools
and test netstat -nlpt; it will show you the services running and their respective ports, for example:
root@gbpd-api-585df9cb4d-xr6nk:/# netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp   0      0      0.0.0.0:80       0.0.0.0:*          LISTEN   1/nginx: master pro
If the pod does not return anything even with this approach, you will have to check the code in the image.
Let me know if you need help after that!

Unable to connect to api server of react js app using kubernetes?

I am basically trying to run a React JS app which is mainly composed of 3 services, namely a Postgres DB, an API server, and a UI frontend (served using nginx). Currently the app works as expected in development mode using docker-compose, but when I tried to run this in production using Kubernetes, I was not able to access the API server of the app (connection refused).
Since I want to run this in production using Kubernetes, I created yaml files for each of the services and then tried running them using kubectl apply. I have also tried this with and without using a persistent volume for the API server, but none of this helped.
Docker-compose file (this works and I am able to connect to the API server at port 8000):
version: "3"
services:
pg_db:
image: postgres
networks:
- wootzinternal
ports:
- 5432
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=wootz
volumes:
- wootz-db:/var/lib/postgresql/data
apiserver:
image: wootz-backend
volumes:
- ./api:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./api
dockerfile: Dockerfile
networks:
- wootzinternal
depends_on:
- pg_db
ports:
- '8000:8000'
ui:
image: wootz-frontend
volumes:
- ./client:/usr/src/app
- /usr/src/app/build
- /usr/src/app/node_modules
build:
context: ./client
dockerfile: Dockerfile
networks:
- wootzinternal
ports:
- '80:3000'
volumes:
wootz-db:
networks:
wootzinternal:
driver: bridge
My API server yaml for running in Kubernetes (this doesn't work and I can't connect to the API server at port 8000):
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
  selector:
    app: apiserver
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  selector:
    matchLabels:
      app: apiserver
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiserver
        tier: backend
    spec:
      containers:
        - image: suji165475/devops-sample:corspackedapi
          name: apiserver
          env:
            - name: POSTGRES_DB_USER
              value: postgres
            - name: POSTGRES_DB_PASSWORD
              value: password
            - name: POSTGRES_DB_HOST
              value: postgres
            - name: POSTGRES_DB_PORT
              value: "5432"
          ports:
            - containerPort: 8000
              name: myport
What changes should I make to my API server yaml (Kubernetes) so that I can access it on port 8000? Currently I am getting a connection refused error.
The default service type on Kubernetes is ClusterIP, which makes the service reachable inside the cluster but not exposed to the outside.
This is your service using the LoadBalancer type:
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  type: LoadBalancer
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
  selector:
    app: apiserver
    tier: backend
With that, you can see how the service expects to have an external IP address by running kubectl describe service apiserver
In case you want to have more control over how your requests are routed to that service, you can add an Ingress in front of that same service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: apiserver
  name: apiserver
spec:
  rules:
    - host: apiserver.example.com
      http:
        paths:
          - backend:
              serviceName: apiserver
              servicePort: 8000
            path: /*
Your service is only exposed over the internal Kubernetes network.
This is because if you do not specify a spec.type, the default is ClusterIP.
To expose your application you can follow at least 3 ways:
LoadBalancer: you can specify spec.type: LoadBalancer. A LoadBalancer service exposes your application on the (public) network. This works great if your cluster is a cloud service (GKE, DigitalOcean, AKS, Azure, ...): the cloud will take care of providing you the public IP and routing the network traffic to all your nodes. Usually this is not the best method, because a cloud load balancer has a cost (depending on the cloud) and if you need to expose a lot of services the situation can become difficult to maintain.
NodePort: you can specify spec.type: NodePort. This exposes the Service on each node's IP at a static port (the NodePort).
You'll be able to contact the service from outside the cluster by requesting <NodeIP>:<NodePort> (see the sketch after this list).
Ingress: an Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. This is the most common scenario for a simple http/https application. It allows you to easily manage SSL termination and routing.
You need to deploy an ingress controller to make this work, like a simple nginx; all the main clouds can do this for you with a simple setting when you create the cluster.
Read here for more information about services
Read here for more information about ingress
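The answers above only show YAML for the LoadBalancer variant; for completeness, here is a NodePort sketch of the same service (the nodePort value is a hypothetical pick from the default 30000-32767 range; omit the field to let Kubernetes choose one):
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  type: NodePort
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
      nodePort: 30080 # hypothetical static port within the cluster's NodePort range
  selector:
    app: apiserver
    tier: backend
The API would then be reachable at <NodeIP>:30080 from outside the cluster.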
