How to access Kubernetes container environment variables from a React.js application?

I have a create-react-app with the default configuration.
I have some PORT and API values inside a .env file, configured with
REACT_APP_PORT=3000
and used inside the app via process.env.REACT_APP_PORT.
I have my server deployed on Kubernetes.
Can someone explain how to configure my create-react-app to use environment variables provided by the pods/containers?
I want to access the cluster IP via the name given by kubectl get svc.
Update 1:
I have the opposite scenario: I don't want my frontend env variables to be configured in the Kubernetes pod container; rather, I want to use the pod's env variables, e.g. CLUSTER_IP and CLUSTER_PORT, with their names defined by the pod's env variables, inside my React app.
For example:
NAME   TYPE        CLUSTER-IP
XYZ    ClusterIP   x.y.z.a
I want to access XYZ in the React app to point to the cluster IP (x.y.z.a).

Use Pod fields as values for environment variables
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c" ]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
            printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
            sleep 10;
          done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never
https://kubernetes.io/docs/tasks/inject-data-application/_print/
Maybe the example above will help you.
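To check that the injected values arrive, the pod above prints them in a loop, so (standard kubectl, nothing assumed beyond the manifest):
kubectl logs dapi-envars-fieldref
Note that these downward API fields describe the pod itself (node name, pod name, namespace, pod IP); they don't expose another service's cluster IP. For the XYZ scenario in the question, Kubernetes separately injects XYZ_SERVICE_HOST and XYZ_SERVICE_PORT env vars into pods created after the service exists, which may be closer to what is being asked.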

Try this:
kubectl create configmap react-config --from-literal=REACT_APP_PORT=3000
and then:
spec:
  containers:
    - name: create-react-app
      image: gcr.io/google-samples/node-hello:1.0
      envFrom:
        - configMapRef:
            name: react-config
Now you have configured your env from "outside" the pod.
See also the documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
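If you only need individual keys rather than the whole ConfigMap, a per-key variant with configMapKeyRef also works; a minimal sketch against the same react-config map:
env:
  - name: REACT_APP_PORT
    valueFrom:
      configMapKeyRef:
        name: react-config
        key: REACT_APP_PORT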

Try the following:
spec:
  containers:
    - name: create-react-app
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: REACT_APP_PORT
          value: "3000"

Related

Deploying SQL Server in Kubernetes: context deadline exceeded while attempting to pull mssql image

When I try to run SQL Server in Kubernetes with the mcr.microsoft.com/mssql/server image in a minikube cluster, after several seconds I get the following in the events:
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Normal   Pulling  38h (x77 over 47h)    kubelet  Pulling image "mcr.microsoft.com/mssql/server"
  Normal   BackOff  38h (x1658 over 47h)  kubelet  Back-off pulling image "mcr.microsoft.com/mssql/server"
  Warning  Failed   38h (x79 over 47h)    kubelet  Failed to pull image "mcr.microsoft.com/mssql/server": rpc error: code = Unknown desc = context deadline exceeded
Pulling and running the image in Docker Desktop works fine.
What I've already tried:
- Specifying a tag like :2019-latest;
- Specifying an imagePullPolicy like IfNotPresent or Never. It seems that even after pulling the image via PowerShell directly, Kubernetes doesn't see it locally (but Docker does).
I suspect the reason is that the image is too large and Kubernetes' default pull timeout is too short, but I'm a newbie with Kubernetes and haven't checked this yet. At least, I don't see anything about it in the SQL Server examples.
Here's the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - image: mcr.microsoft.com/mssql/server
          name: mssql
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          ports:
            - containerPort: 1433
              name: mssql
          securityContext:
            privileged: true
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mysql-pv-claim
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
  namespace: mynamespace
spec:
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  selector:
    app: mssql
  type: LoadBalancer
pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Could you please help me figure out what I'm doing wrong? Let me know if you need more details.
Thank you!
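One minikube-specific point about the second bullet above: minikube runs its own container runtime inside its VM, so images pulled with Docker Desktop on the host are not automatically visible to the cluster, which would explain why Docker sees the image but Kubernetes doesn't. A possible workaround (the tag here is just an example) is to load the image into minikube directly and then rely on imagePullPolicy: IfNotPresent:
minikube image load mcr.microsoft.com/mssql/server:2019-latest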

kubectl secrets not accessible in React app Pods

I've tried to avoid setting specific env vars for a React FE, and really haven't had the need to. But I'm working on social authentication, specifically with Azure AD, and now I do have a use case for it.
I acknowledge that AAD_TENANT_ID and AAD_CLIENT_ID aren't exactly "secret" or sensitive information and will be compiled into the JS, but I'm trying to do this for a few reasons:
- I can more easily manage dev and prod keys from a Key Vault...
- Having environment-independent code (i.e., process.env.AAD_TENANT_ID will work whether it is dev or prod).
But it doesn't work.
The issue I'm running into is that the env vars are not accessible at process.env, despite having the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-v2-deployment-dev
  namespace: dev
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: admin-v2
  template:
    metadata:
      labels:
        component: admin-v2
    spec:
      containers:
        - name: admin-v2
          image: admin-v2
          ports:
            - containerPort: 4001
          env:
            - name: AAD_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: app-dev-secrets
                  key: AAD_CLIENT_ID
            - name: AAD_TENANT_ID
              valueFrom:
                secretKeyRef:
                  name: app-dev-secrets
                  key: AAD_TENANT_ID
---
apiVersion: v1
kind: Service
metadata:
  name: admin-v2-cluster-ip-service-dev
  namespace: dev
spec:
  type: ClusterIP
  selector:
    component: admin-v2
  ports:
    - port: 4001
      targetPort: 4001
When I do the following anywhere in the code, it comes back undefined:
console.log(process.env.AAD_CLIENT_ID);
console.log(process.env.AAD_TENANT_ID);
The values are definitely there when I check the Secrets in the namespace and in the Pod itself:
Environment:
  AAD_CLIENT_ID:  <set to the key 'AAD_CLIENT_ID' in secret 'app-dev-secrets'>  Optional: false
  AAD_TENANT_ID:  <set to the key 'AAD_TENANT_ID' in secret 'app-dev-secrets'>  Optional: false
So how should one go about getting Kubernetes Secrets into React Pods?
I am guessing you are using create-react-app for the React FE. You have to make sure that your environment variables start with REACT_APP_, else they will be ignored inside the app.
According to the create-react-app documentation:
Note: You must create custom environment variables beginning with REACT_APP_. Any other variables except NODE_ENV will be ignored to avoid accidentally exposing a private key on the machine that could have the same name.
Source - https://create-react-app.dev/docs/adding-custom-environment-variables/
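A minimal sketch of that fix, assuming the same secret keys as in the question, is to expose the values under prefixed names in the Deployment:
env:
  - name: REACT_APP_AAD_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: app-dev-secrets
        key: AAD_CLIENT_ID
  - name: REACT_APP_AAD_TENANT_ID
    valueFrom:
      secretKeyRef:
        name: app-dev-secrets
        key: AAD_TENANT_ID
Keep in mind that create-react-app reads REACT_APP_* variables when the bundle is built (or when the dev server starts), so these pod-level variables only take effect if that step happens inside the container; a statically pre-built bundle will not pick them up.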

Connect to local SQL Server Express database from inside minikube cluster

I'm trying to access my SQL Server Express database, hosted on my local machine, from inside a minikube pod. I tried to follow the approach described in the Kubernetes official docs. When inspecting my container, I found that my application crashes every time I create the pod, because the application is unable to connect to the local database.
This is my config:
apiVersion: v1
kind: Service
metadata:
  name: sqlserver-svc
spec:
  ports:
    - protocol: TCP
      port: 1443
      targetPort: 1433
===========================
apiVersion: v1
kind: Endpoints
metadata:
  name: sqlserver-svc
subsets:
  - addresses:
      - ip: 192.168.0.101
    ports:
      - port: 1433
======== Application container ==========
apiVersion: v1
kind: Pod # object type that will reside in the Kubernetes cluster
metadata:
  name: api-pod
  labels:
    component: web
spec:
  containers:
    - name: api
      image: nayan2/simptekapi
      ports:
        - containerPort: 5000
      env:
        - name: DATABASE_HOST
          value: "sqlserver-svc:1443\\SQLEXPRESS"
        - name: DATABASE_PORT
          value: '1433'
        - name: DATABASE_USER
          value: sa
        - name: DATABASE_PW
          value: "1234"
        - name: JWT_SECRET
          value: sec
        - name: NODE_ENV
          value: production
================================
apiVersion: v1
kind: Service
metadata:
  name: api-node-port
spec:
  type: NodePort
  ports:
    - port: 4200
      targetPort: 5000
      nodePort: 31515
  selector:
    component: web
It is obvious that I am doing something wrong. I am relatively new to Docker/container and Kubernetes technology and still learning. Can anybody help me with this?
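One detail worth double-checking in the config above (an observation, not a confirmed fix): the Service exposes port 1443 while SQL Server listens on 1433, and DATABASE_HOST bakes 1443 into the host string, so it's worth confirming which port the app actually dials. A quick reachability test from a throwaway pod inside the cluster:
kubectl run -it --rm netcheck --image=busybox --restart=Never -- sh
# then, inside the pod:
telnet sqlserver-svc 1443
telnet 192.168.0.101 1433
If the direct IP works but the service name doesn't, the Service/Endpoints wiring is the likely problem; if neither works, the host firewall or SQL Server's TCP/IP settings are the more likely cause.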

Accessing container mounted volumes in Kubernetes from docker container

I am currently trying to access a file mounted into my Kubernetes container from a Docker image. I need to pass the file in with a flag when the Docker image is run.
The Docker image is usually run (outside Kubernetes) using the command:
docker run -p 6688:6688 -v ~/.chainlink-ropsten:/chainlink -it --env-file=.env smartcontract/chainlink local n -p /chainlink/.password -a /chainlink/.api
Now I have successfully used the following config to mount my env, password, and api files at /chainlink, but when attempting to access the files during the run I get the error:
flag provided but not defined: -password /chainlink/.password
The following is my current Kubernetes Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chainlink-deployment
  labels:
    app: chainlink-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chainlink-node
  template:
    metadata:
      labels:
        app: chainlink-node
    spec:
      containers:
        - name: chainlink
          image: smartcontract/chainlink:latest
          args: [ "local", "n", "--password /chainlink/.password", "--api /chainlink/.api" ]
          ports:
            - containerPort: 6689
          volumeMounts:
            - name: config-volume
              mountPath: /chainlink/.env
              subPath: .env
            - name: api-volume
              mountPath: /chainlink/.api
              subPath: .api
            - name: password-volume
              mountPath: /chainlink/.password
              subPath: .password
      volumes:
        - name: config-volume
          configMap:
            name: node-env
        - name: api-volume
          configMap:
            name: api-env
        - name: password-volume
          configMap:
            name: password-env
Is there some definition I am missing in my file that allows me to access the mounted volumes when running my docker image?
Change your args to:
args: [ "local", "n", "--password", "/chainlink/.password", "--api", "/chainlink/.api" ]
The way you currently have it, the whole string --password /chainlink/.password, including the space, is treated as a single flag. That's what the error
flag provided but not defined: -password /chainlink/.password
is telling you.
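For comparison (a sketch, not part of the original answer): each element of args becomes one argv entry, so the corrected list reproduces what the shell does when it splits the original docker run command on whitespace:
docker run ... smartcontract/chainlink local n --password /chainlink/.password --api /chainlink/.api
YAML list items are passed through verbatim, which is why the space inside "--password /chainlink/.password" ended up inside a single argument.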

How to provision persistent volume claim for software install in kubernetes

I am trying to provision a PVC for a Solr deployment in k8s and mount it as /opt/solr, which is the default Solr installation directory. This way I plan to keep both the Solr installation and the data under it on the PVC. However, while the storage gets provisioned just fine and the statefulset gets created, my deployment doesn't work because /opt/solr ends up empty. What is the proper way to do this? Here is my deployment.yaml:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: solr
  labels:
    app: solr
spec:
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.alpha.kubernetes.io/storage-class: slow
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
  serviceName: solr-svc
  replicas: 1
  template:
    metadata:
      labels:
        app: solr
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - solr-pool
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
        - name: solr
          image: solr:6.5.1
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              memory: 512M
              cpu: 500m
          ports:
            - containerPort: 8983
              name: solr-port
              protocol: TCP
          env:
            - name: VERBOSE
              value: "yes"
          command:
            - bash
            - -c
            - "exec /opt/solr/bin/solr start"
          volumeMounts:
            - name: solr-script
              mountPath: /docker-entrypoint-initdb.d/
            - name: datadir
              mountPath: /opt/solr/
      volumes:
        - name: solr-script
          configMap:
            name: solr-configs
      nodeSelector:
        pool: solr-pool
Provisioned storage is empty by default, and there might be a Delete reclaim policy on the provisioned storage, so be sure to check those configurations. You can also exec into your pod and examine the mounted volume to see whether it's working properly (permission issues, read-only file system).
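For that second check, a quick way to inspect the mount (the pod name solr-0 follows from the StatefulSet name above):
kubectl exec -it solr-0 -- ls -la /opt/solr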
In my case there was a conflict between the Docker container configuration, which used /opt/solr as the location for the Solr install, and my attempt to mount a separate PV at the same location. Once this PV is mounted, I obviously lose the Solr install. The fixes for this are:
- create another Docker image which uses a separate location
- change the Solr config to use a different location for data
- change the PV mount location
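Another common pattern for this situation, beyond the three fixes listed above, is an initContainer that copies the image's install onto the empty volume before the main container starts, so mounting the PV at /opt/solr no longer hides the installation. A minimal sketch, assuming the same datadir claim as in the question:
initContainers:
  - name: copy-solr-install
    image: solr:6.5.1
    # copy the baked-in install onto the (initially empty) PV
    command: [ "sh", "-c", "cp -a /opt/solr/. /mnt/solr/" ]
    volumeMounts:
      - name: datadir
        mountPath: /mnt/solr
On later restarts the files are already on the volume; an if-empty guard in the command would avoid re-copying over live data.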
