Reading environment variables set in a ConfigMap of a Kubernetes pod from a React application?

I have a React application which reads a couple of API-related environment variables.
When running on a local machine or a VM, the API variables are read correctly into the application.
When the values are hardcoded into the React application itself, the application also runs fine.
However, creating a pod in Kubernetes with the image and a ConfigMap does not work: the application runs, but the environment variables are not set.
pod.yaml
...
spec:
  containers:
  - command:
    - sleep
    - "3600"
    envFrom:
    - configMapRef:
        name: configmap
    image: xxxxx
    imagePullPolicy: IfNotPresent
...
configmap
apiVersion: v1
data:
  API_HOST: xxxxxxx
  SOME_ID: abcdef
  NODE_ENV: development
  PROVIDER: GCP
kind: ConfigMap
metadata:
  creationTimestamp: xxxx
  name: configmap
  namespace: xxxx
  resourceVersion: xxxx
  selfLink: xxxx
  uid: xxxx
React snippet
if (!process.env.SOME_ID) {
  console.log('ID')
}
My trouble lies with passing the environment variables to the React application. I am certain the environment variables are set up correctly in the pods, but seemingly the client-side React application does not have these variables (i.e. the console.log prints nothing).
I chanced upon this article doing something similar, but with Docker. It mentions that transpiling replaces all process.env references with string values. The trick given to mitigate this is a bash script which creates a JavaScript file with the environment variables assigned as properties of the global window object.
While I am unsure whether this is doable in Kubernetes, I wonder: is there an easier way to inject the environment variables of a Kubernetes pod into a React application at runtime?

It doesn't work as you expect because the process.env variables are replaced at build time, during transpiling. You can't access them at runtime.
You can check this guide for one possible solution: https://www.freecodecamp.org/news/how-to-implement-runtime-environment-variables-with-create-react-app-docker-and-nginx-7f9d42a91d70/. But as far as your question goes, there is nothing wrong with your Kubernetes configuration.
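A rough sketch of the approach from that guide (file names and the variable prefix are hypothetical; the idea is that a container entrypoint writes the pod's environment onto the window object at startup, and index.html loads the generated file before the bundle):

#!/bin/sh
# entrypoint.sh (hypothetical): generate env-config.js from whatever is in the
# container's environment (e.g. injected via envFrom/configMapRef), then serve
# the already-built React bundle with nginx.
{
  echo "window._env_ = {"
  # keep only variables with a chosen prefix (hypothetical convention)
  env | grep '^REACT_APP_' | sed 's/^\(.*\)=\(.*\)$/  \1: "\2",/'
  echo "};"
} > /usr/share/nginx/html/env-config.js

exec nginx -g 'daemon off;'

The React code then reads window._env_.REACT_APP_API_HOST (for example) instead of process.env, and index.html includes the generated env-config.js with a script tag before the bundle.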

Related

Azure StaticWebApps and behavior of dotEnv to get environment variables

Hi, I created a basic project using Azure Static Web Apps and added 2 env files in it, ".env" and ".env.production".
I observed that all new Azure Static Web Apps are started in the "production" environment by default.
So let's say there is a key "REACT_APP_API_URL" which has different values in the two env files. I was able to switch between the values by setting the appropriate environment.
After that, I wanted to test whether environment variables can override these .env files, so in the pipeline I added an environment variable to modify the key.
trigger:
- dev
pool:
  vmImage: ubuntu-latest
steps:
- task: AzureStaticWebApp@0
  inputs:
    app_location: ""
    output_location: "build"
  env:
    azure_static_web_apps_api_token: $(deployment_token)
    REACT_APP_API_URL: "Some value diff than .env and .env.prod"
It does override the .env files. Can someone kindly explain how it is able to do that?
I have checked the dotenv package documentation and I still don't understand which values take priority when both the .env files and environment variables are present.
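For illustration, this is roughly what happens (hypothetical values; Create React App loads the .env files through dotenv, which never overwrites a variable that is already present in the environment, so the pipeline's value wins):

# .env.production (hypothetical) contains:
#   REACT_APP_API_URL=https://api.from-dotenv.example
#
# The value exported by the pipeline is already in the environment when
# dotenv runs, so the .env.production value is ignored:
REACT_APP_API_URL="https://api.from-pipeline.example" npm run build
# -> the build inlines https://api.from-pipeline.example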

What's the best practice to expose ENV vars in a React JS app deployed on K8S?

I have a question regarding the best practices for managing environment variables for a React application deployed on K8S, for example third-party service API keys.
Usually one would put environment variables inside .env files, to be picked up during the build phase, whether local or production. But we don't want to do the same while building Docker images, as it would generate "hardwired" images, while the consensus/best practice dictates that we should strictly separate code from configuration.
Containers should be agnostic to the environment in which they are to be deployed, after all.
To make things work, we wrote a docker-entrypoint.sh script that takes variables from the environment the container runs in and writes their values onto the window object, so that the React runtime can access them.
To be more clear, this is the content of our docker-entrypoint.sh:
#!/bin/bash
if [ -v VARIABLE_NAME ]; then
  variable_name="window.VARIABLE_NAME = '${VARIABLE_NAME}';"
fi
echo "${variable_name}" > /usr/share/nginx/html/static/app-config.js
exec "$@"
And in the <head> section of our React's index.html we have this:
<script src="%PUBLIC_URL%/static/app-config.js"></script>
So all the variables are accessible via window.VARIABLE_NAME.
In our case we're taking the env variables that Kubernetes exposes to the Pod.
Our solution works, but we need to understand if there are better approaches.
These are useful links we followed:
https://12factor.net/config
https://docs.docker.com/engine/faq/#what-does-docker-technology-add-to-just-plain-lxc
You can store the key/value pairs of your ENVs in Kubernetes Secrets and expose them to your service as ENVs by referencing the Secret(s) in your Deployment.
secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  # values under `data:` must be base64-encoded
  # (or use `stringData:` to provide them as plain text)
  env1: dmFsdWUx   # "value1"
  env2: dmFsdWUy   # "value2"
deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
      - name: <app>
        image: <image>
        env:
        - name: env1
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: env1
        - name: env2
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: env2
...
reference: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
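For completeness, a quick sketch of creating such a Secret without hand-encoding base64 (kubectl does the encoding for you):

kubectl create secret generic mysecret \
  --from-literal=env1=value1 \
  --from-literal=env2=value2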
A ConfigMap is what you are looking for to achieve this.
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.
In short, a ConfigMap takes variables from a file and attaches them to the Pod.
You can find more details in another answer of mine, where I went through it step by step with an explanation for each step.
If you have sensitive data, you can read about Secrets:
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.
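A rough sketch of that ConfigMap workflow with plain kubectl (the file and Deployment names are hypothetical):

# create a ConfigMap from a dotenv-style file (hypothetical file name)
kubectl create configmap app-config --from-env-file=app.env

# expose every key of the ConfigMap as environment variables on an
# existing Deployment (hypothetical Deployment name)
kubectl set env deployment/my-app --from=configmap/app-config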

Using NFS with Ververica for artifact storage not working, throwing Error: No suitable artifact fetcher found for scheme file

I'm trying to set up Ververica Community Edition to use NFS for artifact storage, using the following values.yaml:
vvp:
  blobStorage:
    baseUri: file:///var/nfs/export
volumes:
  - name: nfs-volume
    nfs:
      server: "host.docker.internal"
      path: "/MOUNT_POINT"
volumeMounts:
  - name: nfs-volume
    mountPath: /var/nfs
When deploying the Flink job, using the jarUri below:
jarUri: file:///var/nfs/artifacts/namespaces/default/flink-job.jar
I am able to see my artifacts in the Ververica UI; however, when I try to deploy the Flink job it fails with the following exception:
Error: No suitable artifact fetcher found for scheme file
Full error:
Some pod containers have been restarted unexpectedly. Init containers reported the following reasons: [Error: No suitable artifact fetcher found for scheme file]. Please check the Kubernetes pod logs if your application does not reach its desired state.
If I remove the "file://" from the jarUri, leaving just the following, the job containers keep restarting without giving an error.
jarUri: /var/nfs/artifacts/namespaces/default/flink-job.jar
As a side note, I also added the following to the deployment.yaml. If I set the artifact to pull from an HTTP endpoint, it does save the checkpoints correctly to the NFS, so it seems that the only problem is loading artifacts from the NFS using the file:// scheme.
kubernetes:
  pods:
    volumeMounts:
      - name: my-volume
        volume:
          name: my-volume
          nfs:
            path: /MOUNT_POINT
            server: host.docker.internal
        volumeMount:
          mountPath: /var/nfs
          name: my-volume
Ververica Platform does not currently support NFS drives for Universal Blob Storage.
However, you can emulate this behavior on version >= 2.3.2 by mounting the NFS drive to your Flink pods, as you did in the deployment spec for checkpoints. This works because 2.3.2 added support for self-contained artifacts and for fetching local files. You can see more information in the documentation here.

How to execute a sql script file in a Kubernetes Pod?

I wanted to create a SQL Server database in a Kubernetes pod using a SQL script file. I have the SQL script which creates the database and inserts the master data. As I'm new to Kubernetes, I'm struggling to run the SQL script in a pod. I know the SQL script can be executed manually with a separate kubectl exec command, but I want it to be executed automatically from the pod deployment YAML itself.
Is there a way to mount the script file into the pod's volume and run it after starting the container?
You could use Kubernetes hooks for that case. There are two of them: PostStart and PreStop.
PostStart executes immediately after a container is created.
PreStop, on the other hand, is called immediately before a container is terminated.
There are two types of hook handlers that can be implemented: Exec or HTTP.
Exec - Executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container.
HTTP - Executes an HTTP request against a specific endpoint on the Container.
PostStart is the one to go with here; however, please note that the hook runs in parallel with the main process.
It does not wait for the main process to start up fully. Until the hook completes, the container will stay in a Waiting state.
You could use a little workaround for that and add a sleep command to your script to have it wait a bit for the main container to start (see the sketch at the end of this answer).
Your script file can be stored in the container image or mounted into a volume shared with the pod using a ConfigMap. Here are some examples of how to do that:
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: <your-namespace>
  name: poststarthook
data:
  poststart.sh: |
    #!/bin/bash
    echo "It's done"
Make sure your script does not exceed the 1 MiB size limit for a ConfigMap.
After you define the ConfigMap, you will have to mount it using volumes:
spec:
  containers:
  - image: <your-image>
    name: example-container
    volumeMounts:
    - mountPath: /opt/poststart.sh
      subPath: poststart.sh
      name: hookvolume
  volumes:
  - name: hookvolume
    configMap:
      name: poststarthook
      defaultMode: 0755 # please remember to add proper (executable) permissions
And then you can define postStart in your spec:
spec:
  containers:
  - name: example-container
    image: <your-image>
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "/opt/poststart.sh"]
You can read more about hooks in the Kubernetes documentation and in this article. Let me know if that was helpful.
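A minimal sketch of what such a poststart.sh could look like for the SQL Server case (paths, credentials and the script name are hypothetical; it assumes the image ships sqlcmd under /opt/mssql-tools/bin, as the official mssql images do):

#!/bin/bash
# The postStart hook runs in parallel with the main process, so wait until
# SQL Server actually accepts connections before running the script.
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" \
      -Q "SELECT 1" > /dev/null 2>&1; do
  sleep 5
done

# Run the mounted SQL script (hypothetical path) against the local instance.
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" \
  -i /opt/scripts/create-database.sql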

Centralised configuration of docker-compose services

Imagine a non-trivial docker compose app, with nginx in front of a webapp, and a few linked data stores:
web:
  build: my-django-app
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - redis
    - mysql
    - mongodb
nginx:
  image: nginx
  links:
    - web
redis:
  image: redis
  expose:
    - "6379"
mysql:
  image: mysql
  volumes:
    - /var/lib/mysql
  environment:
    - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    - MYSQL_DATABASE=myproject
mongodb:
  image: mongo
The databases are pretty easy to configure (for now): the containers expose pretty nice environment variables to control them (see the mysql container). But what about nginx? We'll need to template a vhost file for that, right?
I don't want to roll my own image; that would need rebuilding for every changed config, from different devs' setups, through test, staging and production. And what if we want to do lightweight A/B testing by flipping a config option?
Some centralised config management is needed here, maybe something controlled by docker-compose that can write out config files to a shared volume?
This will only get more important as new services are added (imagine a microservice cloud rather than, as in this example, a monolithic web app).
What is the correct way to manage configuration in a docker-compose project?
In general you'll find that most containers use entrypoint scripts to configure applications by populating configuration files from environment variables. For an advanced example of this approach, see the entrypoint script for the official WordPress image.
Because this is a common pattern, Jason Wilder created the dockerize project to help automate the process.
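A minimal sketch of that pattern for the nginx case (the template path and variable names are hypothetical; it uses envsubst from gettext rather than dockerize itself):

#!/bin/sh
# entrypoint.sh: render the vhost template with values taken from the
# environment (e.g. set via docker-compose `environment:`), then start nginx.
envsubst '${SERVER_NAME} ${UPSTREAM_HOST}' \
  < /etc/nginx/templates/default.conf.template \
  > /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'

dockerize generalises the same idea with Go templates and can also wait for linked services to become reachable before starting the main process.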
