I have a React app with some environment-specific configuration (API endpoint, user and password). In my local development environment I pass these values as environment variables and can use them:
import axios from "axios";

export default axios.create({
  baseURL: process.env["REACT_APP_MANAGEMENT_BASE_URL"],
  auth: {
    username: process.env["REACT_APP_MANAGEMENT_USER"]!.toString(),
    password: process.env["REACT_APP_MANAGEMENT_PASSWORD"]!.toString()
  }
})
Everything works fine when running locally, but when deploying to OpenShift the environment variables are not picked up.
Here is the relevant snippet from my deployment config:
spec:
  containers:
    - image: frontend-image
      name: frontend
      imagePullPolicy: Always
      ports:
        - containerPort: 8080
          protocol: TCP
      envFrom:
        - configMapRef:
            name: frontend-env-configs
        - secretRef:
            name: userdata
frontend-env-configs and userdata are created like this:
configMapGenerator:
  - behavior: create
    literals:
      - TZ=Europe/Berlin
      - REACT_APP_MANAGEMENT_BASE_URL=https://api.dev/
    name: frontend-env-configs
secretGenerator:
  - name: techn-user
    envs:
      - secrets/techn_user.env
techn_user.env holds the encrypted username/password, which is created as a secret in OpenShift.
But when I access the application, I get an error in the browser console saying that these environment variables are not there.
When I go to the console of my pod in OpenShift I can see the environment variables:
sh-4.4$ env | grep REACT
REACT_APP_MANAGEMENT_BASE_URL=https://api.dev/
REACT_APP_MANAGEMENT_USER=myUser
REACT_APP_MANAGEMENT_PASSWORD=passwordOfUser
Related
I have a simple React JS application that uses an environment variable (REACT_APP_BACKEND_SERVER_URL) defined in a .env file. Now I'm trying to deploy this application to minikube using Kubernetes.
This is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-ui
  template:
    metadata:
      name: test-ui-pod
      labels:
        app: test-ui
    spec:
      containers:
        - name: test-ui
          image: test-ui:1.0.2
          ports:
            - containerPort: 80
          env:
            - name: "REACT_APP_BACKEND_SERVER_URL"
              value: "http://127.0.0.1:59058"
When I run the application, it works, but REACT_APP_BACKEND_SERVER_URL gives the value I defined in the .env file, not the one I'm overriding with. Can someone help me with this, please? How do I override the env variable using the Kubernetes deployment?
After starting the app with your deployment YAML and checking the environment variables, I do see the value from the deployment:
REACT_APP_BACKEND_SERVER_URL=http://127.0.0.1:59058
You can check that by doing a kubectl exec -it <pod-name> -- sh and running the env command.
So you can see that REACT_APP_BACKEND_SERVER_URL is there in the environment variables and available for your application to use. I suspect you may need to understand better, from the React app side, how the .env file is actually consumed.
I've tried to avoid setting specific env vars for a React FE, and really haven't had the need to. But I'm working on social authentication, with Azure AD specifically, and now I do have a use case for it.
I acknowledge the AAD_TENANT_ID and AAD_CLIENT_ID aren't exactly "secret" or sensitive information and will be compiled into the JS, but I'm trying to do this for a few reasons:
I can more easily manage dev and prod keys from a Key Vault...
Having environment independent code (i.e., process.env.AAD_TENANT_ID will work whether it is dev or prod).
But it doesn't work.
The issue I'm running into is that the env vars are not accessible at process.env despite having the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-v2-deployment-dev
  namespace: dev
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: admin-v2
  template:
    metadata:
      labels:
        component: admin-v2
    spec:
      containers:
        - name: admin-v2
          image: admin-v2
          ports:
            - containerPort: 4001
          env:
            - name: AAD_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: app-dev-secrets
                  key: AAD_CLIENT_ID
            - name: AAD_TENANT_ID
              valueFrom:
                secretKeyRef:
                  name: app-dev-secrets
                  key: AAD_TENANT_ID
---
apiVersion: v1
kind: Service
metadata:
  name: admin-v2-cluster-ip-service-dev
  namespace: dev
spec:
  type: ClusterIP
  selector:
    component: admin-v2
  ports:
    - port: 4001
      targetPort: 4001
When I do the following anywhere in the code, it comes back undefined:
console.log(process.env.AAD_CLIENT_ID);
console.log(process.env.AAD_TENANT_ID);
The values are definitely there when I check secrets in the namespace and in the Pod itself:
Environment:
AAD_CLIENT_ID: <set to the key 'AAD_CLIENT_ID' in secret 'app-dev-secrets'> Optional: false
AAD_TENANT_ID: <set to the key 'AAD_TENANT_ID' in secret 'app-dev-secrets'> Optional: false
So how should one go about getting Kubernetes secrets into React Pods?
I am guessing you are using create-react-app for the React FE. You have to make sure that your environment variables start with REACT_APP_, or else they will be ignored inside the app.
According to create-react-app documentation
Note: You must create custom environment variables beginning with REACT_APP_.
Any other variables except NODE_ENV will be ignored to avoid accidentally
exposing a private key on the machine that could have the same name.
Source - https://create-react-app.dev/docs/adding-custom-environment-variables/
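Applied to the Deployment in the question, that means exposing the secret values under prefixed names (the secret keys themselves can stay as they are; only the env names change). A minimal sketch of the container's env section, assuming create-react-app:

env:
  - name: REACT_APP_AAD_CLIENT_ID      # prefixed name read by the React build
    valueFrom:
      secretKeyRef:
        name: app-dev-secrets
        key: AAD_CLIENT_ID             # secret key stays unchanged
  - name: REACT_APP_AAD_TENANT_ID
    valueFrom:
      secretKeyRef:
        name: app-dev-secrets
        key: AAD_TENANT_ID

The code would then read process.env.REACT_APP_AAD_CLIENT_ID instead. Keep in mind that create-react-app embeds these values when the bundle is built, so they also need to be present at build time, not only in the running Pod.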
I am developing a ReactJS application that is calling REST APIs running in kubernetes.
The setup is as follows:
ReactJS is being developed/debugged locally and run with "npm start", because nothing beats how fast the local development server detects changes and reloads the browser.
ReactJS API requests are done with axios
Backend APIs written in Go run as separate deployments/services locally in minikube.
There is an Ingress installed locally in minikube to forward requests from urlshortner.local to the respective k8s service.
The basic idea is the following:
ReactJS -> k8s ingress -> GO REST API
Now the problem starts when I try to set secure httpOnly cookies. Because the cookie needs to be secure, I created a self-signed SSL certificate and applied it to be used by the ingress. I also enabled CORS settings in the ingress configuration and configured axios to not reject self-signed certificates.
For some reason unknown to me, I can't succeed in making the request.
Below are my relevant config files and code snippets:
k8s ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: url-shortner-backend-services
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://localhost:4000"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
    - secretName: urlshortner-local-tls
      hosts:
        - urlshortner.local
  rules:
    - host: urlshortner.local
      http:
        paths:
          - path: /shortner(/|$)(.*)
            backend:
              serviceName: url-shortener-service
              servicePort: 3000
          - path: /auth(/|$)(.*)
            backend:
              serviceName: auth-service
              servicePort: 3000
The React application start script:
PORT=4000 SSL_CRT_FILE=tls.crt SSL_KEY_FILE=tls.key react-scripts start
The axios code snippet that creates the axios instance used to issue a POST request:
import axios from "axios";
import https from "https";

export default axios.create({
  baseURL: 'https://urlshortner.local',
  withCredentials: true,
  httpsAgent: new https.Agent({
    rejectUnauthorized: false
  })
});
When a POST request is made, I see an error in the browser console/network tab, even though I accept the certificate warning and add it as a trusted certificate when I first load the page.
The end result that I would like to achieve is to be able to set a cookie and read the cookie on subsequent requests.
The cookie is being set as follows:
c.SetSameSite(http.SameSiteNoneMode)
c.SetCookie("token", resp.Token, 3600, "/", "localhost:4000", true, true)
What is missing? What am I doing wrong?
Thanks in advance
I finally managed to fix this issue, and the good news is that you don't need to create a self-signed certificate.
The steps are the following:
Set a HOST environment variable before starting your React development server.
Adjust /etc/hosts so that 127.0.0.1 points to the value set in the HOST environment variable.
Adjust your k8s ingress CORS settings so that cors-allow-origin allows the domain set in the HOST environment variable.
Setting cookies should now work as expected.
Below are the relevant code snippets:
npm start script
"scripts": {
"start": "PORT=4000 HOST=app.urlshortner.local react-scripts start",
}
Notice the HOST environment variable. The PORT environment variable is optional; I'm using it because the default port 3000 is already taken.
/etc/hosts
127.0.0.1 app.urlshortner.local
192.168.99.106 urlshortner.local
Note that 192.168.99.106 is my local minikube IP address.
Kubernetes ingress configuration
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: url-shortner-backend-services
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://app.urlshortner.local:4000"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  rules:
    - host: urlshortner.local
      http:
        paths:
          - path: /shortner(/|$)(.*)
            backend:
              serviceName: url-shortener-service
              servicePort: 3000
          - path: /auth(/|$)(.*)
            backend:
              serviceName: auth-service
              servicePort: 3000
What matters here is the following:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "http://app.urlshortner.local:4000"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
The axios instance used:
import axios from "axios";

let baseURL = '';
if (process.env.NODE_ENV === 'development') {
  baseURL = 'http://urlshortner.local';
}

export default axios.create({
  baseURL,
  withCredentials: true
});
How the cookie is set:
c.SetCookie("token", resp.Token, 3600, "/", ".urlshortner.local", false, true)
Note the domain used: it starts with a ".", which makes the cookie valid for urlshortner.local and its subdomains (including app.urlshortner.local).
I hope this helps someone.
I am basically trying to run a React JS app which is mainly composed of three services, namely a Postgres DB, an API server and a UI frontend (served using nginx). Currently the app works as expected in development mode using docker-compose, but when I tried to run it in production using Kubernetes, I was not able to access the API server of the app (CONNECTION REFUSED).
Since I want to run this in production using Kubernetes, I created YAML files for each of the services and then tried running them using kubectl apply. I have also tried this with and without using a persistent volume for the API server, but none of this helped.
Docker-compose file (this works and I am able to connect to the API server at port 8000):
version: "3"
services:
pg_db:
image: postgres
networks:
- wootzinternal
ports:
- 5432
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=wootz
volumes:
- wootz-db:/var/lib/postgresql/data
apiserver:
image: wootz-backend
volumes:
- ./api:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./api
dockerfile: Dockerfile
networks:
- wootzinternal
depends_on:
- pg_db
ports:
- '8000:8000'
ui:
image: wootz-frontend
volumes:
- ./client:/usr/src/app
- /usr/src/app/build
- /usr/src/app/node_modules
build:
context: ./client
dockerfile: Dockerfile
networks:
- wootzinternal
ports:
- '80:3000'
volumes:
wootz-db:
networks:
wootzinternal:
driver: bridge
My API server YAML for running in Kubernetes (this doesn't work and I can't connect to the API server at port 8000):
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
  selector:
    app: apiserver
    tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  selector:
    matchLabels:
      app: apiserver
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiserver
        tier: backend
    spec:
      containers:
        - image: suji165475/devops-sample:corspackedapi
          name: apiserver
          env:
            - name: POSTGRES_DB_USER
              value: postgres
            - name: POSTGRES_DB_PASSWORD
              value: password
            - name: POSTGRES_DB_HOST
              value: postgres
            - name: POSTGRES_DB_PORT
              value: "5432"
          ports:
            - containerPort: 8000
              name: myport
What changes should I make to my API server YAML (Kubernetes) so that I can access it on port 8000? Currently I am getting a connection refused error.
The default Service type on Kubernetes is ClusterIP, which makes the service reachable inside the cluster but does not expose it to the outside.
Here is your service using the LoadBalancer type:
apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  type: LoadBalancer
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
  selector:
    app: apiserver
    tier: backend
With that, you can see that the service expects an external IP address by running kubectl describe service apiserver.
In case you want more control over how your requests are routed to that service, you can add an Ingress in front of it:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: apiserver
  name: apiserver
spec:
  rules:
    - host: apiserver.example.com
      http:
        paths:
          - backend:
              serviceName: apiserver
              servicePort: 8000
            path: /*
Your service is only exposed on the internal Kubernetes network.
This is because if you do not specify a spec.type, the default is ClusterIP.
To expose your application you can follow at least 3 ways:
LoadBalancer: you can specify spec.type: LoadBalancer. A LoadBalancer service exposes your application on the (public) network. This works great if your cluster is a cloud service (GKE, DigitalOcean, AKS, Azure, ...): the cloud takes care of providing the public IP and routing the network traffic to all your nodes. Usually this is not the best method, because a cloud load balancer has a cost (depending on the cloud) and, if you need to expose a lot of services, the situation can become difficult to maintain.
NodePort: you can specify spec.type: NodePort. This exposes the Service on each Node's IP at a static port (the NodePort).
You'll be able to contact the service, from outside the cluster, by requesting <NodeIP>:<NodePort> (a minimal sketch follows after this list).
Ingress: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. This is the most common scenario for simple HTTP/HTTPS applications. It allows you to easily manage SSL termination and routing.
You need to deploy an ingress controller to make this work, like the nginx one. All the main clouds can do this for you with a simple setting when you create the cluster.
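As a reference for the NodePort option, a minimal sketch based on the apiserver Service from the question (the nodePort value is only an example and must fall in the cluster's NodePort range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: apiserver
  labels:
    app: apiserver
spec:
  type: NodePort
  ports:
    - name: apiport
      port: 8000
      targetPort: 8000
      nodePort: 30080   # example value; omit it to let Kubernetes pick one
  selector:
    app: apiserver
    tier: backend

On minikube you could then reach the API at <minikube-ip>:30080, or let minikube service apiserver --url print the address for you.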
Read here for more information about services
Read here for more information about ingress
The Error
When deploying to Azure Web Apps with Multi-container support, I receive an "Invalid Host Header" message from https://mysite.azurewebsites.com
Local Setup
This runs fine.
I have two Docker containers: client, a React app, and server, an Express app hosting my API. I am using a proxy to reach my API on server.
In client's package.json I have defined:
"proxy": "http://localhost:3001"
I use the following docker compose file to build locally.
version: '2.1'
services:
  server:
    build: ./server
    expose:
      - ${APP_SERVER_PORT}
    environment:
      API_HOST: ${API_HOST}
      APP_SERVER_PORT: ${APP_SERVER_PORT}
    ports:
      - ${APP_SERVER_PORT}:${APP_SERVER_PORT}
    volumes:
      - ./server/src:/app/project-server/src
    command: npm start
  client:
    build: ./client
    environment:
      - REACT_APP_PORT=${REACT_APP_PORT}
    expose:
      - ${REACT_APP_PORT}
    ports:
      - ${REACT_APP_PORT}:${REACT_APP_PORT}
    volumes:
      - ./client/src:/app/project-client/src
      - ./client/public:/app/project-client/public
    links:
      - server
    command: npm start
Everything runs fine.
On Azure
When deploying to Azure I have the following. client and server images have been stored in Azure Container Registry. They appear to load just fine from the logs.
In my App Service > Container Settings I am loading the images from Azure Container Registry (ACR) and I'm using the following configuration (Docker compose) file.
version: '2.1'
services:
  client:
    image: <clientimage>.azurecr.io/clientimage:v1
    build: ./client
    expose:
      - 3000
    ports:
      - 3000:3000
    command: npm start
  server:
    image: <serverimage>.azurecr.io/<serverimage>:v1
    build: ./server
    expose:
      - 3001
    ports:
      - 3001:3001
    command: npm start
I have also defined WEBSITES_PORT as 3000 in Application Settings.
This results in the "Invalid Host Header" error on my site.
Things I've tried
• Serving the app from the static folder in server. This works in that it serves the app, but it messes up my authentication. I need to be able to serve the static portion from client's App.js and have that talk to my Express API for database calls and authentication.
• In my docker-compose file binding the front end to:
ports:
  - 3000:80
• A few other port combinations but no luck.
Also, I think this has something to do with the proxy in client's package.json based on this repo
Any help would be greatly appreciated!
Update
It is the proxy setting.
This somewhat solves it. By removing "proxy": "http://localhost:3001" I am able to load the website, but the suggested answer in the problem does not work for me, i.e. I am now unable to access my API.
I've never used Azure before, and I also don't use a proxy (due to its random connection issues), but if your application is basically running Express, you can utilize CORS. (As a side note, it's more common to run your Express server on 5000 than 3001.)
I first set up an env/config.js folder and file like so:
module.exports = {
  development: {
    database: 'mongodb://localhost/boilerplate-dev-db',
    port: 5000,
    portal: 'http://localhost:3000',
  },
  production: {
    database: 'mongodb://localhost/boilerplate-prod-db',
    port: 5000,
    portal: 'http://example.com',
  },
  staging: {
    database: 'mongodb://localhost/boilerplate-staging-db',
    port: 5000,
    portal: 'http://localhost:3000',
  }
};
Then, depending on the environment, I can implement cors where I'm defining express middleware:
const cors = require('cors');
const config = require('./path/to/env/config.js');
const env = process.env.NODE_ENV;

app.use(
  cors({
    credentials: true,
    origin: config[env].portal,
  }),
);
Please note the portal and the AJAX requests MUST have matching host names. For example, if my application is hosted on http://example.com, my front-end API requests must be making requests to http://example.com/api/ (not http://localhost:3000/api/ -- click here to see how I implement it for my website), and the portal env must match the host name http://example.com. This setup is flexible and necessary when running multiple environments.
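In code, that means the front-end requests target the same host that portal is set to. A sketch, reusing the example.com host from the production entry above:

import axios from 'axios';

// Sketch: the baseURL host must match the `portal` value for the current environment,
// otherwise the browser treats the request as coming from a different origin and the
// CORS configuration above won't allow it.
export default axios.create({
  baseURL: 'http://example.com/api',
  withCredentials: true,
});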
Or, if you're using create-react-app, simply eject your app and implement a proxy inside the webpack production configuration.
Or migrate your application to my fullstack boilerplate, which implements the cors example above.
So, I ended up having to move off of containers and serve the React app in a more typical MERN architecture, with the Express server hosting the React app from the static build folder. I set up some routes with PassportJS to handle my authentication.
Not my preferred solution (I would have preferred to use containers), but this works. Hope this points someone out there in the right direction!
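For anyone taking the same route, a minimal sketch of that kind of setup, assuming Express 4 and create-react-app's default build output (the paths and the ./routes/api module are placeholders, not taken from the original project):

const express = require('express');
const path = require('path');

const app = express();

// API and auth routes (Passport would be wired up inside this router)
app.use('/api', require('./routes/api')); // hypothetical router module

// Serve the compiled React app from its static build folder
app.use(express.static(path.join(__dirname, 'client', 'build')));

// Fall back to index.html so client-side routing still works
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client', 'build', 'index.html'));
});

app.listen(process.env.PORT || 3001);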