Accessing container mounted volumes in Kubernetes from docker container - file

I am currently trying to access a file mounted in my Kubernetes container from a docker image. I need to pass the file in with a flag when my docker image is run.
The docker image is usually run (outside a container) using the command:
docker run -p 6688:6688 -v ~/.chainlink-ropsten:/chainlink -it --env-file=.env smartcontract/chainlink local n -p /chainlink/.password -a /chainlink/.api
Now I have successfully used the following config to mount my env, password and api files at /chainlink, but when attempting to access the files during the docker run I get the error:
flag provided but not defined: -password /chainlink/.password
The following is my current Kubernetes Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chainlink-deployment
  labels:
    app: chainlink-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chainlink-node
  template:
    metadata:
      labels:
        app: chainlink-node
    spec:
      containers:
        - name: chainlink
          image: smartcontract/chainlink:latest
          args: [ "local", "n", "--password /chainlink/.password", "--api /chainlink/.api"]
          ports:
            - containerPort: 6689
          volumeMounts:
            - name: config-volume
              mountPath: /chainlink/.env
              subPath: .env
            - name: api-volume
              mountPath: /chainlink/.api
              subPath: .api
            - name: password-volume
              mountPath: /chainlink/.password
              subPath: .password
      volumes:
        - name: config-volume
          configMap:
            name: node-env
        - name: api-volume
          configMap:
            name: api-env
        - name: password-volume
          configMap:
            name: password-env
Is there some definition I am missing in my file that allows me to access the mounted volumes when running my docker image?

Change your args to:
args: [ "local", "n", "--password", "/chainlink/.password", "--api", "/chainlink/.api"]
The way you currently have it, it's treating the whole string --password /chainlink/.password, including the space, as a single flag. That's what the error:
flag provided but not defined: -password /chainlink/.password
is telling you.
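For reference, this is roughly how that container section looks with the fix applied; the short -p and -a flags from the original docker run command should work the same way, as long as every flag and every value is its own array element:

      containers:
        - name: chainlink
          image: smartcontract/chainlink:latest
          args: ["local", "n", "--password", "/chainlink/.password", "--api", "/chainlink/.api"]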

Related

Deploying SQL Server in Kubernetes: context deadline exceeded while pulling mssql image attempt

When I try to run SQL Server in Kubernetes with the mcr.microsoft.com/mssql/server image in a minikube cluster, after several seconds I see the following events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 38h (x77 over 47h) kubelet Pulling image "mcr.microsoft.com/mssql/server"
Normal BackOff 38h (x1658 over 47h) kubelet Back-off pulling image "mcr.microsoft.com/mssql/server"
Warning Failed 38h (x79 over 47h) kubelet Failed to pull image "mcr.microsoft.com/mssql/server": rpc error: code = Unknown desc = context deadline exceeded
Pulling and running the image in docker desktop works fine.
What I've already tried:
Specifying a tag like :2019-latest;
Specifying an imagePullPolicy like IfNotPresent or Never. It seems that even after pulling the image directly via PowerShell, Kubernetes doesn't see it locally (but Docker does).
I suspect the reason is that the image is too large and Kubernetes' default image pull timeout is too short. But I'm a newbie with Kubernetes and haven't verified this yet; at least, I don't see anything about it in the SQL Server examples.
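Side note for anyone hitting the same thing: minikube keeps its own image cache, separate from the host's Docker Desktop, so an image pulled on the host isn't automatically visible to the cluster. A rough sketch of pre-loading the image so the kubelet never has to pull it over the network (assuming a minikube version that ships the image load subcommand):

# pull on the host, then copy the image into minikube's runtime
docker pull mcr.microsoft.com/mssql/server:2019-latest
minikube image load mcr.microsoft.com/mssql/server:2019-latest
# after that, imagePullPolicy: IfNotPresent should start the pod without pulling over the network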
Here's the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - image: mcr.microsoft.com/mssql/server
          name: mssql
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
          ports:
            - containerPort: 1433
              name: mssql
          securityContext:
            privileged: true
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mysql-pv-claim
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
  namespace: mynamespace
spec:
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  selector:
    app: mssql
  type: LoadBalancer
pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Could you please help me figure out what I'm doing wrong? Let me know if you need more details.
Thank you!

Network IP from Docker not working for React + Vite.js therefore can't access k8s pod

I have a very simple React TypeScript application and am using Vite for the first time, replacing Webpack.
I have the following vite.config.js:
server: {
  watch: {
    usePolling: true,
  },
  open: false,
  host: '0.0.0.0',
},
and created a Dockerfile with these instructions:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
COPY ./build-prod ./build-prod
COPY ./node_modules ./node_modules
RUN npm install husky -g --production
RUN npm install esbuild-linux-arm64 --production
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
When I now run docker run -p 3000:3000 hello-world-app-frontend, I can access my app at http://localhost:3000/, but opening the network address http://172.17.0.3:3000/ just loads an untitled window.
I think this is especially a problem for me as I want to create a basic Kubernetes config like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-world-app-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world-app-frontend
  template:
    metadata:
      labels:
        app: hello-world-app-frontend
    spec:
      containers:
        - name: hello-world-app-frontend
          image: hello-world-app-frontend
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
      restartPolicy: Always
kind: Service
apiVersion: v1
metadata:
  name: hello-world-app-frontend
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      nodePort: 31000
  selector:
    app: hello-world-app-frontend
But opening the IP address from my Pod returns nothing in Chrome (e.g. http://10.106.213.128:3000/).
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/hello-world-app-frontend-77899b46d7-cc4td 1/1 Running 0 16h
default pod/hello-world-app-frontend-77899b46d7-vqtbz 1/1 Running 0 16h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/hello-world-app-frontend NodePort 10.106.213.128 <none> 3000:31000/TCP 16h
Can somebody give me a few hints on how I can access the React application in my k8s pod?
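One hint to start with: 10.106.213.128 is the Service's ClusterIP, which is normally only reachable from inside the cluster, not from the host's browser. A rough sketch of ways to reach the NodePort Service from the host instead (the minikube line only applies if this is a minikube cluster):

kubectl port-forward service/hello-world-app-frontend 3000:3000   # then open http://localhost:3000
minikube service hello-world-app-frontend --url                   # prints a host-reachable URL for the NodePort
curl http://<node-ip>:31000                                        # or hit the NodePort directly on a node's IP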

How to access Kubernetes container environment variables from React.js application?

I have a create-react-app with default configurations.
I have some PORT and API values inside a .env file, configured with
REACT_APP_PORT=3000
and I use them inside the app with process.env.REACT_APP_PORT.
I have my server deployed on Kubernetes.
Can someone explain how to configure my create-react-app to use environment variables provided by pods/containers?
I want to access the cluster IP via the name shown by kubectl get svc.
Update 1 :
I have the opposite scenario: I don't want my frontend env variables to be configured in the Kubernetes pod container; instead, I want to use the pod's env variables,
e.g. CLUSTER_IP and CLUSTER_PORT, with their names defined by the pod's env variables, inside my React app.
For example:
NAME TYPE CLUSTER-IP
XYZ ClusterIP x.y.z.a
and I want to use XYZ in the React app to point to the cluster IP (x.y.z.a).
Use Pod fields as values for environment variables
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
            printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
            sleep 10;
          done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never
https://kubernetes.io/docs/tasks/inject-data-application/_print/
Maybe the above example will help you.
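On the specific ask (reaching a Service's cluster IP by name): besides the Downward API above, the kubelet also injects environment variables derived from the Service name for every Service that already exists when the pod starts, and cluster DNS gives each Service a resolvable name. A small sketch, assuming a Service literally named xyz in the same namespace:

# injected automatically for a Service named "xyz" (only if it existed before the pod started)
printenv XYZ_SERVICE_HOST    # the ClusterIP, e.g. x.y.z.a
printenv XYZ_SERVICE_PORT    # the Service port
# or skip the IP entirely and use the DNS name (port placeholder is illustrative)
curl http://xyz:<port>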
try this:
kubectl create configmap react-config --from-literal=REACT_APP_PORT=3000
and then:
spec:
  containers:
    - name: create-react-app
      image: gcr.io/google-samples/node-hello:1.0
      envFrom:
        - configMapRef:
            name: react-config
Now you have configured your env from "outside" the pod.
See also the documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
Try the following:
spec:
  containers:
    - name: create-react-app
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: REACT_APP_PORT
          value: "3000"
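With either approach, a quick way to confirm the variable actually reaches the container (the deployment name is a placeholder, and the deploy/ form needs a reasonably recent kubectl):

kubectl exec deploy/<your-deployment> -- printenv REACT_APP_PORT

Keep in mind that create-react-app bakes REACT_APP_* values into the bundle at build time, so a variable that only appears in the running container won't show up in an already-built frontend.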

How to connect front to back in k8s cluster internal (connection refused)

Error while trying to connect a React frontend web app to a Node.js Express API server inside a Kubernetes cluster.
I can navigate in the browser to http://localhost:3000 and the web site is OK.
But I can't navigate to http://localhost:3008, as expected (it should not be exposed).
My goal is to pass the REACT_APP_API_URL environment variable to the frontend in order to set the axios baseURL and be able to establish communication between the front end and its API server.
deploy-front.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: binomio/gbpd-front:k8s-3
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always
service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 3000
      targetPort: 3000
  type: LoadBalancer
Deploy-back.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: binomio/gbpd-back:dev
          ports:
            - name: http
              containerPort: 3008
service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
    - protocol: "TCP"
      port: 3008
      targetPort: http
I tried many combinations, and also tried adding "LoadBalancer" to the back service, but nothing...
I can connect perfectly to localhost:3000 and use the frontend, but the frontend can't connect to the backend service.
Question 1: What is the IP/name to use in order to pass REACT_APP_API_URL to the frontend correctly?
Question 2: Why is curl localhost:3008 not answering?
After 2 days of trying almost everything in the k8s official docs... I can't figure out what's happening here, so any help will be much appreciated.
kubectl describe svc gbpd-api
Response:
kubectl describe svc gbpd-api
Name: gbpd-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"gbpd-api","namespace":"default"},"spec":{"ports":[{"port":3008,"p...
Selector: app=gbpd-api,tier=backend
Type: LoadBalancer
IP: 10.107.145.227
LoadBalancer Ingress: localhost
Port: <unset> 3008/TCP
TargetPort: http/TCP
NodePort: <unset> 31464/TCP
Endpoints: 10.1.1.48:3008,10.1.1.49:3008,10.1.1.50:3008
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I tested your environment, and it worked when using an Nginx image. Let's review the environment:
The front-deployment is correctly described.
The front-service exposes it as loadbalancer, meaning your frontend is accessible from outside, perfect.
The back deployment is also correctly described.
The backend service stays as a ClusterIP so that it is only accessible from inside the cluster, great.
Below I'll demonstrate the communication between the front end and the back end.
I'm using the same yamls you provided, just changing the image to Nginx for example purposes, and since it's an HTTP server I'm changing containerPort to 80.
Question 1: What's is the ip/name to use in order to pass REACT_APP_API_URL to fronten correctly?
I added the ENV variable to the front deploy as requested, and I'll use it in the demonstration as well. You must use the service name to curl; I used the short version because we are working in the same namespace. You can also use the full name: http://gbpd-api.default.svc.cluster.local:3008
Reproduction:
Create the yamls and apply them:
$ cat deploy-front.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: nginx
          env:
            - name: REACT_APP_API_URL
              value: http://gbpd-api:3008
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always
$ cat service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 3000
      targetPort: 80
  type: LoadBalancer
$ cat deploy-back.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: nginx
          ports:
            - name: http
              containerPort: 80
$ cat service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
    - protocol: "TCP"
      port: 3008
      targetPort: http
$ kubectl apply -f deploy-front.yaml
deployment.apps/gbpd-front created
$ kubectl apply -f service-front.yaml
service/gbpd-front created
$ kubectl apply -f deploy-back.yaml
deployment.apps/gbpd-api created
$ kubectl apply -f service-back.yaml
service/gbpd-api created
Remember, in Kubernetes communication is designed to go through Services, because pods are always recreated when there is a change in the deployment or when a pod fails.
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/gbpd-api-dc5b4b74b-kktb9 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-mzpbg 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-t6qxh 1/1 Running 0 41m
pod/gbpd-front-66b48f8b7c-4zstv 1/1 Running 0 30m
pod/gbpd-front-66b48f8b7c-h58ds 1/1 Running 0 31m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gbpd-api ClusterIP 10.0.10.166 <none> 3008/TCP 40m
service/gbpd-front LoadBalancer 10.0.11.78 35.223.4.218 3000:32411/TCP 42m
The pods are the workers, and since they are replaceable by nature, we will connect to a frontend pod to simulate its behaviour and try to connect to the backend service (which is the network layer that will direct the traffic to one of the backend pods).
The nginx image does not come with curl preinstalled, so I will have to install it for demonstration purposes:
$ kubectl exec -it pod/gbpd-front-66b48f8b7c-4zstv -- /bin/bash
root@gbpd-front-66b48f8b7c-4zstv:/# apt update && apt install curl -y
done.
root@gbpd-front-66b48f8b7c-4zstv:/# curl gbpd-api:3008
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Now let's try using the environment variable that was defined:
root@gbpd-front-66b48f8b7c-4zstv:/# printenv | grep REACT
REACT_APP_API_URL=http://gbpd-api:3008
root@gbpd-front-66b48f8b7c-4zstv:/# curl $REACT_APP_API_URL
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Considerations:
Question 2: Why is curl localhost:3008 not answering?
Since all the yamls are correctly described, you must check whether image: binomio/gbpd-back:dev is actually serving on port 3008 as intended.
Since it's not a public image, I can't test it, so I'll give you troubleshooting steps:
Just like we logged into the front-end pod, you will have to log into this backend pod and test curl localhost:3008.
If it's based on a Linux distro with apt-get, you can run the commands just like I did in my demo:
get the pod name from the backend deploy (example: gbpd-api-6676c7695c-6bs5n)
run kubectl exec -it pod/<POD_NAME> -- /bin/bash
then run apt update && apt install curl -y
and test curl localhost:3008
if there is no answer, run apt update && apt install net-tools
and test netstat -nlpt; it should show you the services running and their respective ports, for example:
root@gbpd-api-585df9cb4d-xr6nk:/# netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1/nginx: master pro
If the pod still does not respond even with this approach, you will have to check the code in the image.
Let me know if you need help after that!

How to provision persistent volume claim for software install in kubernetes

I am trying to provision a PVC for a Solr deployment in k8s and mount it as /opt/solr, which is the default Solr installation directory. This way I plan to keep both the Solr installation and the data under it on the PVC. However, while the storage gets provisioned just fine and the statefulset gets created, my deployment doesn't work because /opt/solr ends up empty. What is the proper way to do this? Here is my deployment.yaml:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: solr
  labels:
    app: solr
spec:
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.alpha.kubernetes.io/storage-class: slow
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
  serviceName: solr-svc
  replicas: 1
  template:
    metadata:
      labels:
        app: solr
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - solr-pool
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
        - name: solr
          image: solr:6.5.1
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              memory: 512M
              cpu: 500m
          ports:
            - containerPort: 8983
              name: solr-port
              protocol: TCP
          env:
            - name: VERBOSE
              value: "yes"
          command:
            - bash
            - -c
            - "exec /opt/solr/bin/solr start"
          volumeMounts:
            - name: solr-script
              mountPath: /docker-entrypoint-initdb.d/
            - name: datadir
              mountPath: /opt/solr/
      volumes:
        - name: solr-script
          configMap:
            name: solr-configs
      nodeSelector:
        pool: solr-pool
Provisioned storage is empty by default, and there might be a Delete reclaim policy on the provisioned storage, so be sure to check those configurations. You can also exec into your pod and examine the mounted volume to see whether it's working properly or not (permission issues, read-only file system).
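A rough sketch of those checks, assuming the StatefulSet pod ends up being named solr-0:

kubectl get pvc                                     # confirm the claim is Bound
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy
kubectl exec -it solr-0 -- ls -la /opt/solr         # see what actually ended up on the mount
kubectl exec -it solr-0 -- mount | grep /opt/solr   # check the mount flags (ro vs rw)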
In my case there was a conflict between the docker container configuration, which uses /opt/solr as the location of the Solr install, and my attempt to mount a separate PV under the same location. Once this PV is mounted, I obviously lose the Solr install. The fixes for this are (a sketch of the last option follows the list):
create another docker image which uses a separate location
change the solr config to use a different location for the data
change the PV mount location
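A minimal sketch of the last option: keep the install at /opt/solr and mount the PV somewhere that doesn't shadow it, then point Solr's home at the mount with the -s flag of bin/solr start. The /var/solr-data path is illustrative, and the new directory still needs a solr.xml seeded into it (e.g. by the init script mounted at /docker-entrypoint-initdb.d/):

          command:
            - bash
            - -c
            - "exec /opt/solr/bin/solr start -s /var/solr-data"
          volumeMounts:
            - name: solr-script
              mountPath: /docker-entrypoint-initdb.d/
            - name: datadir
              mountPath: /var/solr-data   # no longer shadows the Solr install at /opt/solr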
