Kubernetes pod won't start and gets CrashLoopBackOff - reactjs

When I try to run kubectl apply -f frontend.yaml, I get the following responses from kubectl get pods and kubectl describe pods.
# frontend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: malvacom-frontend
  labels:
    app: malvacom-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: malvacom-frontend
  template:
    metadata:
      labels:
        app: malvacom-frontend
    spec:
      containers:
        - name: malvacom-frontend
          image: docker.io/forsrobin/malvacom_frontend
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "128Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /index.html
              port: 80
            initialDelaySeconds: 15
            timeoutSeconds: 2
            periodSeconds: 5
            failureThreshold: 1
          readinessProbe:
            httpGet:
              path: /index.html
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 5
            failureThreshold: 1
          command: [ "sleep" ]
          args: [ "infinity" ]
and then the output is
kubectl get pods
malvacom-frontend-8575c8548b-n959r 0/1 CrashLoopBackOff 5 (95s ago) 4m38s
kubectl describe pods
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17s default-scheduler Successfully assigned default/malvacom-frontend-8575c8548b-n959r to shoot--p1622--malvacom-web-xdmoi2-z1-54776-bpjpw
Normal Pulled 15s (x2 over 16s) kubelet Container image "docker.io/forsrobin/malvacom_frontend" already present on machine
Normal Created 15s (x2 over 16s) kubelet Created container malvacom-frontend
Normal Started 15s (x2 over 16s) kubelet Started container malvacom-frontend
Warning BackOff 11s (x4 over 14s) kubelet Back-off restarting failed container
As I understand it, the pod starts, but because it has no continuous task to do, Kubernetes removes/stops it. I can run the image locally without any problem, and if I use another image, for example thenetworkchuck/nccoffee:pourover, it works without any problems. This is my Dockerfile:
FROM node:alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY ./package.json /app/
RUN yarn --silent
COPY . /app
RUN yarn build
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

You're explicitly telling Kubernetes not to run the image's normal server:
command: [ "sleep" ]
args: [ "infinity" ]
but then you require it to pass an HTTP health check:
livenessProbe:
  httpGet:
    path: /index.html
    port: 80
Since sleep infinity doesn't run an HTTP server, this probe will never pass, which causes your container to get killed and restarted.
You shouldn't need to do artificial things to "keep the container alive"; delete the command: and args: override. (The Dockerfile CMD is correct, but you get an identical CMD from the base nginx image and you don't need to repeat it.)
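For reference, here is a minimal sketch of the container section with the override removed (same image, port and probes as in the question); with no command:/args:, the nginx CMD baked into the image runs, serves /index.html on port 80, and the probes can succeed:
# Sketch: the question's container spec without the sleep override.
containers:
  - name: malvacom-frontend
    image: docker.io/forsrobin/malvacom_frontend
    imagePullPolicy: IfNotPresent
    ports:
      - containerPort: 80
    resources:
      limits:
        memory: "128Mi"
        cpu: "200m"
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
    # no command:/args: override -- nginx from the base image keeps running
You may also want a failureThreshold higher than 1 so a single slow probe response does not restart the pod.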

Related

Network IP from Docker not working for React + Vite.js, therefore can't access k8s pod

I have a very simple React TypeScript application and I'm using Vite for the first time to replace Webpack.
I have the following vite.config.js:
server: {
  watch: {
    usePolling: true,
  },
  open: false,
  host: '0.0.0.0',
},
and created a Dockerfile with these instructions:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
COPY ./build-prod ./build-prod
COPY ./node_modules ./node_modules
RUN npm install husky -g --production
RUN npm install esbuild-linux-arm64 --production
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
When I now run docker run -p 3000:3000 hello-world-app-frontend, I can access my app at http://localhost:3000/, but opening the network address http://172.17.0.3:3000/ just loads an untitled window.
I think this is especially a problem for me as I want to create a basic Kubernetes config like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-world-app-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world-app-frontend
  template:
    metadata:
      labels:
        app: hello-world-app-frontend
    spec:
      containers:
        - name: hello-world-app-frontend
          image: hello-world-app-frontend
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
      restartPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
  name: hello-world-app-frontend
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      nodePort: 31000
  selector:
    app: hello-world-app-frontend
But opening the IP address from my Pod returns nothing in Chrome (e.g. http://10.106.213.128:3000/).
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/hello-world-app-frontend-77899b46d7-cc4td 1/1 Running 0 16h
default pod/hello-world-app-frontend-77899b46d7-vqtbz 1/1 Running 0 16h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/hello-world-app-frontend NodePort 10.106.213.128 <none> 3000:31000/TCP 16h
Can somebody give me a few hints how I can access the React application from my k8s pod?

Dockerfile entrypoint in Kubernetes not executed

I have a Docker-based ReactJS app; a shell script is defined in the Docker image as the ENTRYPOINT, and I'm able to run docker run image-name successfully.
Now the task is to use this Docker image for a Kubernetes deployment using standard deployment.yaml file templates, something like the following:
# Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 120
      containers:
        - name: my-app
          imagePullPolicy: Always
          image: my-docker-image
          command: ["/bin/bash"]
          args: ["-c","./entrypoint.sh;while true; do echo hello; sleep 10;done"]
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      nodePort: 31110
---
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 3000
When I run kubectl apply -f mydeployment.yaml, it creates the required pod, but the entrypoint.sh script is not executed when the pod starts, unlike when the Docker image is run directly. Can someone please help point out what is wrong with the above YAML file? Am I missing or doing something incorrectly?
I also tried directly calling npm run start in command: [] within the YAML, but no luck. I can enter the pod's container using kubectl exec, but I don't see the React app running; I can manually execute entrypoint.sh and see the required output in the browser.
Edit: Adding kubectl logs and describe output
Logs: when I removed command/args from the YAML and applied deploy.yaml, I get the following logs as-is, up to the "Starting the development server..." line; there's nothing beyond that.
> myapp start /app
> react-scripts start
ℹ 「wds」: Project is running at http://x.x.x.x/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /app/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...
Describe output
Name: my-view-85b597db55-72jr8
Namespace: default
Priority: 0
Node: my-node/x.x.x.x
Start Time: Fri, 16 Apr 2021 11:13:20 +0800
Labels: app=my-app
pod-template-hash=85b597db55
Annotations: cni.projectcalico.org/podIP: x.x.x.x/xx
cni.projectcalico.org/podIPs: x.x.x.x/xx
Status: Running
IP: x.x.x.x
IPs:
IP: x.x.x.x
Controlled By: ReplicaSet/my-view-container-85b597db55
Containers:
my-ui-container:
Container ID: containerd://671a1db809b7f583b2f3702e06cee3477ab1412d1e4aa8ac93106d8583f2c5b6
Image: my-docker-image
Image ID: my-docker-image@sha256:29f5fc74aa0302039c37d14201f5c85bc8278fbeb7d70daa2d867b7faa6d6770
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 16 Apr 2021 11:13:41 +0800
Finished: Fri, 16 Apr 2021 11:13:43 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 16 Apr 2021 11:13:24 +0800
Finished: Fri, 16 Apr 2021 11:13:26 +0800
Ready: False
Restart Count: 2
Environment:
MY_ENVIRONMENT_NAME: TEST_ENV
MY_SERVICE_NAME: my-view-service
MY_SERVICE_MAIN_PORT: 3000
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9z8bw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-9z8bw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9z8bw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned default/my-view-container-85b597db55-72jr8 to my-host
Normal Pulled 31s kubelet Successfully pulled image "my-docker-image" in 184.743641ms
Normal Pulled 28s kubelet Successfully pulled image "my-docker-image" in 252.382942ms
Normal Pulling 11s (x3 over 31s) kubelet Pulling image "my-docker-image"
Normal Pulled 11s kubelet Successfully pulled image "my-docker-image" in 211.2478ms
Normal Created 11s (x3 over 31s) kubelet Created container my-view-container
Normal Started 11s (x3 over 31s) kubelet Started container my-view-container
Warning BackOff 8s (x2 over 26s) kubelet Back-off restarting failed container
and my entrypoint.sh is
#!/bin/bash
( export REACT_APP_ENV_VAR=env_var_value;npm run start )
exec "$#"
When you write this in a pod description:
containers:
  - name: my-app
    imagePullPolicy: Always
    image: my-docker-image
    command: ["/bin/bash"]
    args: ["-c","./entrypoint.sh;while true; do echo hello; sleep 10;done"]
The command argument overrides the container ENTRYPOINT. The above
is roughly equivalent to:
docker run --entrypoint /bin/bash my-docker-image ...args here...
If you want to use the ENTRYPOINT from the image, then just set args.
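As a rough sketch (using the same hypothetical names as the question), the container spec could keep the image's ENTRYPOINT and only pass arguments; anything listed under args: is handed to entrypoint.sh as its positional parameters:
# Sketch: no command: override, so the Dockerfile ENTRYPOINT (entrypoint.sh) still runs.
containers:
  - name: my-app
    imagePullPolicy: Always
    image: my-docker-image
    args: []   # add arguments for entrypoint.sh here if it needs any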
I finally figured out the solution: I included the following options in the YAML under the container spec and removed command/args as mentioned in the comments above. Hopefully it'll be useful to anyone facing this issue.
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 120
      containers:
        - name: my-app
          imagePullPolicy: Always
          image: my-docker-image
          stdin: true
          tty: true

Kubernetes not waiting for reactjs to load

I have a GKE cluster. I am trying to deploy a ReactJS frontend app, but it seems like Kubernetes is restarting the pod before it can fully load. I can run the container manually with Docker and the app loads successfully, but it takes a long time to load (10 minutes), I think because I am using the most basic machines in GCP.
I am trying to use probes so that Kubernetes waits until the app is up and running, but I cannot make it work. Is there any other way to tell Kubernetes to wait for app startup? Thank you.
This is my deploy file:
kind: Deployment
metadata:
  labels:
    app: livenessprobe
  name: livenessprobe
spec:
  replicas: 1
  selector:
    matchLabels:
      app: livenessprobe
  template:
    metadata:
      labels:
        app: livenessprobe
    spec:
      containers:
        - image: mychattanooga:v1
          name: mychattanooga
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 99
            periodSeconds: 30
          resources: {}
The pod restarts every 5 seconds or so, then I get CrashLoopBackOff, and it restarts again...
kubectl get events:
assigned default/mychattanooga-85f44599df-t6tnr to gke-cluster-2-default-pool-054176ff-wsp6
13m   Normal    Pulled    pod/mychattanooga-85f44599df-t6tnr   Container image "#####/mychattanooga@sha256:03dd2d6ef44add5c9165410874cee9155af645f88896e5d5cafb883265c3d4c9" already present on machine
13m   Normal    Created   pod/mychattanooga-85f44599df-t6tnr   Created container mychattanooga-sha256-1
13m   Normal    Started   pod/mychattanooga-85f44599df-t6tnr   Started container mychattanooga-sha256-1
13m   Warning   BackOff   pod/mychattanooga-85f44599df-t6tnr   Back-off restarting failed container
kubectl describe pod:
Name: livenessprobe-5f9b566f76-dqk5s
Namespace: default
Priority: 0
Node: gke-cluster-2-default-pool-054176ff-wsp6/10.142.0.2
Start Time: Wed, 01 Jul 2020 04:01:22 -0400
Labels: app=livenessprobe
pod-template-hash=5f9b566f76
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container mychattanooga
Status: Running
IP: 10.36.0.58
IPs: <none>
Controlled By: ReplicaSet/livenessprobe-5f9b566f76
Containers:
mychattanooga:
Container ID: docker://cf33dafd0bb21fa7ddc86d96f7a0445d6d991e3c9f0327195db355f1b3aca526
Image: #####/mychattanooga:v1
Image ID: docker-pullable://gcr.io/operational-datastore/mychattanooga@sha256:03dd2d6ef44add5c9165410874cee9155af645f88896e5d5cafb883265c3d4c9
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 01 Jul 2020 04:04:35 -0400
Finished: Wed, 01 Jul 2020 04:04:38 -0400
Ready: False
Restart Count: 5
Requests:
cpu: 100m
Liveness: http-get http://:3000/healthz delay=999s timeout=1s period=300s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zvncw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-zvncw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zvncw
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal   Scheduled  4m46s                   default-scheduler                                   Successfully assigned default/livenessprobe-5f9b566f76-dqk5s to gke-cluster-2-default-pool-054176ff-wsp6
Normal   Pulled     3m10s (x5 over 4m45s)   kubelet, gke-cluster-2-default-pool-054176ff-wsp6   Container image "#######/mychattanooga:v1" already present on machine
Normal   Created    3m10s (x5 over 4m45s)   kubelet, gke-cluster-2-default-pool-054176ff-wsp6   Created container mychattanooga
Normal   Started    3m10s (x5 over 4m45s)   kubelet, gke-cluster-2-default-pool-054176ff-wsp6   Started container mychattanooga
Warning  BackOff    2m43s (x10 over 4m38s)  kubelet, gke-cluster-2-default-pool-054176ff-wsp6   Back-off restarting failed container
This is my Dockerfile:
FROM node:latest
# Copy source code
COPY source/ /opt/app
# Change working directory
WORKDIR /opt/app
# install stuff
RUN npm install
# Expose API port to the outside
EXPOSE 3000
# Launch application
CMD ["npm", "start"]
From the docs here you can protect slow-starting containers with startup probes.
Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe. The trick is to set up a startup probe with the same command, HTTP or TCP check, with a failureThreshold * periodSeconds long enough to cover the worst-case startup time:
startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10
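Adapted to the deployment in the question, a sketch could look like the following (assuming the app really does serve /healthz on port 3000, as the existing livenessProbe implies); failureThreshold: 60 with periodSeconds: 10 gives the container up to 10 minutes to come up before the liveness probe takes over, matching the startup time reported above:
# Sketch, assuming /healthz on port 3000 as in the question's livenessProbe.
containers:
  - image: mychattanooga:v1
    name: mychattanooga
    startupProbe:
      httpGet:
        path: /healthz
        port: 3000
      failureThreshold: 60   # 60 * 10s = up to 10 minutes of startup time
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 3000
      periodSeconds: 30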
I got it working by downgrading to react-scripts 3.4.0
The CrashLoopBackOff was caused by a webpackDevServer issue.
After downgrading, I added these fields to the deployment's container spec:
stdin: true
tty: true
I hope you got it working.
Add this to your deployment file
containers:
  - image: mychattanooga:v1
    name: mychattanooga
    readinessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 3
    livenessProbe:
      tcpSocket:
        port: 3000
      initialDelaySeconds: 15
      periodSeconds: 20

How to connect frontend to backend inside a k8s cluster (connection refused)

Error while trying to connect a React frontend web app to a Node.js Express API server inside a Kubernetes cluster.
I can navigate in the browser to http://localhost:3000 and the web site is OK.
But I can't navigate to http://localhost:3008, as expected (it should not be exposed).
My goal is to pass the REACT_APP_API_URL environment variable to the frontend in order to set the axios baseURL and be able to establish communication between the frontend and its API server.
deploy-front.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: binomio/gbpd-front:k8s-3
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always
service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 3000
      targetPort: 3000
  type: LoadBalancer
Deploy-back.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: binomio/gbpd-back:dev
          ports:
            - name: http
              containerPort: 3008
service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
    - protocol: "TCP"
      port: 3008
      targetPort: http
I tried many combinations, and also tried adding "LoadBalancer" to the back service, but nothing...
I can connect perfectly to localhost:3000 and use the frontend, but the frontend can't connect to the backend service.
Question 1: What is the IP/name to use in order to pass REACT_APP_API_URL to the frontend correctly?
Question 2: Why is curl localhost:3008 not answering?
After 2 days of trying almost everything in the k8s official docs... I can't figure out what's happening here, so any help will be much appreciated.
kubectl describe svc gbpd-api
Response:
kubectl describe svc gbpd-api
Name: gbpd-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"gbpd-api","namespace":"default"},"spec":{"ports":[{"port":3008,"p...
Selector: app=gbpd-api,tier=backend
Type: LoadBalancer
IP: 10.107.145.227
LoadBalancer Ingress: localhost
Port: <unset> 3008/TCP
TargetPort: http/TCP
NodePort: <unset> 31464/TCP
Endpoints: 10.1.1.48:3008,10.1.1.49:3008,10.1.1.50:3008
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I tested your environment and it worked when using an Nginx image. Let's review the environment:
The front-deployment is correctly described.
The front-service exposes it as a LoadBalancer, meaning your frontend is accessible from outside, perfect.
The back deployment is also correctly described.
The backend-service stays as a ClusterIP in order to be accessible only from inside the cluster, great.
Below I'll demonstrate the communication between the frontend and the backend.
I'm using the same YAMLs you provided, I just changed the image to Nginx for example purposes, and since it's an HTTP server I changed containerPort to 80.
Question 1: What is the IP/name to use in order to pass REACT_APP_API_URL to the frontend correctly?
I added the ENV variable to the front deployment as requested, and I'll also use it in the demonstration. You must curl the service name; I used the short version because we are working in the same namespace. You can also use the full name: http://gbpd-api.default.svc.cluster.local:3008
Reproduction:
Create the YAMLs and apply them:
$ cat deploy-front.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: nginx
          env:
            - name: REACT_APP_API_URL
              value: http://gbpd-api:3008
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always
$ cat service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 3000
      targetPort: 80
  type: LoadBalancer
$ cat deploy-back.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: nginx
          ports:
            - name: http
              containerPort: 80
$ cat service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
    - protocol: "TCP"
      port: 3008
      targetPort: http
$ kubectl apply -f deploy-front.yaml
deployment.apps/gbpd-front created
$ kubectl apply -f service-front.yaml
service/gbpd-front created
$ kubectl apply -f deploy-back.yaml
deployment.apps/gbpd-api created
$ kubectl apply -f service-back.yaml
service/gbpd-api created
Remember, in Kubernetes communication is designed to go through Services, because pods are recreated whenever there is a change in the deployment or when a pod fails.
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/gbpd-api-dc5b4b74b-kktb9 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-mzpbg 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-t6qxh 1/1 Running 0 41m
pod/gbpd-front-66b48f8b7c-4zstv 1/1 Running 0 30m
pod/gbpd-front-66b48f8b7c-h58ds 1/1 Running 0 31m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gbpd-api ClusterIP 10.0.10.166 <none> 3008/TCP 40m
service/gbpd-front LoadBalancer 10.0.11.78 35.223.4.218 3000:32411/TCP 42m
The pods are the workers, and since they are replaceable by nature, we will connect to a frontend pod to simulate its behaviour and try to connect to the backend service (which is the network layer that directs the traffic to one of the backend pods).
The nginx image does not come with curl preinstalled, so I will have to install it for demonstration purposes:
$ kubectl exec -it pod/gbpd-front-66b48f8b7c-4zstv -- /bin/bash
root@gbpd-front-66b48f8b7c-4zstv:/# apt update && apt install curl -y
done.
root@gbpd-front-66b48f8b7c-4zstv:/# curl gbpd-api:3008
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Now let's try using the environment variable that was defined:
root@gbpd-front-66b48f8b7c-4zstv:/# printenv | grep REACT
REACT_APP_API_URL=http://gbpd-api:3008
root@gbpd-front-66b48f8b7c-4zstv:/# curl $REACT_APP_API_URL
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Considerations:
Question 2: Why is curl localhost:3008 not answering?
Since all the YAMLs are correctly described, you must check whether image binomio/gbpd-back:dev is actually serving on port 3008 as intended.
Since it's not a public image, I can't test it, so I'll give you troubleshooting steps:
Just like we logged into the frontend pod, you will have to log into this backend pod and test curl localhost:3008.
If it's based on a Linux distro with apt-get, you can run the commands just like I did in my demo:
get the pod name from backend deploy (example: gbpd-api-6676c7695c-6bs5n)
run kubectl exec -it pod/<POD_NAME> -- /bin/bash
then run apt update && apt install curl -y
and test curl localhost:3008
if there is no answer, run apt update && apt install net-tools
and test netstat -nlpt; it will show you the services running and their respective ports, for example:
root@gbpd-api-585df9cb4d-xr6nk:/# netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1/nginx: master pro
If the pod does not return anything even with this approach, you will have to check the code in the image.
Let me know if you need help after that!

Cannot access exposed Dockerized React app on Kubernetes

I'm trying to deploy my Dockerized React app to Kubernetes. I believe I've Dockerized it correctly, but I'm having trouble accessing the exposed pod.
I don't have experience in Docker or Kubernetes, so any help would be appreciated.
My React app is just static files (from npm run build) being served from Tomcat.
My Dockerfile is below. In summary, I put my app in the Tomcat folder and expose port 8080.
FROM private-docker-registry.com/repo/tomcat:latest
EXPOSE 8080:8080
# Copy build directory to Tomcat webapps directory
RUN mkdir -p /tomcat/webapps/app
COPY /build/sample-app /tomcat/webapps/app
# Create a symbolic link to ROOT -- this way the app starts at the root path (localhost:8080)
RUN ln -s /tomcat/webapps/app /tomcat/webapps/ROOT
# Start Tomcat
ENTRYPOINT ["catalina.sh", "run"]
I built and pushed the Docker image to the private Docker registry.
I verified that container runs correctly by running it like this:
docker run -p 8080:8080 private-docker-registry.com/repo/sample-app:latest
Then, if I go to localhost:8080, I see the homepage of my React app.
Now, the trouble I'm having is deploying to Kubernetes and accessing the app externally.
Here's my deployment.yaml file:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: sample-app
  namespace: dev
  labels:
    app: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: private-docker-registry.com/repo/sample-app:latest
          ports:
            - containerPort: 8080
              protocol: TCP
      nodeSelector:
        TNTRole: luxkube
---
kind: Service
apiVersion: v1
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  selector:
    app: sample-app
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
I created the deployment and service by running
kubectl --namespace=dev create -f deployment.yaml
Output of 'describe deployment'
Name: sample-app
Namespace: dev
CreationTimestamp: Sat, 21 Jul 2018 12:27:30 -0400
Labels: app=sample-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sample-app
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sample-app
Containers:
sample-app:
Image: private-docker-registry.com/repo/sample-app:latest
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: sample-app-bb6f59b9 (1/1 replicas created)
Events: <none>
Output of 'describe service'
Name: sample-app
Namespace: fab-dev
Labels: app=sample-app
Annotations: <none>
Selector: app=sample-app
Type: NodePort
IP: 10.96.29.199
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 34604/TCP
Endpoints: 192.168.138.145:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now I don't know which IP and port I should be using to access the app.
I have tried every combination, but none has loaded my app. I believe the port should be 80, so if I just have the IP, I should be able to access the React app in the browser by going to http://<IP>.
Does anyone have suggestions?
The short version is that the Service is listening on the same TCP/IP port on every Node in your cluster (34604) as is shown in the output of describe service:
NodePort: <unset> 34604
If you wish to access the application through a "nice" URL, you'll want a load balancer that can translate the hostname into the in-cluster IP and port combination. That's what an Ingress controller is designed to do, but it isn't the only way -- changing the Service to be type: LoadBalancer will do that for you, if you're running in a cloud environment where Kubernetes knows how to programmatically create load balancers for you.
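As an illustration only, here is the question's Service rewritten as type: LoadBalancer (same selector and ports); on a cloud provider that supports it, Kubernetes provisions an external load balancer and the app becomes reachable on port 80 of the load balancer's address:
# Sketch: the same Service as in the question, exposed through a cloud load balancer.
kind: Service
apiVersion: v1
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  selector:
    app: sample-app
  type: LoadBalancer   # cloud provider provisions an external load balancer
  ports:
    - port: 80         # external port
      targetPort: 8080 # Tomcat port inside the pod
      protocol: TCP
With the existing NodePort Service, the app is instead reachable at http://<any-node-ip>:34604.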
I believe you have found the answer by now :). I landed here as I was facing this issue and solved it myself; I hope this helps everyone.
Here's what can help:
Deploy your app (say: react-app).
Run the below command:
kubectl expose deployment <workload> --namespace=app-dev --name=react-app --type=NodePort --port=3000
output: service/notesui-app exposed
This publishes the service with port 3000, target port 3000, and node port (auto-selected) 32250.
kubectl get svc react-app --namespace=notesui-dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
react-app NodePort 10.23.22.55 <none> 3000:32250/TCP 48m
Yaml: (sample)
apiVersion: v1
kind: Service
metadata:
  name: react-app
  namespace: app-dev
spec:
  selector:
    app: <workload>
  ports:
    - nodePort: 32250
      port: 3000
      protocol: TCP
      targetPort: 3000
  type: NodePort
status: {}
Access the app in the browser:
http://<Host>:32250/index
<Host> is the IP of the node where the pod is running.
If you have the app running on multiple nodes (scaled out), the NodePort is set on every node.
The app can be accessed at:
http://<Host1>:32250/index
http://<Host2>:32250/index
