I have a GKE cluster. I am trying to deploy a ReactJS frontend app, but it seems like Kubernetes is restarting the pod before it can fully load. I can run the container manually with Docker and the app loads successfully, but it takes a long time to load (10 minutes), I think because I am using the most basic servers in GCP.
I am trying to use probes so that Kubernetes waits until the app is up and running, but I can not make it work. Is there any other way to tell Kubernetes to wait for app startup? Thank you.
this is my deploy file:
kind: Deployment
metadata:
labels:
app: livenessprobe
name: livenessprobe
spec:
replicas: 1
selector:
matchLabels:
app: livenessprobe
template:
metadata:
labels:
app: livenessprobe
spec:
containers:
- image: mychattanooga:v1
name: mychattanooga
livenessProbe:
httpGet:
path: /healthz
port: 3000
initialDelaySeconds: 99
periodSeconds: 30
resources: {}
The pod restarts every 5 seconds or so, then I get CrashLoopBackOff, and it restarts again .....
kubectl get events:
assigned default/mychattanooga-85f44599df-t6tnr to gke-cluster-2-default-pool-054176ff-wsp6
13m Normal Pulled pod/mychattanooga-85f44599df-t6tnr Container image "#####/mychattanooga@sha256:03dd2d6ef44add5c9165410874cee9155af645f88896e5d5cafb883265c3d4c9" already present on machine
13m Normal Created pod/mychattanooga-85f44599df-t6tnr Created container mychattanooga-sha256-1
13m Normal Started pod/mychattanooga-85f44599df-t6tnr Started container mychattanooga-sha256-1
13m Warning BackOff pod/mychattanooga-85f44599df-t6tnr Back-off restarting failed container
kubectl describe pod:
Name: livenessprobe-5f9b566f76-dqk5s
Namespace: default
Priority: 0
Node: gke-cluster-2-default-pool-054176ff-wsp6/10.142.0.2
Start Time: Wed, 01 Jul 2020 04:01:22 -0400
Labels: app=livenessprobe
pod-template-hash=5f9b566f76
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container mychattanooga
Status: Running
IP: 10.36.0.58
IPs: <none>
Controlled By: ReplicaSet/livenessprobe-5f9b566f76
Containers:
mychattanooga:
Container ID: docker://cf33dafd0bb21fa7ddc86d96f7a0445d6d991e3c9f0327195db355f1b3aca526
Image: #####/mychattanooga:v1
Image ID: docker-pullable://gcr.io/operational-datastore/mychattanooga@sha256:03dd2d6ef44add5c9165410874cee9155af645f88896e5d5cafb883265c3d4c9
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 01 Jul 2020 04:04:35 -0400
Finished: Wed, 01 Jul 2020 04:04:38 -0400
Ready: False
Restart Count: 5
Requests:
cpu: 100m
Liveness: http-get http://:3000/healthz delay=999s timeout=1s period=300s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zvncw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-zvncw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zvncw
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m46s default-scheduler Successfully assigned default/livenessprobe-5f9b566f76-dqk5s to gke-cluster-2-default-pool-054176ff-wsp6
Normal Pulled 3m10s (x5 over 4m45s) kubelet, gke-cluster-2-default-pool-054176ff-wsp6 Container image "#######/mychattanooga:v1" already present on machine
Normal Created 3m10s (x5 over 4m45s) kubelet, gke-cluster-2-default-pool-054176ff-wsp6 Created container mychattanooga
Normal Started 3m10s (x5 over 4m45s) kubelet, gke-cluster-2-default-pool-054176ff-wsp6 Started container mychattanooga
Warning BackOff 2m43s (x10 over 4m38s) kubelet, gke-cluster-2-default-pool-054176ff-wsp6 Back-off restarting failed container
this is my Dockerfile:
FROM node:latest
# Copy source code
COPY source/ /opt/app
# Change working directory
WORKDIR /opt/app
# install stuff
RUN npm install
# Expose API port to the outside
EXPOSE 3000
# Launch application
CMD ["npm", "start"]
From the docs here, you can protect slow-starting containers with startup probes:
Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe. The trick is to set up a startup probe with the same command, HTTP or TCP check, with a failureThreshold * periodSeconds long enough to cover the worst-case startup time.
startupProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 30
periodSeconds: 10
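Applied to the deployment in the question, a minimal sketch could look like this (assuming the app really does serve /healthz on port 3000 once it is up, as your existing livenessProbe implies); failureThreshold: 60 with periodSeconds: 10 gives the container up to 60 * 10 = 600 seconds (10 minutes) to start before the liveness probe takes over:
containers:
- image: mychattanooga:v1
  name: mychattanooga
  startupProbe:
    httpGet:
      path: /healthz
      port: 3000
    failureThreshold: 60
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz
      port: 3000
    periodSeconds: 30
If the app does not actually expose /healthz, no probe configuration will help; the probe endpoint has to exist and return a success status.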
I got it working by downgrading to react-scripts 3.4.0.
The CrashLoopBackOff was caused by a webpack-dev-server issue.
After downgrading, I added these fields to the container in the deployment:
stdin: true
tty: true
I hope you got it working.
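For context, this is roughly where those fields sit in the container spec (a sketch; the rest of the deployment stays as in the question):
containers:
- image: mychattanooga:v1
  name: mychattanooga
  stdin: true
  tty: true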
Add this to your deployment file
containers:
- image: mychattanooga:v1
name: mychattanooga
readinessProbe:
tcpSocket:
port: 3000
initialDelaySeconds: 20
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
livenessProbe:
tcpSocket:
port: 3000
initialDelaySeconds: 15
periodSeconds: 20
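Note that if the app genuinely takes around 10 minutes to start and the TCP port only opens once startup finishes, an initialDelaySeconds of 15-20 seconds will still be too short; in that case, combine these probes with a startupProbe as shown in the answer above.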
Related
I'm following Les Jackson's tutorial on microservices and got stuck at 05:30:00 while creating a deployment for an MS SQL Server. I've written the deployment file just as shown in the YouTube video:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mssql-depl
spec:
replicas: 1
selector:
matchLabels:
app: mssql
template:
metadata:
labels:
app: mssql
spec:
containers:
- name: mssql
image: mcr.microsoft.com/mssql/server:2017-latest
ports:
- containerPort: 1433
env:
- name: MSSQL_PID
value: "Express"
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql
key: SA_PASSWORD
volumeMounts:
- mountPath: /var/opt/mssql/data
name: mssqldb
volumes:
- name: mssqldb
persistentVolumeClaim:
claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
name: mssql-clusterip-srv
spec:
type: ClusterIP
selector:
app: mssql
ports:
- name: mssql
protocol: TCP
port: 1433 # this is default port for mssql
targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
name: mssql-loadbalancer
spec:
type: LoadBalancer
selector:
app: mssql
ports:
- protocol: TCP
port: 1433 # this is default port for mssql
targetPort: 1433
The persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mssql-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 200Mi
But when I apply this deployment, the pod ends up with ImagePullBackOff status:
commands-depl-688f77b9c6-vln5v 1/1 Running 0 2d21h
mssql-depl-5cd6d7d486-m8nw6 0/1 ImagePullBackOff 0 4m54s
platforms-depl-6b6cf9b478-ktlhf 1/1 Running 0 2d21h
kubectl describe pod
Name: mssql-depl-5cd6d7d486-nrrkn
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Thu, 28 Jul 2022 12:09:34 +0200
Labels: app=mssql
pod-template-hash=5cd6d7d486
Annotations: <none>
Status: Pending
IP: 10.1.0.27
IPs:
IP: 10.1.0.27
Controlled By: ReplicaSet/mssql-depl-5cd6d7d486
Containers:
mssql:
Container ID:
Image: mcr.microsoft.com/mssql/server:2017-latest
Image ID:
Port: 1433/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MSSQL_PID: Express
ACCEPT_EULA: Y
SA_PASSWORD: <set to the key 'SA_PASSWORD' in secret 'mssql'> Optional: false
Mounts:
/var/opt/mssql/data from mssqldb (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xqzks (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mssqldb:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mssql-claim
ReadOnly: false
kube-api-access-xqzks:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m42s default-scheduler Successfully assigned default/mssql-depl-5cd6d7d486-nrrkn to docker-desktop
Warning Failed 102s kubelet Failed to pull image "mcr.microsoft.com/mssql/server:2017-latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 102s kubelet Error: ErrImagePull
Normal BackOff 102s kubelet Back-off pulling image "mcr.microsoft.com/mssql/server:2017-latest"
Warning Failed 102s kubelet Error: ImagePullBackOff
Normal Pulling 87s (x2 over 3m41s) kubelet Pulling image "mcr.microsoft.com/mssql/server:2017-latest"
In the events it shows
"rpc error: code = Unknown desc = context deadline exceeded"
but that doesn't tell me much, and the resources I found on troubleshooting don't cover this error.
I'm using Kubernetes on Docker Desktop locally.
I've read that this issue can happen when pulling an image from a private registry, but this is a public one, right here. I copy-pasted the image path to be sure, and I tried a different MS SQL version, but to no avail.
Could someone be so kind and show me the right direction I should go / what I should try to get this to work? It worked just fine in the video :(
I fixed it by manually pulling the image via docker pull mcr.microsoft.com/mssql/server:2017-latest and then deleting and re-applying the deployment.
In my case, I needed to pull the image "into minikube" using minikube ssh docker pull <the_image>.
Then I could apply my deployment without errors.
Source: https://github.com/kubernetes/minikube/issues/14806
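If you pre-pull the image like this, it also helps to make sure the pod is allowed to use the locally cached copy; setting imagePullPolicy explicitly avoids surprises (a sketch of just the relevant part of the container spec from the question):
containers:
- name: mssql
  image: mcr.microsoft.com/mssql/server:2017-latest
  imagePullPolicy: IfNotPresent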
When I try to run kubectl apply -f frontend.yaml, I get the following responses from kubectl get pods and kubectl describe pods.
// frontend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: malvacom-frontend
labels:
app: malvacom-frontend
spec:
replicas: 1
selector:
matchLabels:
app: malvacom-frontend
template:
metadata:
labels:
app: malvacom-frontend
spec:
containers:
- name: malvacom-frontend
image: docker.io/forsrobin/malvacom_frontend
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
resources:
limits:
memory: "128Mi"
cpu: "200m"
livenessProbe:
httpGet:
path: /index.html
port: 80
initialDelaySeconds: 15
timeoutSeconds: 2
periodSeconds: 5
failureThreshold: 1
readinessProbe:
httpGet:
path: /index.html
port: 80
initialDelaySeconds: 15
periodSeconds: 5
failureThreshold: 1
command: [ "sleep" ]
args: [ "infinity" ]
and then the responses are
kubectl get pods
malvacom-frontend-8575c8548b-n959r 0/1 CrashLoopBackOff 5 (95s ago) 4m38s
kubectl describe pods
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17s default-scheduler Successfully assigned default/malvacom-frontend-8575c8548b-n959r to shoot--p1622--malvacom-web-xdmoi2-z1-54776-bpjpw
Normal Pulled 15s (x2 over 16s) kubelet Container image "docker.io/forsrobin/malvacom_frontend" already present on machine
Normal Created 15s (x2 over 16s) kubelet Created container malvacom-frontend
Normal Started 15s (x2 over 16s) kubelet Started container malvacom-frontend
Warning BackOff 11s (x4 over 14s) kubelet Back-off restarting failed container
As I understand it, the pod starts, but because it has no continuous task to do, Kubernetes removes/stops the pod. I can run the image locally without any problem, and if I use another image, for example thenetworkchuck/nccoffee:pourover, it works without any problems. This is my Dockerfile:
FROM node:alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY ./package.json /app/
RUN yarn --silent
COPY . /app
RUN yarn build
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
You're explicitly telling Kubernetes to not run its normal server
command: [ "sleep" ]
args: [ "infinity" ]
but then expecting it to pass an HTTP health check:
livenessProbe:
httpGet:
path: /index.html
port: 80
Since sleep infinity doesn't run an HTTP server, this probe will never pass, which causes your container to get killed and restarted.
You shouldn't need to do artificial things to "keep the container alive"; delete the command: and args: override. (The Dockerfile CMD is correct, but you get an identical CMD from the base nginx image and you don't need to repeat it.)
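For reference, this is the container spec from the question with only the command:/args: override removed; nothing else needs to change:
containers:
- name: malvacom-frontend
  image: docker.io/forsrobin/malvacom_frontend
  imagePullPolicy: IfNotPresent
  ports:
  - containerPort: 80
  resources:
    limits:
      memory: "128Mi"
      cpu: "200m"
  livenessProbe:
    httpGet:
      path: /index.html
      port: 80
    initialDelaySeconds: 15
    timeoutSeconds: 2
    periodSeconds: 5
    failureThreshold: 1
  readinessProbe:
    httpGet:
      path: /index.html
      port: 80
    initialDelaySeconds: 15
    periodSeconds: 5
    failureThreshold: 1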
I have a Docker-container-based ReactJS app; a shell script is defined in the Docker image as the ENTRYPOINT, and I'm able to use docker run image-name successfully.
Now the task is to use this Docker image for a Kubernetes deployment using standard deployment.yaml file templates, something like the following:
# Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
terminationGracePeriodSeconds: 120
containers:
- name: my-app
imagePullPolicy: Always
image: my-docker-image
command: ["/bin/bash"]
args: ["-c","./entrypoint.sh;while true; do echo hello; sleep 10;done"]
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
type: NodePort
selector:
app: my-app
ports:
- port: 3000
targetPort: 3000
protocol: TCP
nodePort: 31110
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 3000
When I do kubectl apply -f mydeployment.yaml, it creates the required pod, but the entrypoint.sh script is not executed when the pod is created, unlike when running the Docker image directly. Can someone please help point out what is wrong with the above YAML file? Am I missing or doing something incorrectly?
I also tried directly calling npm run start in command [] within the YAML, but no luck. I can enter the pod's container using kubectl exec, but I don't see the React app running; if I manually execute entrypoint.sh, I can see the required output in the browser.
Edit: Adding kubectl logs and describe output
Logs: when I removed command/args from the YAML and applied deploy.yaml, I get the following logs as-is, up to the "Starting the development server" line; there's nothing beyond that.
> myapp start /app
> react-scripts start
ℹ 「wds」: Project is running at http://x.x.x.x/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /app/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...
Describe output
Name: my-view-85b597db55-72jr8
Namespace: default
Priority: 0
Node: my-node/x.x.x.x
Start Time: Fri, 16 Apr 2021 11:13:20 +0800
Labels: app=my-app
pod-template-hash=85b597db55
Annotations: cni.projectcalico.org/podIP: x.x.x.x/xx
cni.projectcalico.org/podIPs: x.x.x.x/xx
Status: Running
IP: x.x.x.x
IPs:
IP: x.x.x.x
Controlled By: ReplicaSet/my-view-container-85b597db55
Containers:
my-ui-container:
Container ID: containerd://671a1db809b7f583b2f3702e06cee3477ab1412d1e4aa8ac93106d8583f2c5b6
Image: my-docker-image
Image ID: my-docker-image#sha256:29f5fc74aa0302039c37d14201f5c85bc8278fbeb7d70daa2d867b7faa6d6770
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 16 Apr 2021 11:13:41 +0800
Finished: Fri, 16 Apr 2021 11:13:43 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 16 Apr 2021 11:13:24 +0800
Finished: Fri, 16 Apr 2021 11:13:26 +0800
Ready: False
Restart Count: 2
Environment:
MY_ENVIRONMENT_NAME: TEST_ENV
MY_SERVICE_NAME: my-view-service
MY_SERVICE_MAIN_PORT: 3000
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9z8bw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-9z8bw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9z8bw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32s default-scheduler Successfully assigned default/my-view-container-85b597db55-72jr8 to my-host
Normal Pulled 31s kubelet Successfully pulled image "my-docker-image" in 184.743641ms
Normal Pulled 28s kubelet Successfully pulled image "my-docker-image" in 252.382942ms
Normal Pulling 11s (x3 over 31s) kubelet Pulling image "my-docker-image"
Normal Pulled 11s kubelet Successfully pulled image "my-docker-image" in 211.2478ms
Normal Created 11s (x3 over 31s) kubelet Created container my-view-container
Normal Started 11s (x3 over 31s) kubelet Started container my-view-container
Warning BackOff 8s (x2 over 26s) kubelet Back-off restarting failed container
and my entrypoint.sh is
#!/bin/bash
( export REACT_APP_ENV_VAR=env_var_value;npm run start )
exec "$@"
When you write this in a pod description:
containers:
- name: my-app
imagePullPolicy: Always
image: my-docker-image
command: ["/bin/bash"]
args: ["-c","./entrypoint.sh;while true; do echo hello; sleep 10;done"]
The command argument overrides the container ENTRYPOINT. The above
is roughly equivalent to:
docker run --entrypoint /bin/bash my-docker-image ...args here...
If you want to use the ENTRYPOINT from the image, then just set args.
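For example, a sketch based on the deployment in the question: dropping command: (and args: as well, since entrypoint.sh takes no arguments here) lets the image start exactly as it does under plain docker run:
containers:
- name: my-app
  imagePullPolicy: Always
  image: my-docker-image
  # no command:/args: override -> the image's ENTRYPOINT (entrypoint.sh) runs unchanged
Keep in mind that the container still exits if entrypoint.sh finishes; the script has to end in a long-running foreground process.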
I finally figured out the solution: I included the following options in the YAML under the spec section and removed command/args as mentioned in the comments above. Hopefully it'll be useful to anyone facing this issue.
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
terminationGracePeriodSeconds: 120
containers:
- name: my-app
imagePullPolicy: Always
image: my-docker-image
stdin: true
tty: true
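For what it's worth, stdin: true and tty: true are roughly the Kubernetes equivalent of docker run -it; this likely works because the react-scripts development server exits as soon as stdin is closed, which matches the describe output above where the container kept completing with exit code 0 and being restarted.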
I am deploying a SQL Server 2019 to Kubernetes with the following manifest:
apiVersion : apps/v1
kind: Deployment
metadata:
name: sql
spec:
selector:
matchLabels:
app: 'sql'
template:
metadata:
labels:
app: sql
spec:
hostname: sql-dev
securityContext:
fsGroup: 10001
initContainers:
- name: volume-permissions
image: busybox
command: ["sh", "-c", "chown -R 10001:0 /var/opt/mssql"]
volumeMounts:
- mountPath: "/var/opt/mssql"
name: mssqldb
containers:
- name: sql
image: localhost:32000/sql:dev-latest
env:
- name: MSSQL_SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql
key: SA_PASSWORD
- name: ACCEPT_EULA
value: "Y"
ports:
- containerPort: 1433
resources:
limits:
memory: 2Gi
cpu: 1
volumeMounts:
- name: mssqldb
mountPath: /var/opt/mssql
volumes:
- name: mssqldb
persistentVolumeClaim:
claimName: sqldev-pvc
---
apiVersion: v1
kind: Service
metadata:
name: sql-svc
spec:
type: LoadBalancer
ports:
- protocol: TCP
port: 1433
targetPort: 1433
nodePort: 31113
selector:
app: sql
And this is the pv/pvc manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
name: sqldev-pv
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: sql
hostPath:
path: /usr/sql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: sqldev-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: sql
resources:
requests:
storage: 1Gi
If the deployment is not present on the Cluster yet, the deployment itself works and the server is available.
The next deployment fails with the following message:
2021-01-20 12:02:34.98 Server Error: 17113, Severity: 16, State:
2021-01-20 12:02:34.98 Server Error 5(Access is denied.) occurred while opening file '/var/opt/mssql/data/master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
Doing another deployment or simply restarting it with kubectl rollout restart deployment/sql comes up fine, while the next one fails again.
The pattern is a consistent good - bad - good - bad - ...
Please explain why this is happening and how I can resolve this.
Update: Apparently one instance of mssql exclusively locks the database files - which makes total sense. You don't want 2 instances of brain going haywire on your sole instance of childhood memories.
So what I think is happening is:
Instance A exists and is up and running.
Instance B's deployment starts and wants to access the same volume as A.
Only once B is created is A terminated, with a grace period of 30 seconds.
B is trying to access the .mdf while it is still exclusively locked by A, which is currently being terminated.
I have a crude solution involving a sleep 30 bash script before initializing mssql inside the pod, but right now I want to investigate whether there is a more elegant solution.
My first approach to solve this was to delay the boot up time of mssql until the previous pod was terminated using a bash loop:
echo "Waiting 35 seconds grace period."
for i in {0..35}
do
sleep 1
echo "$i seconds waited"
done
While this technically solved the problem, it isn't very elegant. If the grace period of the pod is changed to something > 35 seconds, this will need to be changed too.
Changing the deployment strategy from its implicit default RollingUpdate to Recreate did the trick too. The effect is that the previous pod is terminated before the new one is spun up.
apiVersion : apps/v1
kind: Deployment
metadata:
name: sql
spec:
selector:
matchLabels:
app: 'sql'
strategy:
type: Recreate
Documentation: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment
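Note that Recreate trades availability for safety: every rollout has a short window of downtime while the old pod terminates before the new one is created. For a single-instance SQL Server that needs an exclusive lock on its data files, that is usually the right trade-off.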
I'm trying to deploy my Dockerized React app to Kubernetes. I believe I've Dockerized it correctly, but I'm having trouble accessing the exposed pod.
I don't have experience in Docker or Kubernetes, so any help would be appreciated.
My React app is just static files (from npm run build) being served from Tomcat.
My Dockerfile is below. In summary, I put my app in the Tomcat folder and expose port 8080.
FROM private-docker-registry.com/repo/tomcat:latest
EXPOSE 8080:8080
# Copy build directory to Tomcat webapps directory
RUN mkdir -p /tomcat/webapps/app
COPY /build/sample-app /tomcat/webapps/app
# Create a symbolic link to ROOT -- this way app starts at root path (localhost:8080)
RUN ln -s /tomcat/webapps/app /tomcat/webapps/ROOT
# Start Tomcat
ENTRYPOINT ["catalina.sh", "run"]
I built and pushed the Docker image to the private Docker registry.
I verified that container runs correctly by running it like this:
docker run -p 8080:8080 private-docker-registry.com/repo/sample-app:latest
Then, if I go to localhost:8080, I see the homepage of my React app.
Now, the trouble I'm having is deploying to Kubernetes and accessing the app externally.
Here's my deployment.yaml file:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
name: sample-app
namespace: dev
labels:
app: sample-app
spec:
replicas: 1
selector:
matchLabels:
app: sample-app
template:
metadata:
labels:
app: sample-app
spec:
containers:
- name: sample-app
image: private-docker-registry.com/repo/sample-app:latest
ports:
- containerPort: 8080
protocol: TCP
nodeSelector:
TNTRole: luxkube
---
kind: Service
apiVersion: v1
metadata:
name: sample-app
labels:
app: sample-app
spec:
selector:
app: sample-app
type: NodePort
ports:
- port: 80
targetPort: 8080
protocol: TCP
I created the deployment and service by running
kubectl --namespace=dev create -f deployment.yaml
Output of 'describe deployment'
Name: sample-app
Namespace: dev
CreationTimestamp: Sat, 21 Jul 2018 12:27:30 -0400
Labels: app=sample-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sample-app
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sample-app
Containers:
sample-app:
Image: private-docker-registry.com/repo/sample-app:latest
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: sample-app-bb6f59b9 (1/1 replicas created)
Events: <none>
Output of 'describe service'
Name: sample-app
Namespace: fab-dev
Labels: app=sample-app
Annotations: <none>
Selector: app=sample-app
Type: NodePort
IP: 10.96.29.199
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 34604/TCP
Endpoints: 192.168.138.145:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now I don't know which IP and port I should be using to access the app.
I have tried every combination, but none has loaded my app. I believe the port should be 80, so if I just have the IP, I should be able to go to the browser and access the React app by going to http://<IP>.
Does anyone have suggestions?
The short version is that the Service is listening on the same TCP/IP port on every Node in your cluster (34604) as is shown in the output of describe service:
NodePort: <unset> 34604
If you wish to access the application through a "nice" URL, you'll want a load balancer that can translate the hostname into the in-cluster IP and port combination. That's what an Ingress controller is designed to do, but it isn't the only way -- changing the Service to be type: LoadBalancer will do that for you, if you're running in a cloud environment where Kubernetes knows how to programmatically create load balancers for you.
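For example, a sketch of the Service from the question switched to a LoadBalancer (only type changes; the cloud provider then provisions an external IP that forwards port 80 to the pods):
kind: Service
apiVersion: v1
metadata:
  name: sample-app
  labels:
    app: sample-app
spec:
  selector:
    app: sample-app
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
With the NodePort Service as it stands, the app should already be reachable at http://<any-node-ip>:34604.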
I believe you found the answer by now :) - I landed here as I was facing the same issue and solved it myself; hope this helps everyone.
Here's what can help:
Deploy your app (say: react-app).
Run below command:
kubectl expose deployment <workload> --namespace=app-dev --name=react-app --type=NodePort --port=3000 output: service/notesui-app exposed
This publishes the service port as 3000, target port 3000, and a node port (auto-selected: 32250).
kubectl get svc react-app --namespace=notesui-dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
react-app NodePort 10.23.22.55 <none> 3000:32250/TCP 48m
Yaml (sample):
apiVersion: v1
kind: Service
metadata:
  name: react-app
  namespace: app-dev
spec:
  selector:
    app: <workload>   # must match your Deployment's pod labels
  ports:
  - nodePort: 32250
    port: 3000
    protocol: TCP
    targetPort: 3000
  type: NodePort
status: {}
Access the app on browser:
http://<Host>:32250/index
<Host> is the IP of the node where the pod is running.
If you have the app running on multiple nodes (scaled out), the NodePort is open on every node.
App can be accessed:
http://<Host1>:32250/index
http://<Host2>:32250/index