I'm trying to use Google Cloud Debugger on Cloud Run with Django. I read this document:
https://cloud.google.com/debugger/docs/setup/python
What I did:
I turned on the Debugger API in Google Cloud.
I installed google-python-cloud-debugger.
I created source-context.json in the same directory as models.py.
I added this code to manage.py:
try:
    import googleclouddebugger
    googleclouddebugger.enable()
except ImportError:
    pass
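The setup document also shows a variant of enable() that passes an explicit module and version, which may matter outside App Engine since the agent can't always infer them. A minimal sketch; the module and version strings are placeholders:
try:
    import googleclouddebugger
    googleclouddebugger.enable(
        module='my_app',  # placeholder: service name shown in the Debugger UI
        version='v1'      # placeholder: version label, change on each deploy
    )
except ImportError:
    pass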
I updated the container on Google Cloud Run. However, I can't find any application in Debugger.
I imported my source code from GitHub, and I can see my code in Debugger. However, I couldn't set a breakpoint on the Debugger page.
How do I debug Django on Cloud Run? Please help me.
Update
I did these two steps (command sketch below):
Added the Cloud Debugger Agent role to the service account in IAM.
Connected the GitHub repository to Google Cloud Source Repositories.
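For reference, that IAM grant can be done from the command line; MY_PROJECT is a placeholder for the project ID:
gcloud projects add-iam-policy-binding MY_PROJECT \
    --member=serviceAccount:135253772466-compute@developer.gserviceaccount.com \
    --role=roles/clouddebugger.agent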
Cloud Debugger works in my local environment. However, it doesn't work in Cloud Run.
The application list in the screenshot shows only my local application; I can't find the Cloud Run application.
This is my YAML file. (I'm using Cloud Run in fully managed mode.)
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my_app
  namespace: '135253772466'
  selfLink: /apis/serving.knative.dev/v1/namespaces/135253772466/services/my_app
  uid: 61b4ac55-4aab-4d33-801d-d21b0d116ea4
  resourceVersion: AAWmjubgiTg
  generation: 176
  creationTimestamp: '2020-04-14T12:38:39.484473Z'
  labels:
    cloud.googleapis.com/location: asia-northeast1
  annotations:
    run.googleapis.com/client-name: gcloud
    serving.knative.dev/creator: 135253772466@cloudbuild.gserviceaccount.com
    serving.knative.dev/lastModifier: 135253772466@cloudbuild.gserviceaccount.com
    client.knative.dev/user-image: gcr.io/my_project/my_app
    run.googleapis.com/client-version: 291.0.0
spec:
  traffic:
  - percent: 100
    latestRevision: true
  template:
    metadata:
      name: my_app-00176-wud
      annotations:
        run.googleapis.com/client-name: gcloud
        client.knative.dev/user-image: gcr.io/my_project/my_app
        run.googleapis.com/client-version: 291.0.0
        autoscaling.knative.dev/maxScale: '1000'
    spec:
      timeoutSeconds: 900
      serviceAccountName: 135253772466-compute@developer.gserviceaccount.com
      containerConcurrency: 80
      containers:
      - image: gcr.io/my_project/my_app
        ports:
        - containerPort: 8080
        env:
        - name: CLOUD_RUN_HOST
          value: my_app-u3ljntrlma-an.a.run.app
        resources:
          limits:
            cpu: 1000m
            memory: 2048Mi
status:
  conditions:
  - type: Ready
    status: 'True'
    lastTransitionTime: '2020-05-26T15:39:32.595Z'
  - type: ConfigurationsReady
    status: 'True'
    lastTransitionTime: '2020-05-26T15:39:25.640Z'
  - type: RoutesReady
    status: 'True'
    lastTransitionTime: '2020-05-26T15:39:32.595Z'
  observedGeneration: 176
  traffic:
  - revisionName: my_app-00176-wud
    percent: 100
    latestRevision: true
  latestReadyRevisionName: my_app-00176-wud
  latestCreatedRevisionName: my_app-00176-wud
  address:
    url: https://my_app-u3ljntrlma-an.a.run.app
  url: https://my_app-u3ljntrlma-an.a.run.app
Related
I have a simple NFS server (followed instructions here) connected to a Kubernetes (v1.24.2) cluster as a storage class. When a new PVC is created, it creates a PV as expected with a new directory on the NFS server.
The NFS provider was deployed as instructed here.
My issue is that containers don't seem to be able to perform all the functions they expect to when interacting with the NFS server. For example:
A PVC and PV are created with the following yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mssql-data
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
This creates a directory on the NFS server as expected.
Then this deployment is created to use the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              value: "Password123"
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
The server comes up and responds to requests but does so with the error:
[S0002][823] com.microsoft.sqlserver.jdbc.SQLServerException: The operating system returned error 1117(The request could not be performed because of an I/O device error.) to SQL Server during a read at offset 0x0000000009a000 in file '/var/opt/mssql/data/master.mdf'. Additional messages in the SQL Server error log and operating system error log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
My /etc/exports file has the following contents:
/srv *(rw,no_subtree_check,no_root_squash)
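For reference, changes to /etc/exports only take effect after re-exporting:
sudo exportfs -ra   # re-export everything listed in /etc/exports
sudo exportfs -v    # verify the active exports and their options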
When the SQL container starts, it doesn't undergo any container restarts, but the SQL service within the container appears to get into some sort of restart loop until a connection is attempted; it then throws the error above and appears to stop.
Is there something I'm missing in the /etc/exports file? I tried variations with sync, async, and insecure but can't seem to get past the SQL error.
I gather from the error that this has something to do with the container's ability to read/write from/to the disk. Am I in the right ballpark?
The config that ended up working was:
/srv *(rw,no_root_squash,insecure,sync,no_subtree_check)
This was after a reinstall of the cluster. No significant changes were made elsewhere, but it still seems like there may have been more to the issue than this one config.
Thanks for any help on this.
I'm running a Tanzu Kubernetes cluster, brand new, in a dev environment. I'm trying to install MS SQL Server 2019 and am hitting a wall with this error once I apply the manifest.
The SQL Server pod fails with this:
ltkc-workers-mpqdb-556696d6f6-rhpsw
Warning FailedMount 50s kubelet, sqltkc-workers-mpqdb-556696d6f6-rhpsw Unable to attach or mount volumes: unmounted volumes=[mssql-persistent-storage], unattached volumes=[default-token-qzt5k mssql-persistent-storage]: timed out waiting for the condition
Warning FailedAttachVolume 45s (x9 over 2m53s) attachdetach-controller AttachVolume.Attach failed for volume "pvc-697e8f96-a23b-4255-9b19-fa04aeed98ee" : rpc error: code = Internal desc = observed Error: "ServerFaultCode: NotAuthenticated" is set on the volume "fbc91ad5-b62e-4bec-8132-4f2d1c5160f0-697e8f96-a23b-4255-9b19-fa04aeed98ee" on virtualmachine "sqltkc-workers-mpqdb-556696d6f6-rhpsw"
The PV and PVC are both bound:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-697e8f96-a23b-4255-9b19-fa04aeed98ee 10Gi RWO Delete Bound default/mssql-data-claim pstore-high 67m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mssql-data-claim Bound pvc-697e8f96-a23b-4255-9b19-fa04aeed98ee 10Gi RWO pstore-high 67m
The deployment manifest is just what I downloaded from various tutorials on the web:
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        fsGroup: 1000
      restartPolicy: Always
      containers:
        - name: mssql
          resources:
            requests:
              memory: 8000Mi
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              value: VMware123!
          volumeMounts:
            - name: mssql-persistent-storage
              mountPath: /var/opt/mssql
      volumes:
        - name: mssql-persistent-storage
          persistentVolumeClaim:
            claimName: mssql-data-claim
Here is the PVC YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data-claim
spec:
  accessModes:
    - ReadWriteOnce
  # storageClassName: vsan-default-storage-policy
  storageClassName: pstore-high
  resources:
    requests:
      storage: 10Gi
The storage class exists. I have tried this with both the default vSAN storage class and others, and I always hit the same volume authentication issue.
I've searched high and low and can't find any related docs. I was hoping someone might know more.
Thanks so much!!
Thanks again for the help; our team was able to fix this. We found out that our vCenter root password had expired. Once we reset the password, our persistent volumes were able to mount to the containers without any errors. If you are running Tanzu, I highly suggest making sure your vCenter is fully up to date.
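For anyone else debugging a NotAuthenticated attach error, it can help to look at the vSphere CSI controller logs. The namespace, deployment, and container names below are assumptions typical of vSphere CSI installs, so adjust them to your cluster:
kubectl get pods -n vmware-system-csi
kubectl logs -n vmware-system-csi deploy/vsphere-csi-controller -c vsphere-csi-controller | grep -i auth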
I have a simple React JS application and it's using an environment variable (REACT_APP_BACKEND_SERVER_URL) defined in a .env file. Now I'm trying to deploy this application to minikube using Kubernetes.
This is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-ui
  template:
    metadata:
      name: test-ui-pod
      labels:
        app: test-ui
    spec:
      containers:
        - name: test-ui
          image: test-ui:1.0.2
          ports:
            - containerPort: 80
          env:
            - name: "REACT_APP_BACKEND_SERVER_URL"
              value: "http://127.0.0.1:59058"
When I run the application, it works, but REACT_APP_BACKEND_SERVER_URL still gives the value defined in the .env file, not the one I'm overriding. Can someone help me with this, please? How do I override the env variable using a Kubernetes deployment?
After starting the app with your deployment YAML and checking inside the container, I do see the environment variable set:
REACT_APP_BACKEND_SERVER_URL=http://127.0.0.1:59058
You can check this by running kubectl exec -it <pod-name> -- sh and then the env command.
So REACT_APP_BACKEND_SERVER_URL is there in the environment and available to your application. I suspect the issue is on the React side: Create React App inlines REACT_APP_* variables into the JavaScript bundle at build time, so a container environment variable set at run time cannot override a value that was already baked in when the image was built.
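One hypothetical way to handle this is to bake the value in at image build time instead. This sketch assumes your Dockerfile declares a matching ARG before running npm run build, and the image tag is made up for the example:
docker build \
    --build-arg REACT_APP_BACKEND_SERVER_URL=http://127.0.0.1:59058 \
    -t test-ui:1.0.3 .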
I get an error while trying to connect a React frontend web app to a Node.js Express API server in a Kubernetes cluster.
I can navigate in the browser to http://localhost:3000 and the web site works.
But I can't navigate to http://localhost:3008, which is expected (it should not be exposed).
My goal is to pass a REACT_APP_API_URL environment variable to the frontend in order to set the axios baseURL and establish communication between the frontend and its API server.
deploy-front.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: binomio/gbpd-front:k8s-3
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always
service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 3000
      targetPort: 3000
  type: LoadBalancer
deploy-back.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: binomio/gbpd-back:dev
          ports:
            - name: http
              containerPort: 3008
service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
    - protocol: "TCP"
      port: 3008
      targetPort: http
I tried many combinations, and also tried adding type: LoadBalancer to the back service, but nothing worked.
I can connect perfectly to localhost:3000 and use the frontend, but the frontend can't connect to the backend service.
Question 1: What IP/name should I use to pass REACT_APP_API_URL to the frontend correctly?
Question 2: Why is curl localhost:3008 not answering?
After 2 days of trying almost everything in the k8s official docs, I can't figure out what's happening here, so any help will be much appreciated.
kubectl describe svc gbpd-api
Response:
Name: gbpd-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"gbpd-api","namespace":"default"},"spec":{"ports":[{"port":3008,"p...
Selector: app=gbpd-api,tier=backend
Type: LoadBalancer
IP: 10.107.145.227
LoadBalancer Ingress: localhost
Port: <unset> 3008/TCP
TargetPort: http/TCP
NodePort: <unset> 31464/TCP
Endpoints: 10.1.1.48:3008,10.1.1.49:3008,10.1.1.50:3008
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I tested your environment, and it worked when using an Nginx image. Let's review the environment:
The front deployment is correctly described.
The front service exposes it as a LoadBalancer, meaning your frontend is accessible from outside; perfect.
The back deployment is also correctly described.
The backend service stays as a ClusterIP in order to be accessible only from inside the cluster; great.
Below I'll demonstrate the communication between frontend and backend.
I'm using the same YAMLs you provided, just with the image changed to Nginx for example purposes, and since it's an HTTP server I'm changing containerPort to 80.
Question 1: What IP/name should I use to pass REACT_APP_API_URL to the frontend correctly?
I added the ENV variable to the front deploy as requested, and I'll use it in the demonstration as well. You must use the service name in the curl command; I used the short version because we are working in the same namespace. You can also use the full name: http://gbpd-api.default.svc.cluster.local:3008
Reproduction:
I created the YAMLs and applied them:
$ cat deploy-front.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: nginx
          env:
            - name: REACT_APP_API_URL
              value: http://gbpd-api:3008
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always
$ cat service-front.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 3000
      targetPort: 80
  type: LoadBalancer
$ cat deploy-back.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: nginx
          ports:
            - name: http
              containerPort: 80
$ cat service-back.yaml
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
    - protocol: "TCP"
      port: 3008
      targetPort: http
$ kubectl apply -f deploy-front.yaml
deployment.apps/gbpd-front created
$ kubectl apply -f service-front.yaml
service/gbpd-front created
$ kubectl apply -f deploy-back.yaml
deployment.apps/gbpd-api created
$ kubectl apply -f service-back.yaml
service/gbpd-api created
Remember, in Kubernetes communication is designed to go through Services, because pods are recreated whenever there is a change in the deployment or when a pod fails.
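For example, the backend service name resolves through cluster DNS from any pod in the same namespace (assuming a DNS utility like nslookup is installed in the image):
nslookup gbpd-api                             # short name works within the same namespace
nslookup gbpd-api.default.svc.cluster.local   # fully qualified service name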
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/gbpd-api-dc5b4b74b-kktb9 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-mzpbg 1/1 Running 0 41m
pod/gbpd-api-dc5b4b74b-t6qxh 1/1 Running 0 41m
pod/gbpd-front-66b48f8b7c-4zstv 1/1 Running 0 30m
pod/gbpd-front-66b48f8b7c-h58ds 1/1 Running 0 31m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gbpd-api ClusterIP 10.0.10.166 <none> 3008/TCP 40m
service/gbpd-front LoadBalancer 10.0.11.78 35.223.4.218 3000:32411/TCP 42m
The pods are the workers, and since they are replaceable by nature, we will connect to a frontend pod to simulate its behaviour and try to connect to the backend service (which is the network layer that directs the traffic to one of the backend pods).
The Nginx image does not come with curl preinstalled, so I will have to install it for demonstration purposes:
$ kubectl exec -it pod/gbpd-front-66b48f8b7c-4zstv -- /bin/bash
root@gbpd-front-66b48f8b7c-4zstv:/# apt update && apt install curl -y
done.
root@gbpd-front-66b48f8b7c-4zstv:/# curl gbpd-api:3008
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Now let's try using the environment variable that was defined:
root@gbpd-front-66b48f8b7c-4zstv:/# printenv | grep REACT
REACT_APP_API_URL=http://gbpd-api:3008
root@gbpd-front-66b48f8b7c-4zstv:/# curl $REACT_APP_API_URL
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Considerations:
Question 2: Why is curl localhost:3008 not answering?
Since all the YAMLs are correctly described, you must check whether image: binomio/gbpd-back:dev is actually serving on port 3008 as intended.
Since it's not a public image, I can't test it, so I'll give you troubleshooting steps:
Just like we logged into the frontend pod, you will have to log into this backend pod and test curl localhost:3008.
If it's based on a Linux distro with apt-get, you can run the commands just like I did in my demo:
get the pod name from the backend deploy (example: gbpd-api-6676c7695c-6bs5n)
run kubectl exec -it pod/<POD_NAME> -- /bin/bash
then run apt update && apt install curl -y
and test curl localhost:3008
if there is no answer, run apt update && apt install net-tools -y
and test netstat -nlpt; it should show the services running and their respective ports, for example:
root@gbpd-api-585df9cb4d-xr6nk:/# netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1/nginx: master pro
If the pod does not return anything even with this approach, you will have to check the code in the image.
Let me know if you need help after that!
I am trying to provision a PVC for a Solr deployment in k8s and mount it as /opt/solr, which is the default Solr installation directory. This way I plan to keep both the Solr installation and the data under it on the PVC. However, while the storage gets provisioned just fine and the StatefulSet gets created, my deployment doesn't work because /opt/solr ends up empty. What is the proper way to do this? Here is my deployment.yaml:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: solr
  labels:
    app: solr
spec:
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.alpha.kubernetes.io/storage-class: slow
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
  serviceName: solr-svc
  replicas: 1
  template:
    metadata:
      labels:
        app: solr
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - solr-pool
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
        - name: solr
          image: solr:6.5.1
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              memory: 512M
              cpu: 500m
          ports:
            - containerPort: 8983
              name: solr-port
              protocol: TCP
          env:
            - name: VERBOSE
              value: "yes"
          command:
            - bash
            - -c
            - "exec /opt/solr/bin/solr start"
          volumeMounts:
            - name: solr-script
              mountPath: /docker-entrypoint-initdb.d/
            - name: datadir
              mountPath: /opt/solr/
      volumes:
        - name: solr-script
          configMap:
            name: solr-configs
      nodeSelector:
        pool: solr-pool
Provisioned storage is empty by default, and there might be a Delete reclaim policy on the provisioned storage, so be sure to check those configurations. You can also exec into your pod and examine the mounted volume to see whether it's working properly (permission issues, read-only file system).
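For example, since the StatefulSet above is named solr, its first pod is solr-0, and the mounted volume can be inspected with:
kubectl exec -it solr-0 -- ls -la /opt/solr        # see what actually ended up under the mount
kubectl exec -it solr-0 -- mount | grep /opt/solr  # confirm the volume and its mount options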
In my case there was a conflict between the Docker image configuration, which uses /opt/solr as the Solr install location, and my attempt to mount a separate PV at the same location. Once this PV is mounted, I obviously lose the Solr install. The fixes for this are (a sketch of the second one is below):
create another Docker image which uses a separate location
change the Solr config to use a different location for the data
change the PV mount location
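A minimal sketch of the second fix, keeping the image's /opt/solr install intact and mounting the PV only for the data. The /var/solr/data path and the SOLR_HOME variable are assumptions based on common docker-solr conventions, not verified against solr:6.5.1, so check them against your image:
          env:
            - name: SOLR_HOME
              value: /var/solr/data        # assumed data location outside the install dir
          volumeMounts:
            - name: datadir
              mountPath: /var/solr/data    # mount the PV here instead of /opt/solr/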