bosh deploy fails with Error 190014 (vSphere)

My BOSH version is 1.3232.0 and my platform is vSphere. I searched Google and the BOSH site; it may be related to the cloud-config opt-in, but I have no idea what to try next.
I created my own MongoDB release, and when I deploy with my manifest it throws Error 190014:
Director task 163
Started preparing deployment > Preparing deployment. Failed: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"] (00:00:00)
Error 190014: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"]
My manifest is:
---
name: mongodb3
director_uuid: d3df0341-4aeb-4706-940b-6f4681090af8
releases:
- name: mongodb
  version: latest
compilation:
  workers: 1
  reuse_compilation_vms: false
  network: default
  cloud_properties:
    cpu: 4
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf_z2
    disk: 20480
    ram: 4096
update:
  canaries: 1
  canary_watch_time: 15000-30000
  update_watch_time: 15000-30000
  max_in_flight: 1
networks:
- name: default
  type: manual
  subnets:
  - cloud_properties:
      name: VM Network
    range: 10.62.90.133/25
    gateway: 10.62.90.129
    static:
    - 10.62.90.140
    reserved:
    - 10.62.90.130 - 10.62.90.139
    - 10.62.90.151 - 10.62.90.254
    dns:
    - 10.254.174.10
    - 10.104.128.235
resource_pools:
- cloud_properties:
    cpu: 2
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf
    disk: 10480
    ram: 4096
  name: mongodb3
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: latest
jobs:
- name: mongodb3
  instances: 1
  templates:
  - {name: mongodb3, release: mongodb3}
  persistent_disk: 10_240
  resource_pools: mongodb3
  networks:
  - name: default

Solved: the Director already has a cloud config, so these parts (compilation, networks, resource_pools) have to be moved into a single separate file and uploaded to BOSH as the cloud config, instead of staying in the deployment manifest.
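For illustration, a minimal sketch of that split, assuming the classic Ruby CLI that ships with a 1.3232.0 Director and assuming the file names cloud-config.yml and mongodb3.yml (both names are mine, not from the post): the compilation, networks and VM-sizing sections live in the cloud-config file uploaded to the Director, while the deployment manifest keeps only name, director_uuid, releases, update and jobs. Note that in the cloud config schema the resource-pool sizing is usually expressed as vm_types, which the jobs then reference by name.
# upload the extracted cloud-config sections (compilation, networks, VM sizing)
$ bosh update cloud-config cloud-config.yml
# then deploy the slimmed-down deployment manifest as usual
$ bosh deployment mongodb3.yml
$ bosh deploy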

Related

(MongoSocketOpenException): Exception opening socket | Unknown host: wisdb1

I am trying to connect to a MongoDB replica set (3 nodes) running with Docker.
This is my docker-compose file; I renamed all services from e.g. "mongo1" to "mongoa1" because I have a second app with the same config file:
version: "3.8"
services:
  mongoa1:
    image: mongo:4
    container_name: mongoa1
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30001"]
    volumes:
      - ./data/mongoa-1:/data/db
    ports:
      - 30001:30001
    healthcheck:
      test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongoa1:30001\"},{_id:1,host:\"mongoa2:30002\"},{_id:2,host:\"mongoa3:30003\"}]}).ok || rs.status().ok" | mongo --port 30001 --quiet) -eq 1
      interval: 10s
      start_period: 30s
  mongoa2:
    image: mongo:4
    container_name: mongoa2
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30002"]
    volumes:
      - ./data/mongoa-2:/data/db
    ports:
      - 30002:30002
  mongoa3:
    image: mongo:4
    container_name: mongoa3
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30003"]
    volumes:
      - ./data/mongoa-3:/data/db
    ports:
      - 30003:30003
The containers are running:
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fb1fcab13804 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp, 0.0.0.0:30003->30003/tcp mongoa3
72f8cfe217a5 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes (healthy) 27017/tcp, 0.0.0.0:30001->30001/tcp mongoa1
2a61246f5d17 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp, 0.0.0.0:30002->30002/tcp mongoa2
I want to connect with Studio 3T but I get the following error:
Db path: mongodb://mongoa1:30001,mongoa2:30002,mongoa3:30003/app?replicaSet=my-replica-set
Connection failed.
SERVER [mongoa1:30001] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa1
SERVER [mongoa2:30002] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa2
SERVER [mongoa3:30003] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa3
Details:
Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@4d84ad57. Client view of cluster state is {type=REPLICA_SET, servers=[{address=mongoa1:30001, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa1}}, {address=mongoa2:30002, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa2}}, {address=mongoa3:30003, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa3}}]
I don't understand what's wrong. When I rename "mongoa1" back to "mongo1" it works, but then I always have to delete the other Docker app, which I don't want to do. What's wrong in my config?
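The post does not include an accepted fix, but as a hedged reading: the UnknownHostException comes from the machine running Studio 3T, which cannot resolve the container names mongoa1/mongoa2/mongoa3 advertised by the replica set. One common workaround (an assumption on my part, not from the post) is to map those names to the loopback address on that machine, which can work here because every member publishes a distinct port:
# /etc/hosts on the machine running Studio 3T (hypothetical workaround, not from the post)
127.0.0.1   mongoa1 mongoa2 mongoa3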

Zeppelin k8s: change interpreter pod configuration

I've configured Zeppelin on Kubernetes using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zeppelin
  labels: [...]
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zeppelin
      app.kubernetes.io/instance: zeppelin
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zeppelin
        app.kubernetes.io/instance: zeppelin
    spec:
      serviceAccountName: zeppelin
      containers:
        - name: zeppelin
          image: "apache/zeppelin:0.9.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          [...]
          env:
            - name: ZEPPELIN_PORT
              value: "8080"
            - name: ZEPPELIN_K8S_CONTAINER_IMAGE
              value: apache/zeppelin:0.9.0
            - name: ZEPPELIN_RUN_MODE
              value: k8s
            - name: ZEPPELIN_K8S_SPARK_CONTAINER_IMAGE
              value: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
When a new paragraph job is run, Zeppelin, since it is running in k8s mode, creates an interpreter pod:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
spark-ghbvld 0/1 Completed 0 9m -----<<<<<<<
spark-master-0 1/1 Running 0 38m
spark-worker-0 1/1 Running 0 38m
zeppelin-6cc658d59f-gk2lp 1/1 Running 0 24m
In short, this pod first copies the Spark home folder from ZEPPELIN_K8S_SPARK_CONTAINER_IMAGE into the main container (via an init container) and then runs the interpreter.
The problem arises here:
I'm getting this error message in the created pod:
Interpreter launch command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dfile.encoding=UTF-8 -Dlog4j.configuration=file:///zeppelin/conf/log4j.properties -Dzeppelin.log.file='/zeppelin/logs/zeppelin-interpreter-spark-shared_process--spark-ghbvld.log' -Xms1024m -Xmx2048m -XX:MaxPermSize=512m -cp ":/zeppelin/interpreter/spark/dep/*:/zeppelin/interpreter/spark/*::/zeppelin/interpreter/zeppelin-interpreter-shaded-0.9.0-preview1.jar" org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc 36161 "spark-shared_process" 12321:12321
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/zeppelin/interpreter/spark/dep/zeppelin-spark-dependencies-0.9.0-preview1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/zeppelin/interpreter/spark/spark-interpreter-0.9.0-preview1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
WARN [2020-06-05 06:35:05,694] ({main} ZeppelinConfiguration.java[create]:159) - Failed to load configuration, proceeding with a default
INFO [2020-06-05 06:35:05,745] ({main} ZeppelinConfiguration.java[create]:171) - Server Host: 0.0.0.0
Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://172.30.203.33:80"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getInt(ZeppelinConfiguration.java:248)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getInt(ZeppelinConfiguration.java:243)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getServerPort(ZeppelinConfiguration.java:327)
at org.apache.zeppelin.conf.ZeppelinConfiguration.create(ZeppelinConfiguration.java:173)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.<init>(RemoteInterpreterServer.java:144)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.<init>(RemoteInterpreterServer.java:152)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.main(RemoteInterpreterServer.java:321)
As you can see, the main problem is:
Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://172.30.203.33:80"
I've tried to add the zeppelin.server.port property to the interpreter configuration using the Zeppelin web frontend (Interpreters -> Spark Interpreter -> add property).
However, the problem persists.
Any ideas on how to override zeppelin.server.port, or ZEPPELIN_PORT, in the generated interpreter pod?
I also dumped the interpreter pod manifest created by Zeppelin:
$ kubectl get pods -o=yaml spark-ghbvld
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"spark-ghbvld","interpreterGroupId":"spark-shared_process","interpreterSettingName":"spark"},"name":"spark-ghbvld","namespace":"ra-iot-dev"},"spec":{"automountServiceAccountToken":true,"containers":[{"command":["sh","-c","$(ZEPPELIN_HOME)/bin/interpreter.sh -d $(ZEPPELIN_HOME)/interpreter/spark -r 12321:12321 -c zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc -p 36161 -i spark-shared_process -l /tmp/local-repo -g spark"],"env":[{"name":"PYSPARK_PYTHON","value":"python"},{"name":"PYSPARK_DRIVER_PYTHON","value":"python"},{"name":"SERVICE_DOMAIN","value":null},{"name":"ZEPPELIN_HOME","value":"/zeppelin"},{"name":"INTERPRETER_GROUP_ID","value":"spark-shared_process"},{"name":"SPARK_HOME","value":null}],"image":"apache/zeppelin:0.9.0","lifecycle":{"preStop":{"exec":{"command":["sh","-c","ps -ef | grep org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer | grep -v grep | awk '{print $2}' | xargs kill"]}}},"name":"spark","volumeMounts":[{"mountPath":"/spark","name":"spark-home"}]}],"initContainers":[{"command":["sh","-c","cp -r /opt/spark/* /spark/"],"image":"docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5","name":"spark-home-init","volumeMounts":[{"mountPath":"/spark","name":"spark-home"}]}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"volumes":[{"emptyDir":{},"name":"spark-home"}]}}
    openshift.io/scc: anyuid
  creationTimestamp: "2020-06-05T06:34:36Z"
  labels:
    app: spark-ghbvld
    interpreterGroupId: spark-shared_process
    interpreterSettingName: spark
  name: spark-ghbvld
  namespace: ra-iot-dev
  resourceVersion: "224863130"
  selfLink: /api/v1/namespaces/ra-iot-dev/pods/spark-ghbvld
  uid: a04a0d70-a6f6-11ea-9e39-0050569f5f65
spec:
  automountServiceAccountToken: true
  containers:
  - command:
    - sh
    - -c
    - $(ZEPPELIN_HOME)/bin/interpreter.sh -d $(ZEPPELIN_HOME)/interpreter/spark -r
      12321:12321 -c zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc -p 36161 -i spark-shared_process
      -l /tmp/local-repo -g spark
    env:
    - name: PYSPARK_PYTHON
      value: python
    - name: PYSPARK_DRIVER_PYTHON
      value: python
    - name: SERVICE_DOMAIN
    - name: ZEPPELIN_HOME
      value: /zeppelin
    - name: INTERPRETER_GROUP_ID
      value: spark-shared_process
    - name: SPARK_HOME
    image: apache/zeppelin:0.9.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - sh
          - -c
          - ps -ef | grep org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer
            | grep -v grep | awk '{print $2}' | xargs kill
    name: spark
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /spark
      name: spark-home
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n4lpw
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-qs7sj
  initContainers:
  - command:
    - sh
    - -c
    - cp -r /opt/spark/* /spark/
    image: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
    imagePullPolicy: IfNotPresent
    name: spark-home-init
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /spark
      name: spark-home
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n4lpw
      readOnly: true
  nodeName: node2.si-origin-cluster.t-systems.es
  nodeSelector:
    region: primary
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    seLinuxOptions:
      level: s0:c30,c0
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: spark-home
  - name: default-token-n4lpw
    secret:
      defaultMode: 420
      secretName: default-token-n4lpw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:35:03Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:35:07Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:34:37Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://8c3977241a20be1600180525e4f8b737c8dc5954b6dc0826a7fc703ff6020a70
    image: docker.io/apache/zeppelin:0.9.0
    imageID: docker-pullable://docker.io/apache/zeppelin@sha256:0691909f6884319d366f5d3a5add8802738d6240a83b2e53e980caeb6c658092
    lastState: {}
    name: spark
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://8c3977241a20be1600180525e4f8b737c8dc5954b6dc0826a7fc703ff6020a70
        exitCode: 0
        finishedAt: "2020-06-05T06:35:05Z"
        reason: Completed
        startedAt: "2020-06-05T06:35:05Z"
  hostIP: 10.49.160.21
  initContainerStatuses:
  - containerID: docker://34701d70eec47367a928dc382326014c76fc49c95be92562e68911f36b4c6242
    image: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
    imageID: docker-pullable://docker-registry.default.svc:5000/ra-iot-dev/spark@sha256:1cbcdacbcc55b2fc97795a4f051429f69ff3666abbd936e08e180af93a11ab65
    lastState: {}
    name: spark-home-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://34701d70eec47367a928dc382326014c76fc49c95be92562e68911f36b4c6242
        exitCode: 0
        finishedAt: "2020-06-05T06:35:02Z"
        reason: Completed
        startedAt: "2020-06-05T06:35:02Z"
  phase: Succeeded
  podIP: 10.131.0.203
  qosClass: BestEffort
  startTime: "2020-06-05T06:34:37Z"
ENVIRONMENT VARIABLES:
PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=spark-xonray2
PYSPARK_PYTHON=python
PYSPARK_DRIVER_PYTHON=python
SERVICE_DOMAIN=
ZEPPELIN_HOME=/zeppelin
INTERPRETER_GROUP_ID=spark-shared_process
SPARK_HOME=
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
MONGODB_PORT_27017_TCP_PORT=27017
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.50.211
ZEPPELIN_PORT_80_TCP_ADDR=172.30.57.29
MONGODB_PORT=tcp://172.30.240.109:27017
MONGODB_PORT_27017_TCP=tcp://172.30.240.109:27017
SPARK_MASTER_SVC_PORT_7077_TCP_PROTO=tcp
SPARK_MASTER_SVC_PORT_7077_TCP_ADDR=172.30.88.254
SPARK_MASTER_SVC_PORT_80_TCP=tcp://172.30.88.254:80
MONGODB_PORT_27017_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT=tcp://172.30.235.145:9094
KAFKA_PORT_9092_TCP=tcp://172.30.164.40:9092
KUBERNETES_PORT_53_UDP_PROTO=udp
ZOOKEEPER_PORT_2888_TCP=tcp://172.30.222.17:2888
ZEPPELIN_PORT_80_TCP=tcp://172.30.57.29:80
ZEPPELIN_PORT_80_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.133.154
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.245.33:1
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.117.125:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ORION_PORT_80_TCP_ADDR=172.30.55.76
SPARK_MASTER_SVC_PORT_7077_TCP_PORT=7077
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.229.165
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT_9094_TCP_ADDR=172.30.235.145
KAFKA_PORT_9092_TCP_PORT=9092
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.245.33
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ZOOKEEPER_SERVICE_HOST=172.30.222.17
ZEPPELIN_SERVICE_PORT=80
KAFKA_0_EXTERNAL_SERVICE_PORT=9094
GREENPLUM_SERVICE_PORT_HTTP=5432
KAFKA_0_EXTERNAL_SERVICE_HOST=172.30.235.145
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=172.30.0.1
ZOOKEEPER_PORT_2181_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP_PORT=3888
ORION_PORT_80_TCP_PORT=80
MONGODB_SERVICE_PORT_MONGODB=27017
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
ZOOKEEPER_PORT_2888_TCP_ADDR=172.30.222.17
SPARK_MASTER_SVC_SERVICE_PORT_HTTP=80
GREENPLUM_SERVICE_PORT=5432
GREENPLUM_PORT_5432_TCP_PORT=5432
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_SERVICE_PORT=1
ZOOKEEPER_PORT_3888_TCP=tcp://172.30.222.17:3888
ZOOKEEPER_PORT_3888_TCP_PROTO=tcp
MONGODB_SERVICE_PORT=27017
KAFKA_SERVICE_PORT_TCP_CLIENT=9092
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.50.211
ZOOKEEPER_SERVICE_PORT_TCP_CLIENT=2181
ZOOKEEPER_SERVICE_PORT_FOLLOWER=2888
KAFKA_SERVICE_PORT=9092
SPARK_MASTER_SVC_PORT_80_TCP_PORT=80
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.50.211:1
ORION_SERVICE_HOST=172.30.55.76
KAFKA_PORT_9092_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP_ADDR=172.30.222.17
ZEPPELIN_SERVICE_PORT_HTTP=80
ORION_PORT_80_TCP=tcp://172.30.55.76:80
GREENPLUM_PORT_5432_TCP_PROTO=tcp
SPARK_MASTER_SVC_PORT=tcp://172.30.88.254:7077
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.178.127
MONGODB_SERVICE_HOST=172.30.240.109
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.245.33
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT=tcp://172.30.178.127:1
ORION_PORT=tcp://172.30.55.76:80
GREENPLUM_PORT_5432_TCP_ADDR=172.30.0.147
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.229.165:1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT=tcp://172.30.50.211:1
ORION_SERVICE_PORT=80
ORION_PORT_80_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT_9094_TCP=tcp://172.30.235.145:9094
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT=tcp://172.30.167.19:1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.229.165
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_SERVICE_PORT_TCP_KAFKA=9094
KAFKA_0_EXTERNAL_PORT_9094_TCP_PROTO=tcp
SPARK_MASTER_SVC_SERVICE_HOST=172.30.88.254
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_53_UDP_PORT=53
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.178.127:1
ZEPPELIN_SERVICE_HOST=172.30.57.29
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_SERVICE_PORT=1
SPARK_MASTER_SVC_PORT_80_TCP_ADDR=172.30.88.254
KUBERNETES_PORT=tcp://172.30.0.1:443
ZOOKEEPER_PORT_2181_TCP_PORT=2181
ZOOKEEPER_PORT_2888_TCP_PROTO=tcp
SPARK_MASTER_SVC_SERVICE_PORT=7077
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT=tcp://172.30.245.33:1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT=tcp://172.30.229.165:1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
ZOOKEEPER_SERVICE_PORT_TCP_ELECTION=3888
ZOOKEEPER_PORT=tcp://172.30.222.17:2181
ZOOKEEPER_PORT_2181_TCP_ADDR=172.30.222.17
SPARK_MASTER_SVC_PORT_7077_TCP=tcp://172.30.88.254:7077
KUBERNETES_SERVICE_PORT_DNS=53
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
ZEPPELIN_PORT_80_TCP_PORT=80
KAFKA_0_EXTERNAL_PORT_9094_TCP_PORT=9094
GREENPLUM_SERVICE_HOST=172.30.0.147
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT=tcp://172.30.117.125:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT=tcp://172.30.133.154:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.133.154
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.167.19
KUBERNETES_PORT_53_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_SERVICE_PORT=1
ORION_SERVICE_PORT_HTTP=80
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.167.19:1
SPARK_MASTER_SVC_SERVICE_PORT_CLUSTER=7077
KAFKA_SERVICE_HOST=172.30.164.40
GREENPLUM_PORT=tcp://172.30.0.147:5432
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.117.125
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.178.127
ZEPPELIN_PORT=tcp://172.30.57.29:80
KAFKA_PORT=tcp://172.30.164.40:9092
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_53_TCP_PORT=53
SPARK_MASTER_SVC_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT=443
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.117.125
MONGODB_PORT_27017_TCP_ADDR=172.30.240.109
GREENPLUM_PORT_5432_TCP=tcp://172.30.0.147:5432
KUBERNETES_PORT_443_TCP_PROTO=tcp
KAFKA_PORT_9092_TCP_ADDR=172.30.164.40
ZOOKEEPER_SERVICE_PORT=2181
ZOOKEEPER_PORT_2181_TCP=tcp://172.30.222.17:2181
ZOOKEEPER_PORT_2888_TCP_PORT=2888
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.133.154:1
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.167.19
Z_VERSION=0.9.0-preview1
LOG_TAG=[ZEPPELIN_0.9.0-preview1]:
Z_HOME=/zeppelin
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
ZEPPELIN_ADDR=0.0.0.0
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
HOME=/
Judging by the ENVIRONMENT VARIABLES values, you also have a Service whose metadata name is 'spark-master'. Change that name, for example:
apiVersion: v1
kind: Service
metadata:
  name: "master-spark-service"
spec:
  ports:
  - name: spark
    port: 7077
    targetPort: 7077
  selector:
    component: "spark-master"
  type: ClusterIP
In this case, Kubernetes will not override the port value.
ZEPPELIN_PORT is set by Kubernetes service discovery, because your pod/service name is zeppelin!
Just change the pod/service name to something else, or disable the discovery env variables (see https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service); this is just enableServiceLinks: false in your Zeppelin pod template definition.
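For reference, a minimal sketch of where that flag sits in the pod template of the Deployment from the question (only the relevant fields are shown; the rest of the spec stays as it was):
spec:
  template:
    spec:
      enableServiceLinks: false   # stops Kubernetes from injecting ZEPPELIN_PORT=tcp://... style service variables
      serviceAccountName: zeppelin
      containers:
        - name: zeppelin
          image: "apache/zeppelin:0.9.0"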

How to keep the latest version separate in Cloud Storage with cloudbuild.yaml

My cloudbuild.yaml consists of:
steps:
- name: maven:3.6.0-jdk-8-slim
  entrypoint: 'mvn'
  args: ["clean","install","-PgenericApiSuite","-pl", "api-testing", "-am", "-B"]
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/$BUILD_ID']
But every time it runs, my bucket just accumulates reports under their build IDs.
Is there a way I can keep the latest report separate from the rest?
Sadly, symbolic links don't exist in Cloud Storage. To achieve what you want, you have to handle this manually with these two steps at the end of your job:
# delete the previous existing latest directory
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'rm', '-r', 'gs://testing-reports/latest']
# copy the most recent file into the latest directory
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', 'gs://testing-reports/$BUILD_ID', 'gs://testing-reports/latest']
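One caveat (my own addition, not part of the answer above): on the very first build gs://testing-reports/latest does not exist yet, and a failing gsutil rm aborts the build. If that matters, the delete step can be wrapped in a shell so a missing path is tolerated, assuming bash is available in the builder image:
- name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args: ['-c', 'gsutil -m rm -r gs://testing-reports/latest || true']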

How to fix 'pod has unbound immediate PersistentVolumeClaims' in SQL 2019 Big Data?

I want to set up simple persistent storage on Kubernetes for SQL Server 2019 Big Data on-prem, but it keeps throwing the event 'pod has unbound immediate PersistentVolumeClaims'.
When I deploy the image, the mssql-controller pod shows this event:
Name: mssql-controller-6vd8b
Namespace: sql2019
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: MSSQL_CLUSTER=sql2019
app=mssql-controller
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/mssql-controller
Containers:
mssql-controller:
Image: private-repo.microsoft.com/mssql-private-preview/mssql-controller:latest
Port: 8081/TCP
Host Port: 0/TCP
Environment:
ACCEPT_EULA: Y
CONTROLLER_ENABLE_TDS_PROXY: false
KUBERNETES_NAMESPACE: sql2019 (v1:metadata.namespace)
Mounts:
/root/secrets/controller-db from controller-db-secret (ro)
/root/secrets/controller-login from controller-login-secret (ro)
/root/secrets/knox from controller-knox-secret (ro)
/root/secrets/node-admin-login from node-admin-login-secret (ro)
/var/opt from controller-storage (rw)
/var/opt/controller/config from controller-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sa-mssql-controller-token-4fsbc (ro)
mssql-portal:
Image: private-repo.microsoft.com/mssql-private-preview/mssql-portal:latest
Port: 6001/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/var/opt from controller-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sa-mssql-controller-token-4fsbc (ro)
mssql-server-controller:
Image: private-repo.microsoft.com/mssql-private-preview/mssql-server-controller:latest
Port: 1433/TCP
Host Port: 0/TCP
Environment:
ACCEPT_EULA: Y
SA_PASSWORD: <password removed>
Mounts:
/var/opt from controller-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sa-mssql-controller-token-4fsbc (ro)
mssql-monitor-fluentbit:
Image: private-repo.microsoft.com/mssql-private-preview/mssql-monitor-fluentbit:latest
Port: 2020/TCP
Host Port: 0/TCP
Limits:
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Environment:
FLUENT_ELASTICSEARCH_HOST: service-monitor-elasticsearch
FLUENT_ELASTICSEARCH_PORT: 9200
FLUENTBIT_CONFIG: fluentbit-controller.conf
KUBERNETES_NAMESPACE: sql2019 (v1:metadata.namespace)
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
KUBERNETES_POD_NAME: mssql-controller-6vd8b (v1:metadata.name)
Mounts:
/var/opt from controller-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sa-mssql-controller-token-4fsbc (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
controller-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mssql-controller-pvc
ReadOnly: false
controller-login-secret:
Type: Secret (a volume populated by a Secret)
SecretName: controller-login-secret
Optional: false
controller-db-secret:
Type: Secret (a volume populated by a Secret)
SecretName: controller-db-secret
Optional: false
controller-knox-secret:
Type: Secret (a volume populated by a Secret)
SecretName: controller-knox-secret
Optional: false
node-admin-login-secret:
Type: Secret (a volume populated by a Secret)
SecretName: node-admin-login-secret
Optional: false
controller-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mssql-controller-config
Optional: false
sa-mssql-controller-token-4fsbc:
Type: Secret (a volume populated by a Secret)
SecretName: sa-mssql-controller-token-4fsbc
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 1s (x6 over 4s) default-scheduler pod has unbound immediate PersistentVolumeClaims
Cluster config:
export USE_PERSISTENT_VOLUME=true
export STORAGE_CLASS_NAME=slow
StorageClass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
I'm quite new to Kubernetes, but these are the things I know:
I need dynamic volume provisioning.
I need to create a StorageClass (SQL 2019 creates the PersistentVolume and PersistentVolumeClaim itself).
In my case it occurred while using minikube, but I believe it can occur on any cloud provider.
The reason is that one or more PVCs request more storage in their spec than is available.
Check your:
...
resources:
  requests:
    storage: <value must be <= the available space>
I hope this helps someone.
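As a quick way to check this (my own addition, not part of the original answer), you can inspect the claim itself; the namespace and claim name below are the ones from the question:
# is the claim Bound or still Pending?
$ kubectl get pvc -n sql2019
# the events at the bottom usually say why provisioning failed
# (no matching provisioner, insufficient capacity, wrong StorageClass, ...)
$ kubectl describe pvc mssql-controller-pvc -n sql2019
# confirm the StorageClass the claim points at actually exists
$ kubectl get storageclass slow -o yaml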

Mongo transaction exception when using the latest Spring Data Mongo reactive

When I tried the transaction feature with Mongo 4 and the latest Spring Data MongoDB Reactive, I got a failure like this:
18:57:22.823 [main] ERROR org.mongodb.driver.client - Callback onResult call produced an error
reactor.core.Exceptions$ErrorCallbackNotImplemented: com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
Caused by: com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
at com.mongodb.async.client.MongoClientImpl$1.onResult(MongoClientImpl.java:90)
at com.mongodb.async.client.MongoClientImpl$1.onResult(MongoClientImpl.java:83)
at com.mongodb.async.client.ClientSessionHelper$2.onResult(ClientSessionHelper.java:80)
at com.mongodb.async.client.ClientSessionHelper$2.onResult(ClientSessionHelper.java:73)
at com.mongodb.internal.connection.BaseCluster$ServerSelectionRequest.onResult(BaseCluster.java:433)
at com.mongodb.internal.connection.BaseCluster.handleServerSelectionRequest(BaseCluster.java:297)
at com.mongodb.internal.connection.BaseCluster.selectServerAsync(BaseCluster.java:157)
at com.mongodb.internal.connection.SingleServerCluster.selectServerAsync(SingleServerCluster.java:41)
at com.mongodb.async.client.ClientSessionHelper.createClientSession(ClientSessionHelper.java:68)
at com.mongodb.async.client.MongoClientImpl.startSession(MongoClientImpl.java:83)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl$1.apply(MongoClientImpl.java:153)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl$1.apply(MongoClientImpl.java:150)
at com.mongodb.async.client.SingleResultCallbackSubscription.requestInitialData(SingleResultCallbackSubscription.java:38)
at com.mongodb.async.client.AbstractSubscription.tryRequestInitialData(AbstractSubscription.java:153)
at com.mongodb.async.client.AbstractSubscription.request(AbstractSubscription.java:84)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher$1$1.request(ObservableToPublisher.java:50)
at reactor.core.publisher.MonoNext$NextSubscriber.request(MonoNext.java:102)
at reactor.core.publisher.MonoProcessor.onSubscribe(MonoProcessor.java:399)
at reactor.core.publisher.MonoNext$NextSubscriber.onSubscribe(MonoNext.java:64)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher$1.onSubscribe(ObservableToPublisher.java:39)
at com.mongodb.async.client.SingleResultCallbackSubscription.<init>(SingleResultCallbackSubscription.java:33)
at com.mongodb.async.client.Observables$2.subscribe(Observables.java:76)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher.subscribe(ObservableToPublisher.java:36)
at reactor.core.publisher.MonoFromPublisher.subscribe(MonoFromPublisher.java:43)
at reactor.core.publisher.Mono.subscribe(Mono.java:3555)
at reactor.core.publisher.MonoProcessor.add(MonoProcessor.java:531)
at reactor.core.publisher.MonoProcessor.subscribe(MonoProcessor.java:444)
at reactor.core.publisher.MonoFlatMapMany.subscribe(MonoFlatMapMany.java:49)
at reactor.core.publisher.Flux.subscribe(Flux.java:7677)
at reactor.core.publisher.Flux.subscribeWith(Flux.java:7841)
at reactor.core.publisher.Flux.subscribe(Flux.java:7670)
at reactor.core.publisher.Flux.subscribe(Flux.java:7634)
at com.example.demo.DataInitializer.init(DataInitializer.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.context.event.ApplicationListenerMethodAdapter.doInvoke(ApplicationListenerMethodAdapter.java:261)
at org.springframework.context.event.ApplicationListenerMethodAdapter.processEvent(ApplicationListenerMethodAdapter.java:180)
at org.springframework.context.event.ApplicationListenerMethodAdapter.onApplicationEvent(ApplicationListenerMethodAdapter.java:142)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:398)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:355)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:884)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:551)
at org.springframework.context.annotation.AnnotationConfigApplicationContext.<init>(AnnotationConfigApplicationContext.java:88)
at com.example.demo.Application.main(Application.java:24)
I used an initialization class to insert the initial data:
@Component
@Slf4j
class DataInitializer {

    private final ReactiveMongoOperations mongoTemplate;

    public DataInitializer(ReactiveMongoOperations mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @EventListener(value = ContextRefreshedEvent.class)
    public void init() {
        log.info("start data initialization ...");
        this.mongoTemplate.inTransaction()
            .execute(
                s ->
                    Flux
                        .just("Post one", "Post two")
                        .flatMap(
                            title -> s.insert(Post.builder().title(title).content("content of " + title).build())
                        )
            )
            .subscribe(
                null,
                null,
                () -> log.info("done data initialization...")
            );
    }
}
The subscribe call caused this exception.
The source code is pushed to my GitHub; I just replaced the content of DataInitializer with the new mongoTemplate.inTransaction() API.
PS: I used the latest Mongo in a Docker container to serve the MongoDB service; at the time that was 4.0.1. The Docker console shows:
mongodb_1 | 2018-08-20T15:56:04.434+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1 | 2018-08-20T15:56:04.447+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=12635c1c3d2d
mongodb_1 | 2018-08-20T15:56:04.447+0000 I CONTROL [initandlisten] db version v4.0.1
mongodb_1 | 2018-08-20T15:56:04.448+0000 I CONTROL [initandlisten] git version: 54f1582fc6eb01de4d4c42f26fc133e623f065fb
UPDATE: I then tried to start up the Mongo servers as a replica set via a Docker Compose file:
version: "3"
services:
  mongo1:
    hostname: mongo1
    container_name: localmongo1
    image: mongo:4.0-xenial
    ports:
      - "27017:27017"
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo2:
    hostname: mongo2
    container_name: localmongo2
    image: mongo:4.0-xenial
    ports:
      - "27018:27017"
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo3:
    hostname: mongo3
    container_name: localmongo3
    image: mongo:4.0-xenial
    ports:
      - "27019:27017"
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
And changed the Mongo URI string to:
mongodb://localhost:27017,localhost:27018,localhost:27019/blog
Then I got a failure like this:
11:08:20.845 [main] INFO org.mongodb.driver.cluster - No server chosen by com.mongodb.async.client.ClientSessionHelper$1@796d3c9f from cluster description ClusterDescription{type=UNKNOWN,
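The post ends here. As a hedged reading rather than part of the original question: a ClusterDescription stuck at type=UNKNOWN, together with the earlier "Sessions are not supported" error, usually means the members were started with --replSet but the replica set was never initiated, or the member hostnames in the set configuration are not resolvable from the application. A minimal sketch of initiating the set from inside one of the containers defined above:
$ docker exec -it localmongo1 mongo --eval 'rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo1:27017" },
      { _id: 1, host: "mongo2:27017" },
      { _id: 2, host: "mongo3:27017" }
    ]
  })'
The member host names in that configuration must also be resolvable and reachable from the machine running the Spring application, otherwise server selection will still fail.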
