(MongoSocketOpenException): Exception opening socket | Unknown host: wisdb1 - database

I'm trying to connect to a MongoDB replica set (3 nodes) running in Docker.
This is my docker-compose file. I renamed all services from e.g. "mongo1" to "mongoa1" because I have a second app with the same config file:
version: "3.8"
services:
  mongoa1:
    image: mongo:4
    container_name: mongoa1
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30001"]
    volumes:
      - ./data/mongoa-1:/data/db
    ports:
      - 30001:30001
    healthcheck:
      test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongoa1:30001\"},{_id:1,host:\"mongoa2:30002\"},{_id:2,host:\"mongoa3:30003\"}]}).ok || rs.status().ok" | mongo --port 30001 --quiet) -eq 1
      interval: 10s
      start_period: 30s
  mongoa2:
    image: mongo:4
    container_name: mongoa2
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30002"]
    volumes:
      - ./data/mongoa-2:/data/db
    ports:
      - 30002:30002
  mongoa3:
    image: mongo:4
    container_name: mongoa3
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30003"]
    volumes:
      - ./data/mongoa-3:/data/db
    ports:
      - 30003:30003
The containers are running:
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fb1fcab13804 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp, 0.0.0.0:30003->30003/tcp mongoa3
72f8cfe217a5 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes (healthy) 27017/tcp, 0.0.0.0:30001->30001/tcp mongoa1
2a61246f5d17 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp, 0.0.0.0:30002->30002/tcp mongoa2
I want to open Studio 3T, but I get the following error:
Db path: mongodb://mongoa1:30001,mongoa2:30002,mongoa3:30003/app?replicaSet=my-replica-set
Connection failed.
SERVER [mongoa1:30001] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa1
SERVER [mongoa2:30002] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa2
SERVER [mongoa3:30003] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa3
Details:
Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@4d84ad57. Client view of cluster state is {type=REPLICA_SET, servers=[{address=mongoa1:30001, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa1}}, {address=mongoa2:30002, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa2}}, {address=mongoa3:30003, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa3}}]
I don't understand what's wrong. When I rename "mongoa1" back to "mongo1" it works, but then I always have to delete the other Docker app, which I don't want to do. What's wrong in my config?
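A likely cause, offered as an assumption: Studio 3T runs on the host machine, and the replica set advertises its members as mongoa1:30001, mongoa2:30002 and mongoa3:30003, so those hostnames must also resolve on the host. If the old mongo1/mongo2/mongo3 names worked, they were presumably already mapped there (e.g. in /etc/hosts). A minimal sketch of the matching host entries:

# /etc/hosts on the host machine (hypothetical fix: make the advertised
# replica-set member names resolvable outside the compose network)
127.0.0.1 mongoa1
127.0.0.1 mongoa2
127.0.0.1 mongoa3

Since each member publishes its container port 1:1 (30001-30003), pointing all three names at 127.0.0.1 should be enough for a host-side client.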

Related

Zeppelin k8s: change interpreter pod configuration

I've configured my Zeppelin on Kubernetes using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zeppelin
  labels: [...]
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zeppelin
      app.kubernetes.io/instance: zeppelin
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zeppelin
        app.kubernetes.io/instance: zeppelin
    spec:
      serviceAccountName: zeppelin
      containers:
        - name: zeppelin
          image: "apache/zeppelin:0.9.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          [...]
          env:
            - name: ZEPPELIN_PORT
              value: "8080"
            - name: ZEPPELIN_K8S_CONTAINER_IMAGE
              value: apache/zeppelin:0.9.0
            - name: ZEPPELIN_RUN_MODE
              value: k8s
            - name: ZEPPELIN_K8S_SPARK_CONTAINER_IMAGE
              value: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
When a new paragraph job runs, Zeppelin (since it is running in k8s mode) creates a pod:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
spark-ghbvld 0/1 Completed 0 9m -----<<<<<<<
spark-master-0 1/1 Running 0 38m
spark-worker-0 1/1 Running 0 38m
zeppelin-6cc658d59f-gk2lp 1/1 Running 0 24m
In short, this pod first copies the Spark home folder from ZEPPELIN_K8S_SPARK_CONTAINER_IMAGE into the main container and then executes the interpreter.
The problem arises here: I'm getting this error message on the created pod:
Interpreter launch command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dfile.encoding=UTF-8 -Dlog4j.configuration=file:///zeppelin/conf/log4j.properties -Dzeppelin.log.file='/zeppelin/logs/zeppelin-interpreter-spark-shared_process--spark-ghbvld.log' -Xms1024m -Xmx2048m -XX:MaxPermSize=512m -cp ":/zeppelin/interpreter/spark/dep/*:/zeppelin/interpreter/spark/*::/zeppelin/interpreter/zeppelin-interpreter-shaded-0.9.0-preview1.jar" org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc 36161 "spark-shared_process" 12321:12321
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/zeppelin/interpreter/spark/dep/zeppelin-spark-dependencies-0.9.0-preview1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/zeppelin/interpreter/spark/spark-interpreter-0.9.0-preview1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
WARN [2020-06-05 06:35:05,694] ({main} ZeppelinConfiguration.java[create]:159) - Failed to load configuration, proceeding with a default
INFO [2020-06-05 06:35:05,745] ({main} ZeppelinConfiguration.java[create]:171) - Server Host: 0.0.0.0
Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://172.30.203.33:80"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getInt(ZeppelinConfiguration.java:248)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getInt(ZeppelinConfiguration.java:243)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getServerPort(ZeppelinConfiguration.java:327)
at org.apache.zeppelin.conf.ZeppelinConfiguration.create(ZeppelinConfiguration.java:173)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.<init>(RemoteInterpreterServer.java:144)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.<init>(RemoteInterpreterServer.java:152)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.main(RemoteInterpreterServer.java:321)
As you can see, the main problem is:
Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://172.30.203.33:80"
I've tried to add the zeppelin.server.port property to the interpreter configuration using the Zeppelin web frontend, navigating to Interpreters -> Spark Interpreter -> add property.
However, the problem persists.
Any ideas about how to override zeppelin.server.port, or ZEPPELIN_PORT on generated interpreter pod?
I also dumped the interpreter pod manifest created by Zeppelin:
$ kubectl get pods -o=yaml spark-ghbvld
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"spark-ghbvld","interpreterGroupId":"spark-shared_process","interpreterSettingName":"spark"},"name":"spark-ghbvld","namespace":"ra-iot-dev"},"spec":{"automountServiceAccountToken":true,"containers":[{"command":["sh","-c","$(ZEPPELIN_HOME)/bin/interpreter.sh -d $(ZEPPELIN_HOME)/interpreter/spark -r 12321:12321 -c zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc -p 36161 -i spark-shared_process -l /tmp/local-repo -g spark"],"env":[{"name":"PYSPARK_PYTHON","value":"python"},{"name":"PYSPARK_DRIVER_PYTHON","value":"python"},{"name":"SERVICE_DOMAIN","value":null},{"name":"ZEPPELIN_HOME","value":"/zeppelin"},{"name":"INTERPRETER_GROUP_ID","value":"spark-shared_process"},{"name":"SPARK_HOME","value":null}],"image":"apache/zeppelin:0.9.0","lifecycle":{"preStop":{"exec":{"command":["sh","-c","ps -ef | grep org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer | grep -v grep | awk '{print $2}' | xargs kill"]}}},"name":"spark","volumeMounts":[{"mountPath":"/spark","name":"spark-home"}]}],"initContainers":[{"command":["sh","-c","cp -r /opt/spark/* /spark/"],"image":"docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5","name":"spark-home-init","volumeMounts":[{"mountPath":"/spark","name":"spark-home"}]}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"volumes":[{"emptyDir":{},"name":"spark-home"}]}}
    openshift.io/scc: anyuid
  creationTimestamp: "2020-06-05T06:34:36Z"
  labels:
    app: spark-ghbvld
    interpreterGroupId: spark-shared_process
    interpreterSettingName: spark
  name: spark-ghbvld
  namespace: ra-iot-dev
  resourceVersion: "224863130"
  selfLink: /api/v1/namespaces/ra-iot-dev/pods/spark-ghbvld
  uid: a04a0d70-a6f6-11ea-9e39-0050569f5f65
spec:
  automountServiceAccountToken: true
  containers:
  - command:
    - sh
    - -c
    - $(ZEPPELIN_HOME)/bin/interpreter.sh -d $(ZEPPELIN_HOME)/interpreter/spark -r
      12321:12321 -c zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc -p 36161 -i spark-shared_process
      -l /tmp/local-repo -g spark
    env:
    - name: PYSPARK_PYTHON
      value: python
    - name: PYSPARK_DRIVER_PYTHON
      value: python
    - name: SERVICE_DOMAIN
    - name: ZEPPELIN_HOME
      value: /zeppelin
    - name: INTERPRETER_GROUP_ID
      value: spark-shared_process
    - name: SPARK_HOME
    image: apache/zeppelin:0.9.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - sh
          - -c
          - ps -ef | grep org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer
            | grep -v grep | awk '{print $2}' | xargs kill
    name: spark
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /spark
      name: spark-home
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n4lpw
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-qs7sj
  initContainers:
  - command:
    - sh
    - -c
    - cp -r /opt/spark/* /spark/
    image: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
    imagePullPolicy: IfNotPresent
    name: spark-home-init
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /spark
      name: spark-home
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n4lpw
      readOnly: true
  nodeName: node2.si-origin-cluster.t-systems.es
  nodeSelector:
    region: primary
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    seLinuxOptions:
      level: s0:c30,c0
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: spark-home
  - name: default-token-n4lpw
    secret:
      defaultMode: 420
      secretName: default-token-n4lpw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:35:03Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:35:07Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:34:37Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://8c3977241a20be1600180525e4f8b737c8dc5954b6dc0826a7fc703ff6020a70
    image: docker.io/apache/zeppelin:0.9.0
    imageID: docker-pullable://docker.io/apache/zeppelin@sha256:0691909f6884319d366f5d3a5add8802738d6240a83b2e53e980caeb6c658092
    lastState: {}
    name: spark
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://8c3977241a20be1600180525e4f8b737c8dc5954b6dc0826a7fc703ff6020a70
        exitCode: 0
        finishedAt: "2020-06-05T06:35:05Z"
        reason: Completed
        startedAt: "2020-06-05T06:35:05Z"
  hostIP: 10.49.160.21
  initContainerStatuses:
  - containerID: docker://34701d70eec47367a928dc382326014c76fc49c95be92562e68911f36b4c6242
    image: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
    imageID: docker-pullable://docker-registry.default.svc:5000/ra-iot-dev/spark@sha256:1cbcdacbcc55b2fc97795a4f051429f69ff3666abbd936e08e180af93a11ab65
    lastState: {}
    name: spark-home-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://34701d70eec47367a928dc382326014c76fc49c95be92562e68911f36b4c6242
        exitCode: 0
        finishedAt: "2020-06-05T06:35:02Z"
        reason: Completed
        startedAt: "2020-06-05T06:35:02Z"
  phase: Succeeded
  podIP: 10.131.0.203
  qosClass: BestEffort
  startTime: "2020-06-05T06:34:37Z"
ENVIRONMENT VARIABLES:
PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=spark-xonray2
PYSPARK_PYTHON=python
PYSPARK_DRIVER_PYTHON=python
SERVICE_DOMAIN=
ZEPPELIN_HOME=/zeppelin
INTERPRETER_GROUP_ID=spark-shared_process
SPARK_HOME=
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
MONGODB_PORT_27017_TCP_PORT=27017
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.50.211
ZEPPELIN_PORT_80_TCP_ADDR=172.30.57.29
MONGODB_PORT=tcp://172.30.240.109:27017
MONGODB_PORT_27017_TCP=tcp://172.30.240.109:27017
SPARK_MASTER_SVC_PORT_7077_TCP_PROTO=tcp
SPARK_MASTER_SVC_PORT_7077_TCP_ADDR=172.30.88.254
SPARK_MASTER_SVC_PORT_80_TCP=tcp://172.30.88.254:80
MONGODB_PORT_27017_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT=tcp://172.30.235.145:9094
KAFKA_PORT_9092_TCP=tcp://172.30.164.40:9092
KUBERNETES_PORT_53_UDP_PROTO=udp
ZOOKEEPER_PORT_2888_TCP=tcp://172.30.222.17:2888
ZEPPELIN_PORT_80_TCP=tcp://172.30.57.29:80
ZEPPELIN_PORT_80_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.133.154
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.245.33:1
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.117.125:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ORION_PORT_80_TCP_ADDR=172.30.55.76
SPARK_MASTER_SVC_PORT_7077_TCP_PORT=7077
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.229.165
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT_9094_TCP_ADDR=172.30.235.145
KAFKA_PORT_9092_TCP_PORT=9092
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.245.33
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ZOOKEEPER_SERVICE_HOST=172.30.222.17
ZEPPELIN_SERVICE_PORT=80
KAFKA_0_EXTERNAL_SERVICE_PORT=9094
GREENPLUM_SERVICE_PORT_HTTP=5432
KAFKA_0_EXTERNAL_SERVICE_HOST=172.30.235.145
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=172.30.0.1
ZOOKEEPER_PORT_2181_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP_PORT=3888
ORION_PORT_80_TCP_PORT=80
MONGODB_SERVICE_PORT_MONGODB=27017
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
ZOOKEEPER_PORT_2888_TCP_ADDR=172.30.222.17
SPARK_MASTER_SVC_SERVICE_PORT_HTTP=80
GREENPLUM_SERVICE_PORT=5432
GREENPLUM_PORT_5432_TCP_PORT=5432
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_SERVICE_PORT=1
ZOOKEEPER_PORT_3888_TCP=tcp://172.30.222.17:3888
ZOOKEEPER_PORT_3888_TCP_PROTO=tcp
MONGODB_SERVICE_PORT=27017
KAFKA_SERVICE_PORT_TCP_CLIENT=9092
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.50.211
ZOOKEEPER_SERVICE_PORT_TCP_CLIENT=2181
ZOOKEEPER_SERVICE_PORT_FOLLOWER=2888
KAFKA_SERVICE_PORT=9092
SPARK_MASTER_SVC_PORT_80_TCP_PORT=80
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.50.211:1
ORION_SERVICE_HOST=172.30.55.76
KAFKA_PORT_9092_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP_ADDR=172.30.222.17
ZEPPELIN_SERVICE_PORT_HTTP=80
ORION_PORT_80_TCP=tcp://172.30.55.76:80
GREENPLUM_PORT_5432_TCP_PROTO=tcp
SPARK_MASTER_SVC_PORT=tcp://172.30.88.254:7077
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.178.127
MONGODB_SERVICE_HOST=172.30.240.109
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.245.33
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT=tcp://172.30.178.127:1
ORION_PORT=tcp://172.30.55.76:80
GREENPLUM_PORT_5432_TCP_ADDR=172.30.0.147
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.229.165:1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT=tcp://172.30.50.211:1
ORION_SERVICE_PORT=80
ORION_PORT_80_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT_9094_TCP=tcp://172.30.235.145:9094
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT=tcp://172.30.167.19:1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.229.165
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_SERVICE_PORT_TCP_KAFKA=9094
KAFKA_0_EXTERNAL_PORT_9094_TCP_PROTO=tcp
SPARK_MASTER_SVC_SERVICE_HOST=172.30.88.254
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_53_UDP_PORT=53
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.178.127:1
ZEPPELIN_SERVICE_HOST=172.30.57.29
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_SERVICE_PORT=1
SPARK_MASTER_SVC_PORT_80_TCP_ADDR=172.30.88.254
KUBERNETES_PORT=tcp://172.30.0.1:443
ZOOKEEPER_PORT_2181_TCP_PORT=2181
ZOOKEEPER_PORT_2888_TCP_PROTO=tcp
SPARK_MASTER_SVC_SERVICE_PORT=7077
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT=tcp://172.30.245.33:1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT=tcp://172.30.229.165:1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
ZOOKEEPER_SERVICE_PORT_TCP_ELECTION=3888
ZOOKEEPER_PORT=tcp://172.30.222.17:2181
ZOOKEEPER_PORT_2181_TCP_ADDR=172.30.222.17
SPARK_MASTER_SVC_PORT_7077_TCP=tcp://172.30.88.254:7077
KUBERNETES_SERVICE_PORT_DNS=53
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
ZEPPELIN_PORT_80_TCP_PORT=80
KAFKA_0_EXTERNAL_PORT_9094_TCP_PORT=9094
GREENPLUM_SERVICE_HOST=172.30.0.147
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT=tcp://172.30.117.125:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT=tcp://172.30.133.154:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.133.154
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.167.19
KUBERNETES_PORT_53_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_SERVICE_PORT=1
ORION_SERVICE_PORT_HTTP=80
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.167.19:1
SPARK_MASTER_SVC_SERVICE_PORT_CLUSTER=7077
KAFKA_SERVICE_HOST=172.30.164.40
GREENPLUM_PORT=tcp://172.30.0.147:5432
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.117.125
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.178.127
ZEPPELIN_PORT=tcp://172.30.57.29:80
KAFKA_PORT=tcp://172.30.164.40:9092
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_53_TCP_PORT=53
SPARK_MASTER_SVC_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT=443
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.117.125
MONGODB_PORT_27017_TCP_ADDR=172.30.240.109
GREENPLUM_PORT_5432_TCP=tcp://172.30.0.147:5432
KUBERNETES_PORT_443_TCP_PROTO=tcp
KAFKA_PORT_9092_TCP_ADDR=172.30.164.40
ZOOKEEPER_SERVICE_PORT=2181
ZOOKEEPER_PORT_2181_TCP=tcp://172.30.222.17:2181
ZOOKEEPER_PORT_2888_TCP_PORT=2888
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.133.154:1
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.167.19
Z_VERSION=0.9.0-preview1
LOG_TAG=[ZEPPELIN_0.9.0-preview1]:
Z_HOME=/zeppelin
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
ZEPPELIN_ADDR=0.0.0.0
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
HOME=/
Judging by the values of the ENVIRONMENT VARIABLES, you also have a service with the metadata name 'spark-master'. Change this name, for example:
apiVersion: v1
kind: Service
metadata:
  name: "master-spark-service"
spec:
  ports:
  - name: spark
    port: 7077
    targetPort: 7077
  selector:
    component: "spark-master"
  type: ClusterIP
In this case, Kubernetes will not override the port value.
ZEPPELIN_PORT is set by k8s service discovery, because your pod/service name is zeppelin!
Just change the pod/service name to something else, or disable the discovery environment variables (see https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service); this is just enableServiceLinks: false in your zeppelin pod template definition.
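A minimal sketch of that second option, applied to the Deployment shown above (enableServiceLinks is the only new line; everything else is from the original manifest):

spec:
  template:
    spec:
      enableServiceLinks: false  # stops injection of ZEPPELIN_PORT=tcp://... style variables
      serviceAccountName: zeppelin
      containers:
        - name: zeppelin
          image: "apache/zeppelin:0.9.0"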

Golang gocql cannot connect to Cassandra (using Docker)

I am trying to set up and connect to a single-node Cassandra instance using Docker and Go, and it is not working.
The closest information I could find to addressing connection issues between the Go gocql package and Cassandra is available here: Cassandra cqlsh - connection refused. However, there are many different upvoted answers with no clear indication of which is preferred. It is also a protected question (no "me toos"), so a lot of community members seem to be having trouble with this.
This problem should be slightly different, as it is using Docker and I have tried most (if not all of the solutions linked to above).
version: "3"
services:
  cassandra00:
    restart: always
    image: cassandra:latest
    volumes:
      - ./db/casdata:/var/lib/cassandra
    ports:
      - 7000:7000
      - 7001:7001
      - 7199:7199
      - 9042:9042
      - 9160:9160
    environment:
      - CASSANDRA_RPC_ADDRESS=127.0.0.1
      - CASSANDRA_BROADCAST_ADDRESS=127.0.0.1
      - CASSANDRA_LISTEN_ADDRESS=127.0.0.1
      - CASSANDRA_START_RPC=true
  db:
    restart: always
    build: ./db
    environment:
      POSTGRES_USER: patientplatypus
      POSTGRES_PASSWORD: SUPERSECRETFAKEPASSD00T
      POSTGRES_DB: zennify
    expose:
      - "5432"
    ports:
      - 5432:5432
    volumes:
      - ./db/pgdata:/var/lib/postgresql/data
  app:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 10; done; realize start --run'
    # command: bash -c 'while !</dev/tcp/db/5432; do sleep 10; done; go run main.go'
    ports:
      - 8000:8000
    depends_on:
      - db
      - cassandra00
    links:
      - db
      - cassandra00
    volumes:
      - ./:/go/src/github.com/patientplatypus/webserver/
Admittedly, I am a little shaky on what listening addresses I should pass to Cassandra in the environment section, so I just passed 'home':
- CASSANDRA_RPC_ADDRESS=127.0.0.1
- CASSANDRA_BROADCAST_ADDRESS=127.0.0.1
- CASSANDRA_LISTEN_ADDRESS=127.0.0.1
If you try and pass 0.0.0.0 you get the following error:
cassandra00_1 | Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: listen_address cannot be a wildcard address (0.0.0.0)!
cassandra00_1 | listen_address cannot be a wildcard address (0.0.0.0)!
cassandra00_1 | ERROR [main] 2018-09-10 21:50:44,530 CassandraDaemon.java:708 - Exception encountered during startup: listen_address cannot be a wildcard address (0.0.0.0)!
Overall, however, I think I am getting the correct startup procedure for Cassandra (afaict), because my terminal output shows that Cassandra started up as normal and is listening on the appropriate ports:
cassandra00_1 | INFO [main] 2018-09-10 22:06:28,920 StorageService.java:1446 - JOINING: Finish joining ring
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,179 StorageService.java:2289 - Node /127.0.0.1 state jump to NORMAL
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,607 NativeTransportService.java:70 - Netty using native Epoll event loop
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,750 Server.java:155 - Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,754 Server.java:156 - Starting listening for CQL clients on /127.0.0.1:9042 (unencrypted)...
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,990 ThriftServer.java:116 - Binding thrift service to /127.0.0.1:9160
In my Go code I have the following package being called (simplified to show the relevant section):
package data

import (
    "fmt"

    "github.com/gocql/gocql"
)

func create_userinfo_table() {
    <...>
    fmt.Println("replicating table in cassandra")
    cluster := gocql.NewCluster("localhost") // <--- error here!
    cluster.ProtoVersion = 4
    <...>
}
Which results in the following error in my terminal:
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 gocql: unable to dial control conn 127.0.0.1:
dial tcp 127.0.0.1:9042: connect: connection refused
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 gocql: unable to dial control conn ::1:
dial tcp [::1]:9042: connect: cannot assign requested address
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 Could not connect to cassandra cluster: gocql:
unable to create session: control: unable to connect to initial hosts:
dial tcp [::1]:9042: connect: cannot assign requested address
I have tried several variations on the connection address:
cluster := gocql.NewCluster("localhost")
cluster := gocql.NewCluster("127.0.0.1")
cluster := gocql.NewCluster("127.0.0.1:9042")
cluster := gocql.NewCluster("127.0.0.1:9160")
These seemed like the likely candidates, but no luck.
Does anyone have any idea what I am doing wrong?
Use the service name cassandra00 as the hostname, per the docker-compose documentation (https://docs.docker.com/compose/compose-file/#links):
Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.
Leave the CASSANDRA_LISTEN_ADDRESS envvar unset (or pass auto) per https://docs.docker.com/samples/library/cassandra/
The default value is auto, which will set the listen_address option in cassandra.yaml to the IP address of the container as it starts. This default should work in most use cases.
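Putting both points together, a minimal sketch of the Go side, assuming the app container reaches Cassandra over the compose network and the 127.0.0.1 address variables have been removed from the cassandra00 environment:

package data

import (
    "log"

    "github.com/gocql/gocql"
)

func newSession() (*gocql.Session, error) {
    // Use the compose service name: inside the compose network it resolves
    // to the cassandra00 container, unlike localhost/127.0.0.1.
    cluster := gocql.NewCluster("cassandra00")
    cluster.ProtoVersion = 4
    session, err := cluster.CreateSession()
    if err != nil {
        log.Printf("could not connect to cassandra cluster: %v", err)
        return nil, err
    }
    return session, nil
}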

Mongo transaction exception when using the latest Spring Data Mongo reactive

When I tried the transaction feature with Mongo 4 and the latest Spring Data Mongo Reactive, I got a failure like this:
18:57:22.823 [main] ERROR org.mongodb.driver.client - Callback onResult call produced an error
reactor.core.Exceptions$ErrorCallbackNotImplemented: com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
Caused by: com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
at com.mongodb.async.client.MongoClientImpl$1.onResult(MongoClientImpl.java:90)
at com.mongodb.async.client.MongoClientImpl$1.onResult(MongoClientImpl.java:83)
at com.mongodb.async.client.ClientSessionHelper$2.onResult(ClientSessionHelper.java:80)
at com.mongodb.async.client.ClientSessionHelper$2.onResult(ClientSessionHelper.java:73)
at com.mongodb.internal.connection.BaseCluster$ServerSelectionRequest.onResult(BaseCluster.java:433)
at com.mongodb.internal.connection.BaseCluster.handleServerSelectionRequest(BaseCluster.java:297)
at com.mongodb.internal.connection.BaseCluster.selectServerAsync(BaseCluster.java:157)
at com.mongodb.internal.connection.SingleServerCluster.selectServerAsync(SingleServerCluster.java:41)
at com.mongodb.async.client.ClientSessionHelper.createClientSession(ClientSessionHelper.java:68)
at com.mongodb.async.client.MongoClientImpl.startSession(MongoClientImpl.java:83)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl$1.apply(MongoClientImpl.java:153)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl$1.apply(MongoClientImpl.java:150)
at com.mongodb.async.client.SingleResultCallbackSubscription.requestInitialData(SingleResultCallbackSubscription.java:38)
at com.mongodb.async.client.AbstractSubscription.tryRequestInitialData(AbstractSubscription.java:153)
at com.mongodb.async.client.AbstractSubscription.request(AbstractSubscription.java:84)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher$1$1.request(ObservableToPublisher.java:50)
at reactor.core.publisher.MonoNext$NextSubscriber.request(MonoNext.java:102)
at reactor.core.publisher.MonoProcessor.onSubscribe(MonoProcessor.java:399)
at reactor.core.publisher.MonoNext$NextSubscriber.onSubscribe(MonoNext.java:64)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher$1.onSubscribe(ObservableToPublisher.java:39)
at com.mongodb.async.client.SingleResultCallbackSubscription.<init>(SingleResultCallbackSubscription.java:33)
at com.mongodb.async.client.Observables$2.subscribe(Observables.java:76)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher.subscribe(ObservableToPublisher.java:36)
at reactor.core.publisher.MonoFromPublisher.subscribe(MonoFromPublisher.java:43)
at reactor.core.publisher.Mono.subscribe(Mono.java:3555)
at reactor.core.publisher.MonoProcessor.add(MonoProcessor.java:531)
at reactor.core.publisher.MonoProcessor.subscribe(MonoProcessor.java:444)
at reactor.core.publisher.MonoFlatMapMany.subscribe(MonoFlatMapMany.java:49)
at reactor.core.publisher.Flux.subscribe(Flux.java:7677)
at reactor.core.publisher.Flux.subscribeWith(Flux.java:7841)
at reactor.core.publisher.Flux.subscribe(Flux.java:7670)
at reactor.core.publisher.Flux.subscribe(Flux.java:7634)
at com.example.demo.DataInitializer.init(DataInitializer.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.context.event.ApplicationListenerMethodAdapter.doInvoke(ApplicationListenerMethodAdapter.java:261)
at org.springframework.context.event.ApplicationListenerMethodAdapter.processEvent(ApplicationListenerMethodAdapter.java:180)
at org.springframework.context.event.ApplicationListenerMethodAdapter.onApplicationEvent(ApplicationListenerMethodAdapter.java:142)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:398)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:355)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:884)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:551)
at org.springframework.context.annotation.AnnotationConfigApplicationContext.<init>(AnnotationConfigApplicationContext.java:88)
at com.example.demo.Application.main(Application.java:24)
I used an initialization class to initialize the data:
@Component
@Slf4j
class DataInitializer {

    private final ReactiveMongoOperations mongoTemplate;

    public DataInitializer(ReactiveMongoOperations mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @EventListener(value = ContextRefreshedEvent.class)
    public void init() {
        log.info("start data initialization ...");
        this.mongoTemplate.inTransaction()
            .execute(
                s -> Flux
                    .just("Post one", "Post two")
                    .flatMap(
                        title -> s.insert(Post.builder().title(title).content("content of " + title).build())
                    )
            )
            .subscribe(
                null,
                null,
                () -> log.info("done data initialization...")
            );
    }
}
The subscribe call caused this exception.
The source code is pushed to my GitHub; I just replaced the content of DataInitializer with the new mongoTemplate.inTransaction().
PS: I used the latest Mongo in a Docker container to serve the MongoDB service; at the moment it was 4.0.1. The Docker console shows:
mongodb_1 | 2018-08-20T15:56:04.434+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1 | 2018-08-20T15:56:04.447+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=12635c1c3d2d
mongodb_1 | 2018-08-20T15:56:04.447+0000 I CONTROL [initandlisten] db version v4.0.1
mongodb_1 | 2018-08-20T15:56:04.448+0000 I CONTROL [initandlisten] git version: 54f1582fc6eb01de4d4c42f26fc133e623f065fb
UPDATE: I then tried to start up the Mongo servers as a replica set via a Docker Compose file:
version: "3"
services:
  mongo1:
    hostname: mongo1
    container_name: localmongo1
    image: mongo:4.0-xenial
    ports:
      - "27017:27017"
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo2:
    hostname: mongo2
    container_name: localmongo2
    image: mongo:4.0-xenial
    ports:
      - "27018:27017"
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo3:
    hostname: mongo3
    container_name: localmongo3
    image: mongo:4.0-xenial
    ports:
      - "27019:27017"
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
And changed the Mongo URI string to:
mongodb://localhost:27017,localhost:27018,localhost:27019/blog
Then I got a failure like:
11:08:20.845 [main] INFO org.mongodb.driver.cluster - No server chosen by com.mongodb.async.client.ClientSessionHelper$1@796d3c9f from cluster description ClusterDescription{type=UNKNOWN,
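For what it's worth, the compose file above only starts three mongod processes with --replSet; the replica set still has to be initiated once before the driver can choose a server. A hedged sketch, assuming the mongo shell inside one of the containers and the hostnames from the compose file:

docker exec -it localmongo1 mongo --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo1:27017" },
      { _id: 1, host: "mongo2:27017" },
      { _id: 2, host: "mongo3:27017" }
    ]
  })'

Note that after initiation the driver is redirected to the advertised member hostnames (mongo1, mongo2, mongo3), so those names must also resolve from wherever the application runs.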

cannot create JDBC datasource named transactional_DS while implementing Multi-instance in moqui using docker

As the Multi-Tenant Functionality has been removed in Moqui Framework 2.0.0, I am trying to implement the same with Docker.
I created the image using:
$ ./docker-build.sh
Then I modified moqui-ng-my-compose.yml and ran:
$ ./compose-run.sh moqui-ng-my-compose.yml
Exception occurred:
moqui-server | 08:07:47.864 INFO main .moqui.i.c.TransactionInternalBitronix Initializing DataSource transactional_DS (mysql) with properties: [uri:jdbc:mysql://127.0.0.1:3306/moquitest_20161126?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8, user:root]
moqui-server | 08:07:51.868 ERROR main o.moqui.i.w.MoquiContextListener Error initializing webapp context: bitronix.tm.resource.ResourceConfigurationException: cannot create JDBC datasource named transactional_DS
moqui-server | bitronix.tm.resource.ResourceConfigurationException: cannot create JDBC datasource named transactional_DS
moqui-server | at bitronix.tm.resource.jdbc.PoolingDataSource.init(PoolingDataSource.java:91) ~[btm-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.context.TransactionInternalBitronix.getDataSource(TransactionInternalBitronix.groovy:129) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityDatasourceFactoryImpl.init(EntityDatasourceFactoryImpl.groovy:84) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityFacadeImpl.initAllDatasources(EntityFacadeImpl.groovy:193) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityFacadeImpl.<init>(EntityFacadeImpl.groovy:120) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.context.ExecutionContextFactoryImpl.<init>(ExecutionContextFactoryImpl.groovy:198) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
Here is my moqui-ng-my-compose.yml file:
version: "2"
services:
  nginx-proxy:
    # For documentation on SSL and other settings see:
    # https://github.com/jwilder/nginx-proxy
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - 80:80
      # - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      # - /path/to/certs:/etc/nginx/certs
  moqui-server:
    image: moqui
    container_name: moqui-server
    command: conf=conf/MoquiDevConf.xml
    restart: unless-stopped
    links:
      - mysql-moqui
    volumes:
      - ./runtime/conf:/opt/moqui/runtime/conf
      - ./runtime/lib:/opt/moqui/runtime/lib
      - ./runtime/classes:/opt/moqui/runtime/classes
      - ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
      - ./runtime/log:/opt/moqui/runtime/log
      - ./runtime/txlog:/opt/moqui/runtime/txlog
      # this one isn't needed: - ./runtime/db:/opt/moqui/runtime/db
      - ./runtime/elasticsearch:/opt/moqui/runtime/elasticsearch
    environment:
      - entity_ds_db_conf=mysql
      - entity_ds_host=localhost
      - entity_ds_port=3306
      - entity_ds_database=moquitest_20161126
      - entity_ds_user=root
      - entity_ds_password=123456
      # CHANGE ME - note that VIRTUAL_HOST is for nginx-proxy so it picks up this container as one it should reverse proxy
      - VIRTUAL_HOST=app.visvendra.hyd.company.com
      - webapp_http_host=app.visvendra.hyd.company.com
      - webapp_http_port=80
      # - webapp_https_port=443
      # - webapp_https_enabled=true
  mysql-moqui:
    image: mysql:5.7
    container_name: mysql-moqui
    restart: unless-stopped
    # uncomment this to expose the port for use outside other containers
    # ports:
    #   - 3306:3306
    # edit these as needed to map configuration and data storage
    volumes:
      - ./db/mysql/data:/var/lib/mysql
      # - /my/mysql/conf.d:/etc/mysql/conf.d
    environment:
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_DATABASE=moquitest_20161126
      - MYSQL_USER=root
      - MYSQL_PASSWORD=123456
Please let me know if any other information is required.
Thanks in advance!
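One observation, offered as an assumption rather than a confirmed fix: the failing datasource URI is jdbc:mysql://127.0.0.1:3306/..., but inside the moqui-server container 127.0.0.1 is the container itself, not the linked mysql-moqui container. Pointing entity_ds_host at the linked service name may resolve it:

  moqui-server:
    environment:
      - entity_ds_db_conf=mysql
      # hypothetical: use the linked container's service name instead of localhost
      - entity_ds_host=mysql-moqui
      - entity_ds_port=3306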

bosh deploy error Error 190014

My BOSH version is 1.3232.0.
My platform is vSphere. I searched Google and the BOSH site; it may be related to the cloud-config opt-in, but I have no idea anymore.
I created my own MongoDB release; when I upload the manifest, it throws Error 190014:
Director task 163
Started preparing deployment > Preparing deployment. Failed: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"] (00:00:00)
Error 190014: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"]
My manifest is:
---
name: mongodb3
director_uuid: d3df0341-4aeb-4706-940b-6f4681090af8
releases:
- name: mongodb
  version: latest
compilation:
  workers: 1
  reuse_compilation_vms: false
  network: default
  cloud_properties:
    cpu: 4
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf_z2
    disk: 20480
    ram: 4096
update:
  canaries: 1
  canary_watch_time: 15000-30000
  update_watch_time: 15000-30000
  max_in_flight: 1
networks:
- name: default
  type: manual
  subnets:
  - cloud_properties:
      name: VM Network
    range: 10.62.90.133/25
    gateway: 10.62.90.129
    static:
    - 10.62.90.140
    reserved:
    - 10.62.90.130 - 10.62.90.139
    - 10.62.90.151 - 10.62.90.254
    dns:
    - 10.254.174.10
    - 10.104.128.235
resource_pools:
- cloud_properties:
    cpu: 2
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf
    disk: 10480
    ram: 4096
  name: mongodb3
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: latest
jobs:
- name: mongodb3
  instances: 1
  templates:
  - {name: mongodb3, release: mongodb3}
  persistent_disk: 10_240
  resource_pools: mongodb3
  networks:
  - name: default
Solved: these parts (compilation, networks, resource_pools) should be put into a separate cloud-config file and deployed to BOSH on their own, not included in the deployment manifest.
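A sketch of the resulting workflow, assuming the BOSH v1 CLI that matches director 1.3232.0 (command names may differ between CLI versions):

# cloud-config.yml holds the compilation, networks and resource_pools
# blocks removed from the deployment manifest above
bosh update cloud-config cloud-config.yml

# the deployment manifest keeps name, releases, update and jobs
bosh deployment mongodb3.yml
bosh deploy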
