Mongo transaction exception when using the latest Spring Data Mongo Reactive - spring-data-mongodb

When I tried the transaction feature with MongoDB 4 and the latest Spring Data Mongo Reactive, I got a failure like this:
18:57:22.823 [main] ERROR org.mongodb.driver.client - Callback onResult call produced an error
reactor.core.Exceptions$ErrorCallbackNotImplemented: com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
Caused by: com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
at com.mongodb.async.client.MongoClientImpl$1.onResult(MongoClientImpl.java:90)
at com.mongodb.async.client.MongoClientImpl$1.onResult(MongoClientImpl.java:83)
at com.mongodb.async.client.ClientSessionHelper$2.onResult(ClientSessionHelper.java:80)
at com.mongodb.async.client.ClientSessionHelper$2.onResult(ClientSessionHelper.java:73)
at com.mongodb.internal.connection.BaseCluster$ServerSelectionRequest.onResult(BaseCluster.java:433)
at com.mongodb.internal.connection.BaseCluster.handleServerSelectionRequest(BaseCluster.java:297)
at com.mongodb.internal.connection.BaseCluster.selectServerAsync(BaseCluster.java:157)
at com.mongodb.internal.connection.SingleServerCluster.selectServerAsync(SingleServerCluster.java:41)
at com.mongodb.async.client.ClientSessionHelper.createClientSession(ClientSessionHelper.java:68)
at com.mongodb.async.client.MongoClientImpl.startSession(MongoClientImpl.java:83)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl$1.apply(MongoClientImpl.java:153)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl$1.apply(MongoClientImpl.java:150)
at com.mongodb.async.client.SingleResultCallbackSubscription.requestInitialData(SingleResultCallbackSubscription.java:38)
at com.mongodb.async.client.AbstractSubscription.tryRequestInitialData(AbstractSubscription.java:153)
at com.mongodb.async.client.AbstractSubscription.request(AbstractSubscription.java:84)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher$1$1.request(ObservableToPublisher.java:50)
at reactor.core.publisher.MonoNext$NextSubscriber.request(MonoNext.java:102)
at reactor.core.publisher.MonoProcessor.onSubscribe(MonoProcessor.java:399)
at reactor.core.publisher.MonoNext$NextSubscriber.onSubscribe(MonoNext.java:64)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher$1.onSubscribe(ObservableToPublisher.java:39)
at com.mongodb.async.client.SingleResultCallbackSubscription.<init>(SingleResultCallbackSubscription.java:33)
at com.mongodb.async.client.Observables$2.subscribe(Observables.java:76)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher.subscribe(ObservableToPublisher.java:36)
at reactor.core.publisher.MonoFromPublisher.subscribe(MonoFromPublisher.java:43)
at reactor.core.publisher.Mono.subscribe(Mono.java:3555)
at reactor.core.publisher.MonoProcessor.add(MonoProcessor.java:531)
at reactor.core.publisher.MonoProcessor.subscribe(MonoProcessor.java:444)
at reactor.core.publisher.MonoFlatMapMany.subscribe(MonoFlatMapMany.java:49)
at reactor.core.publisher.Flux.subscribe(Flux.java:7677)
at reactor.core.publisher.Flux.subscribeWith(Flux.java:7841)
at reactor.core.publisher.Flux.subscribe(Flux.java:7670)
at reactor.core.publisher.Flux.subscribe(Flux.java:7634)
at com.example.demo.DataInitializer.init(DataInitializer.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.context.event.ApplicationListenerMethodAdapter.doInvoke(ApplicationListenerMethodAdapter.java:261)
at org.springframework.context.event.ApplicationListenerMethodAdapter.processEvent(ApplicationListenerMethodAdapter.java:180)
at org.springframework.context.event.ApplicationListenerMethodAdapter.onApplicationEvent(ApplicationListenerMethodAdapter.java:142)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:398)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:355)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:884)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:551)
at org.springframework.context.annotation.AnnotationConfigApplicationContext.<init>(AnnotationConfigApplicationContext.java:88)
at com.example.demo.Application.main(Application.java:24)
I used an initializer class to insert some sample data:
@Component
@Slf4j
class DataInitializer {

    private final ReactiveMongoOperations mongoTemplate;

    public DataInitializer(ReactiveMongoOperations mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @EventListener(value = ContextRefreshedEvent.class)
    public void init() {
        log.info("start data initialization ...");
        this.mongoTemplate.inTransaction()
            .execute(
                s ->
                    Flux
                        .just("Post one", "Post two")
                        .flatMap(
                            title -> s.insert(Post.builder().title(title).content("content of " + title).build())
                        )
            )
            .subscribe(
                null,
                null,
                () -> log.info("done data initialization...")
            );
    }
}
The subscribe call caused this exception.
The source code is pushed to my GitHub.
I just replaced the content of DataInitializer with the new mongoTemplate.inTransaction() call.
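Side note: the reactor.core.Exceptions$ErrorCallbackNotImplemented wrapper in the log simply means subscribe() was given no error consumer, so Reactor rethrows the dropped error. A hedged variant of the same subscription from the class above that logs the failure instead of dropping it (names reused from the snippet):

this.mongoTemplate.inTransaction()
    .execute(
        s -> Flux
            .just("Post one", "Post two")
            .flatMap(
                title -> s.insert(Post.builder().title(title).content("content of " + title).build())
            )
    )
    .subscribe(
        post -> log.info("saved: {}", post),              // next consumer
        e -> log.error("data initialization failed", e),  // error consumer instead of null
        () -> log.info("done data initialization...")     // completion callback
    );

This does not fix the underlying MongoClientException, but it makes the actual cause easier to see in the logs.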
PS: I used the latest MongoDB, running in a Docker container, to serve the database; at the time it was 4.0.1. The Docker console shows:
mongodb_1 | 2018-08-20T15:56:04.434+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1 | 2018-08-20T15:56:04.447+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=12635c1c3d2d
mongodb_1 | 2018-08-20T15:56:04.447+0000 I CONTROL [initandlisten] db version v4.0.1
mongodb_1 | 2018-08-20T15:56:04.448+0000 I CONTROL [initandlisten] git version: 54f1582fc6eb01de4d4c42f26fc133e623f065fb
UPDATE: I then tried to start the MongoDB servers as a replica set via a Docker Compose file:
version: "3"
services:
mongo1:
hostname: mongo1
container_name: localmongo1
image: mongo:4.0-xenial
ports:
- "27017:27017"
restart: always
entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
mongo2:
hostname: mongo2
container_name: localmongo2
image: mongo:4.0-xenial
ports:
- "27018:27017"
restart: always
entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
mongo3:
hostname: mongo3
container_name: localmongo3
image: mongo:4.0-xenial
ports:
- "27019:27017"
restart: always
entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
and changed the MongoDB URI string to:
mongodb://localhost:27017,localhost:27018,localhost:27019/blog
I then got a failure like this:
11:08:20.845 [main] INFO org.mongodb.driver.cluster - No server chosen by com.mongodb.async.client.ClientSessionHelper$1#796d3c9f from cluster description ClusterDescription{type=UNKNOWN,
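Note: starting three mongod processes with --replSet is not enough on its own; the replica set also has to be initiated once before drivers can create sessions and select a server. A minimal sketch against the containers from the Compose file above (the member host names come from that file, but they must also be resolvable from wherever the application runs):

# run once against any one member; container and host names are from the Compose file above
docker exec -it localmongo1 mongo --eval 'rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
})'

After initiation succeeds, the driver should report a primary instead of ClusterDescription{type=UNKNOWN, ...}.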

Related

(MongoSocketOpenException): Exception opening socket | Unknown host: wisdb1

I tried to connect to a MongoDB replica set (3 nodes) with Docker.
This is my docker-compose file; I renamed all services from e.g. "mongo1" to "mongoa1" because I have a second app with the same config file:
version: "3.8"
services:
mongoa1:
image: mongo:4
container_name: mongoa1
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30001"]
volumes:
- ./data/mongoa-1:/data/db
ports:
- 30001:30001
healthcheck:
test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongoa1:30001\"},{_id:1,host:\"mongoa2:30002\"},{_id:2,host:\"mongoa3:30003\"}]}).ok || rs.status().ok" | mongo --port 30001 --quiet) -eq 1
interval: 10s
start_period: 30s
mongoa2:
image: mongo:4
container_name: mongoa2
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30002"]
volumes:
- ./data/mongoa-2:/data/db
ports:
- 30002:30002
mongoa3:
image: mongo:4
container_name: mongoa3
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30003"]
volumes:
- ./data/mongoa-3:/data/db
ports:
- 30003:30003
The containers are running:
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fb1fcab13804 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp, 0.0.0.0:30003->30003/tcp mongoa3
72f8cfe217a5 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes (healthy) 27017/tcp, 0.0.0.0:30001->30001/tcp mongoa1
2a61246f5d17 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp, 0.0.0.0:30002->30002/tcp mongoa2
I want to open Studio 3T, but I get the following error:
Db path: mongodb://mongoa1:30001,mongoa2:30002,mongoa3:30003/app?replicaSet=my-replica-set
Connection failed.
SERVER [mongoa1:30001] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa1
SERVER [mongoa2:30002] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa2
SERVER [mongoa3:30003] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa3
Details:
Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1#4d84ad57. Client view of cluster state is {type=REPLICA_SET, servers=[{address=mongoa1:30001, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa1}}, {address=mongoa2:30002, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa2}}, {address=mongoa3:30003, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa3}}]
I don't understand what's wrong. When I rename "mongoa1" back to "mongo1" it works, but then I always have to delete the other Docker app, which I don't want. What's wrong in my config?
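Note (an assumption about the setup, not something stated in the question): mongoa1, mongoa2 and mongoa3 are container names that only resolve inside the Compose network, while Studio 3T runs on the host. Since every member publishes its own distinct port, one common workaround is to map those names to the loopback address in the host's /etc/hosts:

# /etc/hosts on the machine running Studio 3T
127.0.0.1 mongoa1
127.0.0.1 mongoa2
127.0.0.1 mongoa3

That would also explain why the old mongo1/mongo2/mongo3 names still worked, if they were already present in that file.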

How to fix 'pod has unbound immediate PersistentVolumeClaims' in SQL 2019 Big Data?

I want to set up simple persistent storage on Kubernetes for SQL Server 2019 Big Data on premises, but it keeps throwing the event 'pod has unbound immediate PersistentVolumeClaims'.
When I deploy the image, the mssql-controller pod shows this event:
Name: mssql-controller-6vd8b
Namespace: sql2019
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: MSSQL_CLUSTER=sql2019
app=mssql-controller
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/mssql-controller
Containers:
mssql-controller:
Image: private-repo.microsoft.com/mssql-private-preview/mssql-controller:latest
Port: 8081/TCP
Host Port: 0/TCP
Environment:
ACCEPT_EULA: Y
CONTROLLER_ENABLE_TDS_PROXY: false
KUBERNETES_NAMESPACE: sql2019 (v1:metadata.namespace)
Mounts:
/root/secrets/controller-db from controller-db-secret (ro)
/root/secrets/controller-login from controller-login-secret (ro)
/root/secrets/knox from controller-knox-secret (ro)
/root/secrets/node-admin-login from node-admin-login-secret (ro)
/var/opt from controller-storage (rw)
/var/opt/controller/config from controller-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sa-mssql-controller-token-4fsbc (ro)
mssql-portal:
Image: private-repo.microsoft.com/mssql-private-preview/mssql-portal:latest
Port: 6001/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/var/opt from controller-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sa-mssql-controller-token-4fsbc (ro)
mssql-server-controller:
Image: private-repo.microsoft.com/mssql-private-preview/mssql-server-controller:latest
Port: 1433/TCP
Host Port: 0/TCP
Environment:
ACCEPT_EULA: Y
SA_PASSWORD: <password removed>
Mounts:
/var/opt from controller-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sa-mssql-controller-token-4fsbc (ro)
mssql-monitor-fluentbit:
Image: private-repo.microsoft.com/mssql-private-preview/mssql-monitor-fluentbit:latest
Port: 2020/TCP
Host Port: 0/TCP
Limits:
memory: 100Mi
Requests:
cpu: 100m
memory: 100Mi
Environment:
FLUENT_ELASTICSEARCH_HOST: service-monitor-elasticsearch
FLUENT_ELASTICSEARCH_PORT: 9200
FLUENTBIT_CONFIG: fluentbit-controller.conf
KUBERNETES_NAMESPACE: sql2019 (v1:metadata.namespace)
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
KUBERNETES_POD_NAME: mssql-controller-6vd8b (v1:metadata.name)
Mounts:
/var/opt from controller-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sa-mssql-controller-token-4fsbc (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
controller-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mssql-controller-pvc
ReadOnly: false
controller-login-secret:
Type: Secret (a volume populated by a Secret)
SecretName: controller-login-secret
Optional: false
controller-db-secret:
Type: Secret (a volume populated by a Secret)
SecretName: controller-db-secret
Optional: false
controller-knox-secret:
Type: Secret (a volume populated by a Secret)
SecretName: controller-knox-secret
Optional: false
node-admin-login-secret:
Type: Secret (a volume populated by a Secret)
SecretName: node-admin-login-secret
Optional: false
controller-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mssql-controller-config
Optional: false
sa-mssql-controller-token-4fsbc:
Type: Secret (a volume populated by a Secret)
SecretName: sa-mssql-controller-token-4fsbc
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 1s (x6 over 4s) default-scheduler pod has unbound immediate PersistentVolumeClaims
Cluster config:
export USE_PERSISTENT_VOLUME=true
export STORAGE_CLASS_NAME=slow
StorageClass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
I'm quite new to Kubernetes, but these are the things I know:
I need Dynamic Volume Provisioning.
I need to create a StorageClass (SQL Server 2019 creates the PersistentVolume and PersistentVolumeClaim itself).
In my case it occurred while using minikube, but I believe it can happen on any cloud provider as well.
The reason is that one or more PVCs request more disk space in their storage specification than is actually available.
Check your claim's storage request:
...
resources:
  requests:
    storage: <value must be less than or equal to the available space>
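For reference, this is roughly where that request lives in a full claim; the size shown here is only illustrative, and in this setup the claim is created by the SQL Server 2019 deployment rather than by hand:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-controller-pvc   # the ClaimName shown in the pod description above
  namespace: sql2019
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # must not exceed what the storage class / volumes can actually provide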
I hope this helps someone.

Golang gocql cannot connect to Cassandra (using Docker)

I am trying to set up and connect to a single-node Cassandra instance using Docker and Golang, and it is not working.
The closest information I could find on connection issues between the golang gocql package and Cassandra is here: Cassandra cqlsh - connection refused. However, there are many different upvoted answers with no clear indication of which is preferred, and it is a protected question (no "me toos"), so a lot of community members seem to be having trouble with this.
This problem should be slightly different, as it involves Docker, and I have tried most (if not all) of the solutions linked above.
version: "3"
services:
cassandra00:
restart: always
image: cassandra:latest
volumes:
- ./db/casdata:/var/lib/cassandra
ports:
- 7000:7000
- 7001:7001
- 7199:7199
- 9042:9042
- 9160:9160
environment:
- CASSANDRA_RPC_ADDRESS=127.0.0.1
- CASSANDRA_BROADCAST_ADDRESS=127.0.0.1
- CASSANDRA_LISTEN_ADDRESS=127.0.0.1
- CASSANDRA_START_RPC=true
db:
restart: always
build: ./db
environment:
POSTGRES_USER: patientplatypus
POSTGRES_PASSWORD: SUPERSECRETFAKEPASSD00T
POSTGRES_DB: zennify
expose:
- "5432"
ports:
- 5432:5432
volumes:
- ./db/pgdata:/var/lib/postgresql/data
app:
restart: always
build:
context: .
dockerfile: Dockerfile
command: bash -c 'while !</dev/tcp/db/5432; do sleep 10; done; realize start --run'
# command: bash -c 'while !</dev/tcp/db/5432; do sleep 10; done; go run main.go'
ports:
- 8000:8000
depends_on:
- db
- cassandra00
links:
- db
- cassandra00
volumes:
- ./:/go/src/github.com/patientplatypus/webserver/
Admittedly, I am a little shaky on what listening addresses I should pass to Cassandra in the environment section, so I just pointed everything at the loopback address:
- CASSANDRA_RPC_ADDRESS=127.0.0.1
- CASSANDRA_BROADCAST_ADDRESS=127.0.0.1
- CASSANDRA_LISTEN_ADDRESS=127.0.0.1
If you try to pass 0.0.0.0 instead, you get the following error:
cassandra00_1 | Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: listen_address cannot be a wildcard address (0.0.0.0)!
cassandra00_1 | listen_address cannot be a wildcard address (0.0.0.0)!
cassandra00_1 | ERROR [main] 2018-09-10 21:50:44,530 CassandraDaemon.java:708 - Exception encountered during startup: listen_address cannot be a wildcard address (0.0.0.0)!
Overall, however, I think the Cassandra startup procedure is correct (AFAICT), because my terminal shows Cassandra starting up normally and listening on the appropriate ports:
cassandra00_1 | INFO [main] 2018-09-10 22:06:28,920 StorageService.java:1446 - JOINING: Finish joining ring
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,179 StorageService.java:2289 - Node /127.0.0.1 state jump to NORMAL
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,607 NativeTransportService.java:70 - Netty using native Epoll event loop
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,750 Server.java:155 - Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,754 Server.java:156 - Starting listening for CQL clients on /127.0.0.1:9042 (unencrypted)...
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,990 ThriftServer.java:116 - Binding thrift service to /127.0.0.1:9160
In my Golang code I have the following package being called (simplified to show the relevant section):
package data

import (
    "fmt"

    "github.com/gocql/gocql"
)

func create_userinfo_table() {
    <...>
    fmt.Println("replicating table in cassandra")
    cluster := gocql.NewCluster("localhost") // <--- error here!
    cluster.ProtoVersion = 4
    <...>
}
Which results in the following error in my terminal:
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 gocql: unable to dial control conn 127.0.0.1:
dial tcp 127.0.0.1:9042: connect: connection refused
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 gocql: unable to dial control conn ::1:
dial tcp [::1]:9042: connect: cannot assign requested address
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 Could not connect to cassandra cluster: gocql:
unable to create session: control: unable to connect to initial hosts:
dial tcp [::1]:9042: connect: cannot assign requested address
I have tried several variations of the connection address:
cluster := gocql.NewCluster("localhost")
cluster := gocql.NewCluster("127.0.0.1")
cluster := gocql.NewCluster("127.0.0.1:9042")
cluster := gocql.NewCluster("127.0.0.1:9160")
These seemed like the most likely candidates, but no luck.
Does anyone have any idea what I am doing wrong?
Use the service name cassandra00 as the hostname, per the docker-compose documentation (https://docs.docker.com/compose/compose-file/#links):
Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.
Leave the CASSANDRA_LISTEN_ADDRESS environment variable unset (or pass auto), per https://docs.docker.com/samples/library/cassandra/:
The default value is auto, which will set the listen_address option in cassandra.yaml to the IP address of the container as it starts. This default should work in most use cases.
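Applied to the snippet from the question, that change would look roughly like this (a sketch with a hypothetical helper name; only the host passed to gocql.NewCluster actually changes):

package data

import (
    "log"

    "github.com/gocql/gocql"
)

// newSession connects to the Cassandra container by its Compose service name.
// Inside the Compose network "cassandra00" resolves to that container,
// whereas localhost/127.0.0.1 points back at the app container itself.
func newSession() (*gocql.Session, error) {
    cluster := gocql.NewCluster("cassandra00")
    cluster.ProtoVersion = 4
    session, err := cluster.CreateSession()
    if err != nil {
        log.Printf("could not connect to cassandra: %v", err)
        return nil, err
    }
    return session, nil
}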

cannot create JDBC datasource named transactional_DS while implementing Multi-instance in moqui using docker

As the multi-tenant functionality has been removed in Moqui Framework 2.0.0, I am trying to implement the same thing with Docker (multi-instance).
I just created the image using:
$ ./docker-build.sh
then modified moqui-ng-my-compose.yml and ran:
./compose-run.sh moqui-ng-my-compose.yml
The following exception occurred:
moqui-server | 08:07:47.864 INFO main .moqui.i.c.TransactionInternalBitronix Initializing DataSource transactional_DS (mysql) with properties: [uri:jdbc:mysql://127.0.0.1:3306/moquitest_20161126?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8, user:root]
moqui-server | 08:07:51.868 ERROR main o.moqui.i.w.MoquiContextListener Error initializing webapp context: bitronix.tm.resource.ResourceConfigurationException: cannot create JDBC datasource named transactional_DS
moqui-server | bitronix.tm.resource.ResourceConfigurationException: cannot create JDBC datasource named transactional_DS
moqui-server | at bitronix.tm.resource.jdbc.PoolingDataSource.init(PoolingDataSource.java:91) ~[btm-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.context.TransactionInternalBitronix.getDataSource(TransactionInternalBitronix.groovy:129) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityDatasourceFactoryImpl.init(EntityDatasourceFactoryImpl.groovy:84) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityFacadeImpl.initAllDatasources(EntityFacadeImpl.groovy:193) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityFacadeImpl.<init>(EntityFacadeImpl.groovy:120) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.context.ExecutionContextFactoryImpl.<init>(ExecutionContextFactoryImpl.groovy:198) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
Here is my moqui-ng-my-compose.yml file:
version: "2"
services:
nginx-proxy:
# For documentation on SSL and other settings see:
# https://github.com/jwilder/nginx-proxy
image: jwilder/nginx-proxy
container_name: nginx-proxy
restart: unless-stopped
ports:
- 80:80
# - 443:443
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
# - /path/to/certs:/etc/nginx/certs
moqui-server:
image: moqui
container_name: moqui-server
command: conf=conf/MoquiDevConf.xml
restart: unless-stopped
links:
- mysql-moqui
volumes:
- ./runtime/conf:/opt/moqui/runtime/conf
- ./runtime/lib:/opt/moqui/runtime/lib
- ./runtime/classes:/opt/moqui/runtime/classes
- ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
- ./runtime/log:/opt/moqui/runtime/log
- ./runtime/txlog:/opt/moqui/runtime/txlog
# this one isn't needed: - ./runtime/db:/opt/moqui/runtime/db
- ./runtime/elasticsearch:/opt/moqui/runtime/elasticsearch
environment:
- entity_ds_db_conf=mysql
- entity_ds_host=localhost
- entity_ds_port=3306
- entity_ds_database=moquitest_20161126
- entity_ds_user=root
- entity_ds_password=123456
# CHANGE ME - note that VIRTUAL_HOST is for nginx-proxy so it picks up this container as one it should reverse proxy
- VIRTUAL_HOST=app.visvendra.hyd.company.com
- webapp_http_host=app.visvendra.hyd.company.com
- webapp_http_port=80
# - webapp_https_port=443
# - webapp_https_enabled=true
mysql-moqui:
image: mysql:5.7
container_name: mysql-moqui
restart: unless-stopped
# uncomment this to expose the port for use outside other containers
# ports:
# - 3306:3306
# edit these as needed to map configuration and data storage
volumes:
- ./db/mysql/data:/var/lib/mysql
# - /my/mysql/conf.d:/etc/mysql/conf.d
environment:
- MYSQL_ROOT_PASSWORD=123456
- MYSQL_DATABASE=moquitest_20161126
- MYSQL_USER=root
- MYSQL_PASSWORD=123456
Please let me know if any other information is required.
Thanks in advance!!
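One thing that may be worth checking (an assumption, not something stated in the question): inside the moqui-server container, 127.0.0.1/localhost refers to that container itself, while the linked MySQL container is reachable by its service name. If so, the existing entity_ds_host variable would need to point at mysql-moqui instead of localhost, e.g.:

    environment:
      - entity_ds_db_conf=mysql
      - entity_ds_host=mysql-moqui   # service/container name instead of localhost
      - entity_ds_port=3306
      - entity_ds_database=moquitest_20161126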

bosh deploy error Error 190014

My BOSH version is 1.3232.0.
My platform is vSphere. I searched Google and the BOSH site; it may be related to the cloud-config opt-in, but I have no idea anymore.
I created my own MongoDB release, and when I upload the manifest, it throws Error 190014:
Director task 163
Started preparing deployment > Preparing deployment. Failed: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"] (00:00:00)
Error 190014: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"]
My manifest is:
---
name: mongodb3
director_uuid: d3df0341-4aeb-4706-940b-6f4681090af8

releases:
- name: mongodb
  version: latest

compilation:
  workers: 1
  reuse_compilation_vms: false
  network: default
  cloud_properties:
    cpu: 4
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf_z2
    disk: 20480
    ram: 4096

update:
  canaries: 1
  canary_watch_time: 15000-30000
  update_watch_time: 15000-30000
  max_in_flight: 1

networks:
- name: default
  type: manual
  subnets:
  - cloud_properties:
      name: VM Network
    range: 10.62.90.133/25
    gateway: 10.62.90.129
    static:
    - 10.62.90.140
    reserved:
    - 10.62.90.130 - 10.62.90.139
    - 10.62.90.151 - 10.62.90.254
    dns:
    - 10.254.174.10
    - 10.104.128.235

resource_pools:
- cloud_properties:
    cpu: 2
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf
    disk: 10480
    ram: 4096
  name: mongodb3
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: latest

jobs:
- name: mongodb3
  instances: 1
  templates:
  - {name: mongodb3, release: mongodb3}
  persistent_disk: 10_240
  resource_pools: mongodb3
  networks:
  - name: default
Solved: those sections (compilation, networks, resource_pools) should be put into a single separate file and deployed to the BOSH director as the cloud config.
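In other words, a hedged sketch of the workflow with the Ruby CLI that ships with 1.3232.0 (file names are illustrative): move compilation, networks and resource_pools into their own cloud-config file, upload it to the director, and keep only the deployment-level sections in the manifest.

# cloud-config.yml holds only the compilation, networks and resource_pools sections from the manifest above
bosh update cloud-config cloud-config.yml

# mongodb3.yml keeps the deployment-level sections (name, releases, update, jobs, ...)
bosh deployment mongodb3.yml
bosh deploy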
