Golang gocql cannot connect to Cassandra (using Docker)

I am trying to set up and connect to a single-node Cassandra instance using Docker and Golang, and it is not working.
The closest information I could find to addressing connection issues between the golang gocql package and Cassandra is here: Cassandra cqlsh - connection refused. However, there are many different upvoted answers with no clear indication of which is preferred. It is also a protected question (no "me toos"), so a lot of community members seem to be having trouble with this.
This problem should be slightly different, as it uses Docker, and I have tried most (if not all) of the solutions linked above.
Here is my docker-compose.yml:
version: "3"
services:
cassandra00:
restart: always
image: cassandra:latest
volumes:
- ./db/casdata:/var/lib/cassandra
ports:
- 7000:7000
- 7001:7001
- 7199:7199
- 9042:9042
- 9160:9160
environment:
- CASSANDRA_RPC_ADDRESS=127.0.0.1
- CASSANDRA_BROADCAST_ADDRESS=127.0.0.1
- CASSANDRA_LISTEN_ADDRESS=127.0.0.1
- CASSANDRA_START_RPC=true
db:
restart: always
build: ./db
environment:
POSTGRES_USER: patientplatypus
POSTGRES_PASSWORD: SUPERSECRETFAKEPASSD00T
POSTGRES_DB: zennify
expose:
- "5432"
ports:
- 5432:5432
volumes:
- ./db/pgdata:/var/lib/postgresql/data
app:
restart: always
build:
context: .
dockerfile: Dockerfile
command: bash -c 'while !</dev/tcp/db/5432; do sleep 10; done; realize start --run'
# command: bash -c 'while !</dev/tcp/db/5432; do sleep 10; done; go run main.go'
ports:
- 8000:8000
depends_on:
- db
- cassandra00
links:
- db
- cassandra00
volumes:
- ./:/go/src/github.com/patientplatypus/webserver/
Admittedly, I am a little shaky on what listening addresses I should pass to Cassandra in the environment section, so I just passed 'home':
- CASSANDRA_RPC_ADDRESS=127.0.0.1
- CASSANDRA_BROADCAST_ADDRESS=127.0.0.1
- CASSANDRA_LISTEN_ADDRESS=127.0.0.1
If you try and pass 0.0.0.0 you get the following error:
cassandra00_1 | Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: listen_address cannot be a wildcard address (0.0.0.0)!
cassandra00_1 | listen_address cannot be a wildcard address (0.0.0.0)!
cassandra00_1 | ERROR [main] 2018-09-10 21:50:44,530 CassandraDaemon.java:708 - Exception encountered during startup: listen_address cannot be a wildcard address (0.0.0.0)!
Overall, however, I think I am getting the correct startup procedure for Cassandra (afaict), because my terminal output shows Cassandra starting up normally and listening on the appropriate ports:
cassandra00_1 | INFO [main] 2018-09-10 22:06:28,920 StorageService.java:1446 - JOINING: Finish joining ring
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,179 StorageService.java:2289 - Node /127.0.0.1 state jump to NORMAL
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,607 NativeTransportService.java:70 - Netty using native Epoll event loop
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,750 Server.java:155 - Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,754 Server.java:156 - Starting listening for CQL clients on /127.0.0.1:9042 (unencrypted)...
cassandra00_1 | INFO [main] 2018-09-10 22:06:29,990 ThriftServer.java:116 - Binding thrift service to /127.0.0.1:9160
In my Golang code I have the following package being called (simplified to show the relevant section):
package data

import (
    "fmt"

    "github.com/gocql/gocql"
)

func create_userinfo_table() {
    <...>
    fmt.Println("replicating table in cassandra")
    cluster := gocql.NewCluster("localhost") //<---error here!
    cluster.ProtoVersion = 4
    <...>
}
Which results in the following error in my terminal:
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 gocql: unable to dial control conn 127.0.0.1:
dial tcp 127.0.0.1:9042: connect: connection refused
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 gocql: unable to dial control conn ::1:
dial tcp [::1]:9042: connect: cannot assign requested address
app_1 | [21:52:38][WEBSERVER] : 2018/09/10
21:52:38 Could not connect to cassandra cluster: gocql:
unable to create session: control: unable to connect to initial hosts:
dial tcp [::1]:9042: connect: cannot assign requested address
I have tried several variations on the connection address:
cluster := gocql.NewCluster("localhost")
cluster := gocql.NewCluster("127.0.0.1")
cluster := gocql.NewCluster("127.0.0.1:9042")
cluster := gocql.NewCluster("127.0.0.1:9160")
These seemed like the most likely candidates, but no luck.
Does anyone have any idea what I am doing wrong?

Use the service name cassandra00 as the hostname, per the docker-compose documentation: https://docs.docker.com/compose/compose-file/#links
Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.
Leave the CASSANDRA_LISTEN_ADDRESS environment variable unset (or set it to auto), per https://docs.docker.com/samples/library/cassandra/:
The default value is auto, which will set the listen_address option in cassandra.yaml to the IP address of the container as it starts. This default should work in most use cases.
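Putting that together, a minimal sketch of the Go side (assuming the compose file above, so the cassandra00 service name resolves inside the app container; the function name is illustrative):

package data

import (
    "log"

    "github.com/gocql/gocql"
)

// connectCassandra dials the cassandra00 service by its docker-compose service
// name instead of localhost, because inside the app container 127.0.0.1 refers
// to the app container itself, not to the Cassandra container.
func connectCassandra() (*gocql.Session, error) {
    cluster := gocql.NewCluster("cassandra00") // service name from docker-compose.yml
    cluster.Port = 9042                        // native CQL port published by the cassandra image
    cluster.ProtoVersion = 4

    session, err := cluster.CreateSession()
    if err != nil {
        return nil, err
    }
    log.Println("connected to cassandra00")
    return session, nil
}

Since Cassandra can take a while to start accepting CQL connections, a retry loop around CreateSession (similar to the while !</dev/tcp wait the compose command already uses for Postgres) is usually still needed.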

Related

(MongoSocketOpenException): Exception opening socket | Unknown host: wisdb1

I tried to connect to a MongoDB replica set (3 nodes) with Docker.
This is my docker-compose file; I renamed all services from e.g. "mongo1" to "mongoa1" because I have a second app with the same config file:
version: "3.8"
services:
mongoa1:
image: mongo:4
container_name: mongoa1
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30001"]
volumes:
- ./data/mongoa-1:/data/db
ports:
- 30001:30001
healthcheck:
test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongoa1:30001\"},{_id:1,host:\"mongoa2:30002\"},{_id:2,host:\"mongoa3:30003\"}]}).ok || rs.status().ok" | mongo --port 30001 --quiet) -eq 1
interval: 10s
start_period: 30s
mongoa2:
image: mongo:4
container_name: mongoa2
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30002"]
volumes:
- ./data/mongoa-2:/data/db
ports:
- 30002:30002
mongoa3:
image: mongo:4
container_name: mongoa3
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30003"]
volumes:
- ./data/mongoa-3:/data/db
ports:
- 30003:30003
The containers are running.
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fb1fcab13804 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp, 0.0.0.0:30003->30003/tcp mongoa3
72f8cfe217a5 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes (healthy) 27017/tcp, 0.0.0.0:30001->30001/tcp mongoa1
2a61246f5d17 mongo:4 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 27017/tcp, 0.0.0.0:30002->30002/tcp mongoa2
I want to open Studio 3T but I get the following error:
Db path: mongodb://mongoa1:30001,mongoa2:30002,mongoa3:30003/app?replicaSet=my-replica-set
Connection failed.
SERVER [mongoa1:30001] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa1
SERVER [mongoa2:30002] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa2
SERVER [mongoa3:30003] (Type: UNKNOWN)
|_/ Connection error (MongoSocketOpenException): Exception opening socket
|____/ Unknown host: mongoa3
Details:
Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1#4d84ad57. Client view of cluster state is {type=REPLICA_SET, servers=[{address=mongoa1:30001, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa1}}, {address=mongoa2:30002, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa2}}, {address=mongoa3:30003, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.UnknownHostException: mongoa3}}]
I don't understand what's wrong. When I rename "mongoa1" back to "mongo1" it works, but then I always have to delete the other Docker app, and I don't want to. What's wrong in my config?

In Python, how do I construct a URL to test if my SQL Server instance is running?

I'm using Python 3.8 with the pytest-docker-compose plugin -- https://pypi.org/project/pytest-docker-compose/ . Does anyone know how to write a URL that would eventually tell me if my SQL Server is running?
I have this docker-compose.yml file:
version: "3.2"
services:
  sql-server-db:
    build: ./
    container_name: sql-server-db
    image: microsoft/mssql-server-linux:2017-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "password"
      ACCEPT_EULA: "Y"
but I don't know what URL to pass to my Retry object to test that the server is running. This fails ...
import pytest
import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter
...

@pytest.fixture(scope="function")
def wait_for_api(function_scoped_container_getter):
    """Wait for sql server to become responsive"""
    request_session = requests.Session()
    retries = Retry(total=5,
                    backoff_factor=0.1,
                    status_forcelist=[500, 502, 503, 504])
    request_session.mount('http://', HTTPAdapter(max_retries=retries))

    service = function_scoped_container_getter.get("sql-server-db").network_info[0]
    api_url = "http://%s:%s/" % (service.hostname, service.host_port)
    assert request_session.get(api_url)
    return request_session, api_url
with this exception
raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=1433): Max retries exceeded with url: / (Caused by ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))
You could use something like this, and it would output whether it was connected. Note that this snippet uses the mysql.connector client rather than a SQL Server driver, and the connection parameters below are illustrative:

import mysql.connector

# hypothetical connection details; adjust to your own server
connection = mysql.connector.connect(host="localhost", user="user", password="password")

if connection.is_connected():
    db_Info = connection.get_server_info()
    print("Connected to MySQL Server version ", db_Info)
    cursor = connection.cursor()
    cursor.execute("select database();")
    record = cursor.fetchone()
    print("You're connected to database: ", record)
Here is a sample function that will retry connecting to the DB and won't return until it has successfully connected or the defined maxAttempts is reached:

import time
import traceback

import pyodbc


def waitDb(server, database, username, password, maxAttempts, waitBetweenAttemptsSeconds):
    """
    Returns True if the connection is successfully established before the maxAttempts number is reached.
    Conversely returns False.
    pyodbc.connect has a built-in timeout. Use a waitBetweenAttemptsSeconds greater than zero to add a delay on top of this timeout.
    """
    for attemptNumber in range(maxAttempts):
        cnxn = None
        try:
            cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
            cursor = cnxn.cursor()
        except Exception as e:
            print(traceback.format_exc())
        finally:
            if cnxn:
                print("The DB is up and running: ")
                return True
            else:
                print("DB not running yet on attempt number " + str(attemptNumber))
                time.sleep(waitBetweenAttemptsSeconds)

    print("Max attempts waiting for DB to come online exceeded")
    return False
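With the compose file from the question, a call along the lines of waitDb('localhost,1433', 'master', 'sa', 'password', 10, 3) should block until the container accepts connections; the server string, database, attempt count and delay here are illustrative, and ODBC Driver 17 for SQL Server has to be installed on the machine running the tests.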
I wrote a minimal example here: https://github.com/claudiumocanu/flow-pytest-docker-compose-wait-mssql.
I included the three actions that can be executed independently, but you can jump to the last step specifically for what you asked:
1. Connected from Python to the mssql instance launched by the compose-file:
For me it was quite annoying to find and install the appropriate ODBC driver and its dependencies; ODBC Driver 17 for SQL Server worked best for me on Ubuntu 18.
To perform only this step, docker-compose up the docker-compose.yml in my example, then run example-connect.py.
2. Created a function that attempts to connect to the DB with a maxAttemptsNumber and a delay between retries:
Just run example-waitDb.py. You can play with the maxAttempts and delayBetweenAttempts values, then bring the database up at random times to test it.
3. Put everything together in the test_db.py test suite:
- the waitDb function described above
- the same wrapper and annotations that you provided in your example to spin up the resources defined in the compose-file
- a dummy integration test that will not be executed before waitDb returns (if you want to block these tests completely, you can throw instead of returning False from the waitDb function)
PS: Keep using ENVs/vault etc. rather than storing real passwords like I did in this dummy example.

Mongo transaction exception when using the latest Spring Data Mongo reactive

When I tried the transaction feature with Mongo 4 and the latest Spring Data Mongo Reactive, I got a failure like this:
18:57:22.823 [main] ERROR org.mongodb.driver.client - Callback onResult call produced an error
reactor.core.Exceptions$ErrorCallbackNotImplemented: com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
Caused by: com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
at com.mongodb.async.client.MongoClientImpl$1.onResult(MongoClientImpl.java:90)
at com.mongodb.async.client.MongoClientImpl$1.onResult(MongoClientImpl.java:83)
at com.mongodb.async.client.ClientSessionHelper$2.onResult(ClientSessionHelper.java:80)
at com.mongodb.async.client.ClientSessionHelper$2.onResult(ClientSessionHelper.java:73)
at com.mongodb.internal.connection.BaseCluster$ServerSelectionRequest.onResult(BaseCluster.java:433)
at com.mongodb.internal.connection.BaseCluster.handleServerSelectionRequest(BaseCluster.java:297)
at com.mongodb.internal.connection.BaseCluster.selectServerAsync(BaseCluster.java:157)
at com.mongodb.internal.connection.SingleServerCluster.selectServerAsync(SingleServerCluster.java:41)
at com.mongodb.async.client.ClientSessionHelper.createClientSession(ClientSessionHelper.java:68)
at com.mongodb.async.client.MongoClientImpl.startSession(MongoClientImpl.java:83)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl$1.apply(MongoClientImpl.java:153)
at com.mongodb.reactivestreams.client.internal.MongoClientImpl$1.apply(MongoClientImpl.java:150)
at com.mongodb.async.client.SingleResultCallbackSubscription.requestInitialData(SingleResultCallbackSubscription.java:38)
at com.mongodb.async.client.AbstractSubscription.tryRequestInitialData(AbstractSubscription.java:153)
at com.mongodb.async.client.AbstractSubscription.request(AbstractSubscription.java:84)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher$1$1.request(ObservableToPublisher.java:50)
at reactor.core.publisher.MonoNext$NextSubscriber.request(MonoNext.java:102)
at reactor.core.publisher.MonoProcessor.onSubscribe(MonoProcessor.java:399)
at reactor.core.publisher.MonoNext$NextSubscriber.onSubscribe(MonoNext.java:64)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher$1.onSubscribe(ObservableToPublisher.java:39)
at com.mongodb.async.client.SingleResultCallbackSubscription.<init>(SingleResultCallbackSubscription.java:33)
at com.mongodb.async.client.Observables$2.subscribe(Observables.java:76)
at com.mongodb.reactivestreams.client.internal.ObservableToPublisher.subscribe(ObservableToPublisher.java:36)
at reactor.core.publisher.MonoFromPublisher.subscribe(MonoFromPublisher.java:43)
at reactor.core.publisher.Mono.subscribe(Mono.java:3555)
at reactor.core.publisher.MonoProcessor.add(MonoProcessor.java:531)
at reactor.core.publisher.MonoProcessor.subscribe(MonoProcessor.java:444)
at reactor.core.publisher.MonoFlatMapMany.subscribe(MonoFlatMapMany.java:49)
at reactor.core.publisher.Flux.subscribe(Flux.java:7677)
at reactor.core.publisher.Flux.subscribeWith(Flux.java:7841)
at reactor.core.publisher.Flux.subscribe(Flux.java:7670)
at reactor.core.publisher.Flux.subscribe(Flux.java:7634)
at com.example.demo.DataInitializer.init(DataInitializer.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.context.event.ApplicationListenerMethodAdapter.doInvoke(ApplicationListenerMethodAdapter.java:261)
at org.springframework.context.event.ApplicationListenerMethodAdapter.processEvent(ApplicationListenerMethodAdapter.java:180)
at org.springframework.context.event.ApplicationListenerMethodAdapter.onApplicationEvent(ApplicationListenerMethodAdapter.java:142)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:398)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:355)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:884)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:551)
at org.springframework.context.annotation.AnnotationConfigApplicationContext.<init>(AnnotationConfigApplicationContext.java:88)
at com.example.demo.Application.main(Application.java:24)
I used an initialization class to initialize the data:
@Component
@Slf4j
class DataInitializer {

    private final ReactiveMongoOperations mongoTemplate;

    public DataInitializer(ReactiveMongoOperations mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @EventListener(value = ContextRefreshedEvent.class)
    public void init() {
        log.info("start data initialization ...");
        this.mongoTemplate.inTransaction()
            .execute(
                s ->
                    Flux
                        .just("Post one", "Post two")
                        .flatMap(
                            title -> s.insert(Post.builder().title(title).content("content of " + title).build())
                        )
            )
            .subscribe(
                null,
                null,
                () -> log.info("done data initialization...")
            );
    }
}
The subscribe call caused this exception.
The source code is pushed to my GitHub.
I just replaced the content of DataInitializer with the new mongoTemplate.inTransaction().
PS: I used the latest Mongo (4.0.1 at the time) in a Docker container to serve the mongodb service. The Docker console shows:
mongodb_1 | 2018-08-20T15:56:04.434+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
mongodb_1 | 2018-08-20T15:56:04.447+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=12635c1c3d2d
mongodb_1 | 2018-08-20T15:56:04.447+0000 I CONTROL [initandlisten] db version v4.0.1
mongodb_1 | 2018-08-20T15:56:04.448+0000 I CONTROL [initandlisten] git version: 54f1582fc6eb01de4d4c42f26fc133e623f065fb
UPDATE: I then tried to start up the Mongo servers as a Replica Set via a Docker Compose file:
version: "3"
services:
mongo1:
hostname: mongo1
container_name: localmongo1
image: mongo:4.0-xenial
ports:
- "27017:27017"
restart: always
entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
mongo2:
hostname: mongo2
container_name: localmongo2
image: mongo:4.0-xenial
ports:
- "27018:27017"
restart: always
entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
mongo3:
hostname: mongo3
container_name: localmongo3
image: mongo:4.0-xenial
ports:
- "27019:27017"
restart: always
entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
And changed the Mongo URI string to:
mongodb://localhost:27017,localhost:27018,localhost:27019/blog
Then I got failure info like:
11:08:20.845 [main] INFO org.mongodb.driver.cluster - No server chosen by com.mongodb.async.client.ClientSessionHelper$1#796d3c9f from cluster description ClusterDescription{type=UNKNOWN,
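One thing worth noting: starting mongod with --replSet only makes each node eligible to join a replica set; the set itself still has to be initiated once before the driver can choose a server. A minimal sketch, assuming the hostnames and ports from the compose file above, run from a mongo shell against one of the members:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
})

The member host names also have to be resolvable by the client; when connecting from the host machine via localhost:27017,27018,27019, the driver may still fail to reach the members it discovers under mongo1/mongo2/mongo3.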

cannot create JDBC datasource named transactional_DS while implementing Multi-instance in moqui using docker

As the Multi-Tenant functionality has been removed in Moqui Framework 2.0.0, I am trying to implement the same with Docker (multi-instance).
I just created the image using:
$ ./docker-build.sh
Modified moqui-ng-my-compose.yml and ran:
$ ./compose-run.sh moqui-ng-my-compose.yml
This exception occurred:
moqui-server | 08:07:47.864 INFO main .moqui.i.c.TransactionInternalBitronix Initializing DataSource transactional_DS (mysql) with properties: [uri:jdbc:mysql://127.0.0.1:3306/moquitest_20161126?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8, user:root]
moqui-server | 08:07:51.868 ERROR main o.moqui.i.w.MoquiContextListener Error initializing webapp context: bitronix.tm.resource.ResourceConfigurationException: cannot create JDBC datasource named transactional_DS
moqui-server | bitronix.tm.resource.ResourceConfigurationException: cannot create JDBC datasource named transactional_DS
moqui-server | at bitronix.tm.resource.jdbc.PoolingDataSource.init(PoolingDataSource.java:91) ~[btm-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.context.TransactionInternalBitronix.getDataSource(TransactionInternalBitronix.groovy:129) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityDatasourceFactoryImpl.init(EntityDatasourceFactoryImpl.groovy:84) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityFacadeImpl.initAllDatasources(EntityFacadeImpl.groovy:193) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.entity.EntityFacadeImpl.<init>(EntityFacadeImpl.groovy:120) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
moqui-server | at org.moqui.impl.context.ExecutionContextFactoryImpl.<init>(ExecutionContextFactoryImpl.groovy:198) ~[moqui-framework-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
Here is my moqui-ng-my-compose.yml file:
version: "2"
services:
  nginx-proxy:
    # For documentation on SSL and other settings see:
    # https://github.com/jwilder/nginx-proxy
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - 80:80
      # - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      # - /path/to/certs:/etc/nginx/certs
  moqui-server:
    image: moqui
    container_name: moqui-server
    command: conf=conf/MoquiDevConf.xml
    restart: unless-stopped
    links:
      - mysql-moqui
    volumes:
      - ./runtime/conf:/opt/moqui/runtime/conf
      - ./runtime/lib:/opt/moqui/runtime/lib
      - ./runtime/classes:/opt/moqui/runtime/classes
      - ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
      - ./runtime/log:/opt/moqui/runtime/log
      - ./runtime/txlog:/opt/moqui/runtime/txlog
      # this one isn't needed: - ./runtime/db:/opt/moqui/runtime/db
      - ./runtime/elasticsearch:/opt/moqui/runtime/elasticsearch
    environment:
      - entity_ds_db_conf=mysql
      - entity_ds_host=localhost
      - entity_ds_port=3306
      - entity_ds_database=moquitest_20161126
      - entity_ds_user=root
      - entity_ds_password=123456
      # CHANGE ME - note that VIRTUAL_HOST is for nginx-proxy so it picks up this container as one it should reverse proxy
      - VIRTUAL_HOST=app.visvendra.hyd.company.com
      - webapp_http_host=app.visvendra.hyd.company.com
      - webapp_http_port=80
      # - webapp_https_port=443
      # - webapp_https_enabled=true
  mysql-moqui:
    image: mysql:5.7
    container_name: mysql-moqui
    restart: unless-stopped
    # uncomment this to expose the port for use outside other containers
    # ports:
    #   - 3306:3306
    # edit these as needed to map configuration and data storage
    volumes:
      - ./db/mysql/data:/var/lib/mysql
      # - /my/mysql/conf.d:/etc/mysql/conf.d
    environment:
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_DATABASE=moquitest_20161126
      - MYSQL_USER=root
      - MYSQL_PASSWORD=123456
Please let me know if any other information is required.
Thanks in advance!!

What's wrong with this simple file.managed saltstack configuration?

From a fresh salt stack installation on server and client, the goal is to serve a file with a number inside:
SERVER
$vim /etc/salt/master
...
file_roots:
  base:
    - /srv/salt
...
$echo 1 > /srv/salt/tmp/salt.config.version
$cat /srv/salt/top.sls
base:
  '*':
    - tmpversion
$cat /srv/salt/tmpversion/init.sls
/tmp/salt.config.version:
  file.managed:
    - source: salt://tmp/salt.config.version
    - user: root
    - group: root
    - mode: 644
CLIENT (minion)
$vim /etc/salt/minion
...
master: <masterhostnamehere>
...
I'm using salt '*' state.sls tmpversion to apply the configuration. I don't know how to get the changes applied automatically.
Salt doesn't do anything until you tell it to. That means you have to run the salt command on the CLI when you want a state to be applied, or you can use Salt's internal scheduler or your system's cron to run the job regularly.
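For the scheduler option, a minimal sketch would be to add something like this to the minion config (or a pillar) and restart salt-minion; the job name and the 30-minute interval are illustrative, and the state name matches the tmpversion state above:

schedule:
  apply_tmpversion:
    function: state.sls
    args:
      - tmpversion
    minutes: 30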
