Does the master database need to be on the same host where SymmetricDS runs?

This is the configuration of the master node:
engine.name=master
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://192.168.1.55:3306/master-db?useSSL=false
db.user=root
db.password=password
registration.url=
sync.url=http://192.168.1.55:31415/sync/master-db
group.id=master
external.id=0
# Don't muddy the waters with purge logging
job.purge.period.time.ms=7200000
# This is how often the routing job will be run in milliseconds
job.routing.period.time.ms=5000
# This is how often the push job will be run.
job.push.period.time.ms=5000
# This is how often the pull job will be run.
job.pull.period.time.ms=5000
# Kick off initial load
initial.load.create.first=true
This is the configuration of the child node:
engine.name=italian-restaurant
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://192.168.1.5:3306/italian_restaurant_db?useSSL=false
db.user=root
db.password=password
registration.url=
sync.url=http://192.168.1.55:31415/sync/child-db
group.id=restaurants
external.id=1
# Don't muddy the waters with purge logging
job.purge.period.time.ms=7200000
# This is how often the routing job will be run in milliseconds
job.routing.period.time.ms=5000
# This is how often the push job will be run.
job.push.period.time.ms=5000
# This is how often the pull job will be run.
job.pull.period.time.ms=5000
# Kick off initial load
initial.load.create.first=true
All of this works fine. However, if I change the host IP of the master DB in the master properties to another IP (because I have the database in the cloud), the connection to the master DB in the cloud still works fine: all SymmetricDS tables are created and the default configuration is loaded. But the registration of nodes does not work.
It throws the warning "Registration was not open".
This only happens when the master database is not on the same host where SymmetricDS runs.
Thanks, I look forward to your answers.

There is no requirement for SymmetricDS to be on the same host as the database. I would have expected your scenario to work exactly the same as with the local database.
In master.properties, did you only change the IP address in db.url?
On a side note, it is usually a good idea to have your SymmetricDS instance on the same network with good bandwidth to your database for optimal performance (as JDBC can be chatty).
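If only db.url changed, one thing worth double-checking is the child node's registration.url, which is blank in the configuration above: a child node registers against the master's sync endpoint, and registration has to be open on the master for that group and external id. As a hedged sketch, reusing the values from the question:

# in the child node's engine properties, point registration at the master
registration.url=http://192.168.1.55:31415/sync/master-db

# on the master host, open registration for the child node
bin/symadmin --engine master open-registration restaurants 1

If registration is not open (and auto registration is not enabled), a registration attempt is answered with exactly the "Registration was not open" warning you are seeing.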

Related

Active Directory replication between multiple controllers fails

I am coming to the forum because I have a big problem with the replication of my domain controllers.
Let me explain the situation:
Context:
I have 2 local sites connected with IPSec, let's call them site A and site B.
In each site I have two domain controllers: DC1 and DC2 in site A, and DC3 and DC4 in site B.
The four controllers replicate with each other, both inter-site and intra-site.
The two DCs of site A are virtualized with Hyper-V.
The two DCs of site B are physical.
Normally, DC1 is the master DC.
Problem:
I ran a domain configuration audit script on DC1 that was supposed to run in audit mode but unfortunately made big changes to the domain. Essentially, the script applied the best practices of all the CIS checkpoints (which in itself is fine), but it impacted the company's business, because all the DCs synced with DC1, which pushed the changes automatically to the other DCs.
Fortunately, we have a very recent backup (snapshot) of the Hyper-V VM that we used to restore DC1. However, when we start the restored DC1 VM, the other DCs (2, 3, 4), which still have the bad changes, replicate them back to DC1 automatically (within about 15 seconds), so we can't restore our domain controllers from the DC1 snapshot.
To work around this, we disabled automatic inbound and outbound replication on DC2, DC3, and DC4 (repadmin /options DCx +DISABLE_INBOUND_REPL and repadmin /options DCx +DISABLE_OUTBOUND_REPL), then restored the snapshot of the DC1 VM and started DC1. This works perfectly: DC1 keeps the good modifications (the old ones, from before the script execution). We now want to apply DC1's settings to all the DCs to get a homogeneous domain, so we force replication from DC1 to the other DCs with the command: Repadmin /syncall DC1 /APed.
This propagated the good configuration of DC1 to the other DCs, so that part worked.
However, when we re-enable automatic inbound and outbound replication on the DCs (repadmin /options DCx -DISABLE_INBOUND_REPL and repadmin /options DCx -DISABLE_OUTBOUND_REPL), the bad modifications unfortunately reappear and propagate to all the DCs almost immediately. The full command sequence is summarized below.
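For reference, here is the sequence described above, consolidated (DCx stands for each of DC2, DC3, and DC4, so each repadmin /options line is run once per DC):

rem on DC2, DC3, and DC4: stop replication in both directions
repadmin /options DCx +DISABLE_INBOUND_REPL
repadmin /options DCx +DISABLE_OUTBOUND_REPL
rem restore the DC1 snapshot, start the VM, then push DC1's state out
repadmin /syncall DC1 /APed
rem re-enable replication on DC2, DC3, and DC4
repadmin /options DCx -DISABLE_INBOUND_REPL
repadmin /options DCx -DISABLE_OUTBOUND_REPL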
How is this possible, given that at a given time "T" all four domain controllers had the old, good configuration (from before the script was executed)?
Where do the DCs go to get the wrong (post-script) configuration?
How do we keep the right configuration on all the DCs once we re-enable inbound and outbound replication?
Thank you in advance for your answers; the situation is very critical.

How can I get alerted if the master's GTID differs from the slave's?

MaxScale distributes the requests to the MariaDB master/slave servers on which the database is located.
What I need is a script, running as a cron job or something similar, that verifies the GTIDs of the master and the slaves. If a slave's GTID differs from the master's GTID, I want to be informed/alerted via email.
Unfortunately, I have no idea whether this is possible and how to do it.
You can enable gtid_strict_mode to automatically stop replication if GTIDs from the same domain conflict with what is already in the binlogs. If you are using MaxScale, it will automatically detect this and stop using that server.
Note that this will not prevent transactions from other GTID domains from causing problems with your data. This just means you'll have to pay some attention if you're using multi-domain replication.
If you want to be notified of this, you can use the script option in MaxScale to trigger a custom script to be launched whenever the server stops replicating.
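As an illustration of the cron approach, here is a minimal shell sketch (the host names, monitoring credentials, and alert address are assumptions, and a transient difference can simply mean replication lag rather than a conflict):

#!/bin/sh
# Hypothetical hosts and credentials; adjust for your environment.
MASTER_HOST=db-master.example.com
SLAVE_HOST=db-slave1.example.com
ALERT_MAIL=dba@example.com

# MariaDB exposes the current GTID state in these system variables.
master_gtid=$(mysql -h "$MASTER_HOST" -u monitor -p'secret' -N -B \
    -e "SELECT @@gtid_current_pos;")
slave_gtid=$(mysql -h "$SLAVE_HOST" -u monitor -p'secret' -N -B \
    -e "SELECT @@gtid_slave_pos;")

if [ "$master_gtid" != "$slave_gtid" ]; then
    printf 'master=%s\nslave=%s\n' "$master_gtid" "$slave_gtid" |
        mail -s "GTID mismatch on $SLAVE_HOST" "$ALERT_MAIL"
fi

Run from cron, this polls once per interval; the MaxScale script option mentioned above is the event-driven alternative and fires only when the monitor actually detects a replication failure.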

How to Configure Keepalived so That When the Master Is Available Again After Failing, It Will Not Route Back to the Master

How can I configure Keepalived so that, when the master becomes available again after failing, traffic keeps routing to the backup until the backup itself goes down? My current configuration always routes to the master when it is available: when the master goes down, traffic moves to the backup, and when the master comes back, it moves back to the master. I don't want that. I want traffic to move only when the currently active server goes down, not always back to the master, so that in effect there are two masters. Is this possible?
The servers run Ubuntu 18.04.
You can do this by defining state MASTER on both nodes and setting priority to the same value on both.
You can use check scripts to decrease the priority when you detect a failure.
When the failure condition is resolved, both nodes will again have the same priority, but the one that is currently master will continue being master.
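A minimal keepalived.conf sketch of that setup, identical on both nodes (the interface name, virtual IP, and check script path are assumptions for illustration):

vrrp_script chk_service {
    script "/usr/local/bin/check_service.sh"   # hypothetical health check; exits non-zero on failure
    interval 2
    weight -20        # subtract 20 from priority while the check fails
}

vrrp_instance VI_1 {
    state MASTER      # MASTER on both nodes
    interface eth0
    virtual_router_id 51
    priority 100      # same value on both nodes
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_service
    }
}

With equal priorities, VRRP keeps the current master once its priority recovers, which gives the no-failback behavior asked for; note that nopreempt, the other common approach, only works with state BACKUP.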

MongoDB replica set without restarting the database

I have a MongoDB database running on one server. This is its configuration file:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongo.pem

#processManagement:

#security:
security:
  authorization: enabled

#operationProfiling:

#replication:

#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:

setParameter:
  failIndexKeyTooLong: false
I have created a service that launches MongoDB each time the server starts or whenever the database goes down.
This configuration is working so far.
Now I have cloned this server to another one. The configuration is identical except for the server IP and the server domain.
This new server is working too, but I would like to connect both databases so that the new database stays synchronized with the first one, as in a master-slave configuration.
I think this is the typical case for a MongoDB replica set with 2 databases. But I'm not very expert with databases, and after reading lots of documents I still don't understand very well how to do it.
For example, it seems that all options require turning off the master database before starting the synchronization, but in my case the master database is in a production environment, so I would like to avoid this. Is there any option to configure the replica set without having to restart the master mongoDB instance?
I've checked the reference for the replication options in the configuration file too, but I don't know how to use them.
In conclusion, is there any tutorial about how to create a replica set with 2 MongoDB databases, and if possible without having to restart the master (production) database?
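For reference, the replication options mentioned above would typically look like this in mongod.conf (the set name rs0 and the host names are placeholders; note that enabling them does require a mongod restart, which is exactly the constraint the question asks about):

replication:
  replSetName: rs0

After both mongod instances run with that setting, the set is initiated once from the mongo shell on the existing server, and the new member is added:

rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "server1.example.com:27017" } ] })
rs.add("server2.example.com:27017")

The second member is usually added with an empty dbPath so that it performs a clean initial sync from the primary rather than starting from the cloned data.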

Block all connections to a Db2 database

I'm working on logs. I want to reproduce a log in which the application fails to connect to the server.
Currently the commands I'm using are
db2 force applications all
This closes all the connections, and then I deactivate each database one by one using
db2 deactivate db "database_name"
What happens is that it blocks the connections only temporarily: after a minute my application is able to create the connection again, so I am not able to regenerate the log. Any ideas how I can do this?
What you are looking for is QUIESCE.
By default, users can connect to a database. It becomes active and its internal in-memory data structures are initialized. When the last connection closes, the database becomes inactive. Activating a database initializes those structures and keeps the database "ready to use".
Quiescing a database puts it into an administrative state in which regular users cannot connect. You can quiesce a single database or the entire instance. See the docs for options to manage access to quiesced instances. The following forces all users off the current database and keeps them off:
db2 quiesce db immediate
If you want to produce a connection error for an app, there are other options. Have you ever tried to connect to a non-existing port that Db2 is not listening on? Or revoke the CONNECT privilege from the user trying to connect.
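To illustrate the quiesce workflow end to end, a minimal sketch (SAMPLE is a placeholder database name):

db2 connect to SAMPLE
db2 quiesce database immediate force connections
# regular user connections are now rejected until the database is unquiesced
db2 unquiesce database
db2 connect reset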
There are several testing strategies that can be used; they involve disrupting the network connection between the client and the server:
Alter the IP routing table on the client to route the DB2 server address to a non-existent subnet
Route the connection through proxy software that can be turned off; ToxiProxy is a proxy designed specifically for testing network disruptions
Pull the Ethernet cable from the client machine, observe, then plug it back in (I've done this)
These approaches have the advantage of not disabling the DB2 server for other testing in progress.
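As a sketch of the ToxiProxy option (the host name and the classic Db2 port 50000 are assumptions), the proxy sits between the client and the server, and toggling it simulates the outage:

toxiproxy-cli create -l 0.0.0.0:50000 -u db2-server.example.com:50000 db2_proxy
# point the application at the proxy's host:port instead of the real server,
# then toggle the proxy off and on to break and restore connectivity
toxiproxy-cli toggle db2_proxy
toxiproxy-cli toggle db2_proxy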
