SymmetricDS: sync client nodes to each other

I have symmetricDS configured so that there is one master node in the cloud, and then two "store" (client) nodes in remote locations.
If I insert data in the cloud, it is synced to both clients. If I insert data in a client, it is synced to the cloud.
However, data added on client1 never makes it to client2 and data added on client2 never makes it to client1...
Any ideas on this?
Thanks

Yes, you would want a second set of triggers (perhaps prefixed with cloud_*) that has an additional flag turned on: sym_trigger.sync_on_incoming_batch=1. This causes changes arriving as part of replication from clients 1..n to be captured and resent to all other clients.
This can be more efficient than a client-to-client group link solution because usually the clients do not all have network access to sync with each other. So the change syncs to the cloud and is then redistributed to the other clients.
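For reference, here is a minimal sketch of what that second trigger definition could look like in the SymmetricDS configuration tables. The table name (item), channel (default), and router id (cloud_to_store) are placeholders for whatever already exists in your setup:

-- Second trigger on the same table, with sync_on_incoming_batch enabled so
-- rows arriving at the cloud from one client are re-captured there.
INSERT INTO sym_trigger
  (trigger_id, source_table_name, channel_id,
   sync_on_incoming_batch, last_update_time, create_time)
VALUES
  ('cloud_item', 'item', 'default', 1, current_timestamp, current_timestamp);

-- Link the new trigger to the router that sends data from the cloud node
-- group back out to the store node group.
INSERT INTO sym_trigger_router
  (trigger_id, router_id, initial_load_order, last_update_time, create_time)
VALUES
  ('cloud_item', 'cloud_to_store', 100, current_timestamp, current_timestamp);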

Related

How does the gocql client pick the right tablet server for any query?

The YugabyteDB cluster has 2 regions, 3 AZs, and 6 nodes:
4 nodes in the central region,
2 nodes in the east region.
Node1 (Master, TServer) US-east
Node2 (Master, TServer) US-central-1
Node3 (TServer) US-central-1
Node4 (TServer) US-east
Node5 (Master, TServer) US-central-2 (Leader)
Node6 (TServer) US-central-2
The application is running in the central region.
The application uses the YCQL driver (yugabyte gocql client), which is currently configured to send queries to Node2 only.
As mentioned here:
In many cases, this forwarding will be purely local, because both CQL and Redis Cluster clients are capable of sending requests to the right server and avoiding an additional network hop.
Is the above statement about the CQL client referring to the yugabyte gocql client? Here it mentions: "The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC)."
How can a client driver know which tablet server to send the request to?
If yes, does configuring the YCQL driver with connections to all 4 nodes (in the central region) make the client driver capable of knowing the correct tablet server to send the query to, improving the query response time?
If yes, how does the YCQL driver know which is the right tablet server to send the query request (INSERT/UPDATE/SELECT) to?
How can a client driver know which tablet server to send the request to?
The driver periodically queries this table:
ycqlsh:system> select * from system.partitions;
There it finds how tables are split and where the tablets are located.
When you send a query, you pass the partition keys in a way the driver understands, so it can hash them and know where to send the request.
If yes, does configuring the YCQL driver with connections to all 4 nodes (in the central region) make the client driver capable of knowing the correct tablet server to send the query to, improving the query response time?
Yes. This should be combined with DC-aware query routing: https://pkg.go.dev/github.com/gocql/gocql#hdr-Data_center_awareness_and_query_routing
If yes, how does the YCQL driver know which is the right tablet server to send the query request (INSERT/UPDATE/SELECT) to?
Using the same logic as above: it knows the tablet locations across the whole cluster.
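As a concrete illustration of that routing (the keyspace, table, and column names below are made up), the decision is per partition key. Once the driver has prepared a statement and the partition-key value is bound as a parameter, it hashes that value, matches it against the tablet map it read from system.partitions, and sends the request straight to a TServer that hosts a replica of that tablet:

-- Hypothetical YCQL table; customer_id is the partition key.
CREATE TABLE IF NOT EXISTS store.orders (
  customer_id INT,
  order_id    INT,
  total       DECIMAL,
  PRIMARY KEY ((customer_id), order_id)
);

-- When the driver prepares this and customer_id is bound as a parameter,
-- it can hash the value and pick the right tablet server itself, instead
-- of letting an arbitrary coordinator forward the request.
SELECT order_id, total FROM store.orders WHERE customer_id = ?;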

Finding out sources of connections to MongoDB cluster

The "Real Time Metrics" panel of my MongoDB Atlas cluster, shows 36 connections, even though I terminated all server apps that were supposed to be connected to it. Currently nothing should be connected to it, but I still see those 36 connections. I tried pausing the cluster and then resuming it - the connections came back. Is there any way for me to find out where are they coming from? OR, terminating all connections.
Each connection is supposed to provide what is called "app metadata". This should always include:
The driver identifier (e.g. pymongo 1.2.3)
The platform of the client (e.g. linux amd64)
Additionally, you can provide your own information to be sent as part of the client metadata, which you can use to identify your application. See e.g. the :app_name option at https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/.
Atlas has internal processes that connect to cluster nodes, and the cluster nodes communicate with each other as well. All of these add to the connection count seen on each node.
To figure out where connections are coming from:
Read the server logs (which you have to download first) to obtain the client metadata sent with each connection.
Hopefully this will provide enough clues to identify cluster-to-cluster connections. You should also be able to tell those by source IPs, which you should be able to dig out of the cluster configuration.
Atlas's internal connections should be using either the Go or Java drivers; if you don't use those in your own applications, this is an easy way of telling them apart.
Add an app name to all of your application connections to eliminate them from the unknown ones.
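One common way to do that is the appName connection-string option (most drivers also expose an equivalent client option, like the Ruby driver's :app_name above); the value then shows up in the client metadata recorded in the server logs. The host, credentials, and name below are placeholders:

mongodb+srv://user:password@cluster0.example.mongodb.net/?appName=inventory-api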
There is no facility provided by the MongoDB server to terminate connections from clients. You can kill operations and sessions, but the connections used for those operations remain until the clients close them. When clients close connections depends on the particular driver and its connection pool settings; see e.g. https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/#connection-pooling.

Block all connections to a DB2 database

I'm working on logs. I want to reproduce a log in which the application fails to connect to the server.
Currently the commands I'm using are
db2 force applications all
This closes all the connections and then one by one I deactivate each database using
db2 deactivate db "database_name"
What happens is that it temporarily blocks the connections, and after a minute my application is able to create connections again, so I am not able to reproduce the log. Any ideas how I can do this?
What you are looking for is QUIESCE.
By default, users can connect to a database. It becomes active and its internal in-memory data structures are initialized. When the last connection closes, the database becomes inactive. Activating a database initializes those structures explicitly and leaves it "ready to use".
Quiescing a database puts it into an administrative state: regular users cannot connect. You can quiesce a single database or the entire instance. See the docs for options to manage access to quiesced instances. The following forces all users off the current database and keeps them away:
db2 quiesce db immediate
If you want to produce a connection error for an app, there are other options. Have you ever tried connecting to a non-existing port that Db2 is not listening on? Or revoking the CONNECT privilege from the user trying to connect?
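For example, assuming a database named MYDB and a user appuser (both placeholders), the quiesce cycle from the command line looks like this:

db2 connect to MYDB
db2 quiesce database immediate force connections

At this point regular users cannot connect, so you can reproduce the connection-failure log from the application. When you are done, release the database again:

db2 unquiesce database
db2 connect reset

The revoke alternative mentioned above blocks only a single account instead of the whole database:

db2 "revoke connect on database from user appuser"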
There are several testing strategies that can be used; they involve disrupting the network connection between client and server:
Alter the IP routing table on the client to route the DB2 server address to a non-existent subnet.
Run the connection through proxy software that can be turned off; there is a purpose-built proxy, Toxiproxy, designed for testing network disruptions.
Pull the Ethernet cable from the client machine, observe, then plug it back in (I've done this).
This has the advantage of not disabling the DB2 server for other testing in progress.

How to make a client PC work independently from the server

(Sorry, I don't know exactly whether I should ask this question here on Stack Overflow or on another related site. Please move it if it isn't appropriate.)
There are some unrelated groups of students, and the members of each group produce data together. Each member uses his credentials to log in to the client desktop application and send data to the server. Other group members should see the new data when they log in with their credentials.
The problem comes in when I want the client to keep working even when there is an error connecting to the server. I don't want to stop the users if they can't connect to the server; they should be able to create data and send it to the server later.
Here is the problem: without connecting to the server, how can I manage memberships in the client and be sure a user really belongs to the group, and how can I know another member has new data on the local machine so his colleagues can see it?
I don't want to use another local server, just the remote server and a local machine with a database.
If the server is unavailable, the only way to know if a person is in a group is if you have that information stored on the client. Of course, someone could be removed from a group and the (disconnected) client does not know it.
Solving this would depend on how your system is used.
If membership does not change very often, you could use the client-saved membership as long as it is not too old (e.g. use it if it is less than 4 hours old, or some such rule); see the sketch after this list.
Whenever the client re-connects to the server, it should ensure that its local data is refreshed if it is older than N hours.
In addition, when the connection is re-established and the data is sent to the server, the server should check once more; if the user has been removed from a group, the server can reject the data.
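A minimal sketch of that client-saved membership, assuming the client keeps a small local database (table and column names are hypothetical, and the interval syntax varies by engine):

-- Local cache of group membership, stamped with the time it was last
-- refreshed from the server.
CREATE TABLE cached_membership (
  username     VARCHAR(100) NOT NULL,
  group_id     VARCHAR(100) NOT NULL,
  refreshed_at TIMESTAMP    NOT NULL,
  PRIMARY KEY (username, group_id)
);

-- While offline, trust the cached membership only if it was refreshed in
-- the last 4 hours; otherwise require a successful re-sync with the server.
SELECT group_id
FROM cached_membership
WHERE username = ?
  AND refreshed_at > CURRENT_TIMESTAMP - INTERVAL '4' HOUR;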

SQL Server Event Notifications & Service Broker - minimum req'd for multiple servers?

I'm trying to figure out the easiest way to send SQL Server Event Notifications to a separate server using service broker. I've built an endpoint on each server, a queue on each server, working on Dialogs and Contracts and activation... but do I need any of that?
CREATE EVENT NOTIFICATION says it can send the notification XML to a "target service" - so could I just create a contract on the "sending" server that points to a queue on a "receiving server", and use activation there?
Or do I need to have it send to a local queue and then forward on to the receiving server's queue? Thanks!
You can target the remote service, but you have to have ROUTEs defined for bidirectional communication so that you get the acknowledgement message back. I once had a script for creating a centralized processing server for all Event Notifications, where the other servers targeted its service. If I can find it I'll post it on my blog and update this with a link.
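A hedged sketch of the pieces involved, with placeholder server names, service names, and broker GUIDs (the endpoints and the receiving queue/service from the question are assumed to exist already; check sys.routes on each side to confirm where the routes need to live):

-- On the sending server: a route to the central server's broker so the
-- notification can reach the target service.
CREATE ROUTE CentralENRoute
WITH SERVICE_NAME = 'CentralEventNotificationService',
     BROKER_INSTANCE = '00000000-0000-0000-0000-000000000001', -- placeholder: GUID of the central database
     ADDRESS = 'TCP://centralserver.example.com:4022';

-- On the sending server: the event notification targets the remote service directly.
CREATE EVENT NOTIFICATION CaptureDDL
ON SERVER
FOR DDL_EVENTS
TO SERVICE 'CentralEventNotificationService', '00000000-0000-0000-0000-000000000001';

-- On the central (receiving) server: a route back to the sending server so
-- the acknowledgement messages can be delivered. With no SERVICE_NAME it
-- matches any service; narrow it per sender in a real setup.
CREATE ROUTE SenderAckRoute
WITH ADDRESS = 'TCP://sendingserver.example.com:4022';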
