I am trying out a VoltDB cluster: I created a cluster of 2 nodes with k=1.
Cluster initialization was successful and both nodes are up.
Now, how do I connect to this cluster? I could not find any documentation on setting up a single IP for the cluster.
Will the client connect to a particular node's IP or to a cluster IP?
I am using VoltDB Community Edition.
In general, you can connect to one node or to multiple nodes. For simple usage, one node is fine. For a client application where you want lower latency and higher throughput, you should connect to all of the nodes in the cluster. See Connecting to the VoltDB Database for the Java client, and in particular section 6.1.2 on using the auto-connecting client, which lets you specify only one node; the client then automatically connects to all of the other nodes.
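As a minimal sketch (class and method names as documented for the VoltDB Java client; the seed IP is a placeholder), the auto-connecting client looks like this:

import org.voltdb.client.Client;
import org.voltdb.client.ClientConfig;
import org.voltdb.client.ClientFactory;

public class ConnectToCluster {
    public static void main(String[] args) throws Exception {
        ClientConfig config = new ClientConfig();
        // Let the client discover the rest of the cluster and track
        // topology changes (the auto-connecting client from 6.1.2).
        config.setTopologyChangeAware(true);
        Client client = ClientFactory.createClient(config);
        client.createConnection("192.168.1.5"); // one seed node is enough
        // ... call procedures via client.callProcedure(...) as usual ...
        client.close();
    }
}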
For command-line access, see the sqlcmd reference:
--servers=server-id[,...]
Specifies the network address of one or more nodes in the database cluster. By default, sqlcmd attempts to connect to a database on localhost.
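For example, with the two node IPs from this question:

sqlcmd --servers=192.168.1.5,192.168.1.6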
Disclosure: I work at VoltDB.
If you wish to connect to a single node, try
jdbc:voltdb://192.168.1.5:<port>
as the connection URL; if you wish to connect to the cluster, try
jdbc:voltdb://192.168.1.5:<port>,192.168.1.6:<port>,<any additional nodes you might have in your cluster>
as the connection URL.
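A short sketch of using that URL from Java (the driver class is the documented org.voltdb.jdbc.Driver; 21212 is the default client port, and the query and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.voltdb.jdbc.Driver");
        String url = "jdbc:voltdb://192.168.1.5:21212,192.168.1.6:21212";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM my_table")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}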
A yugabyte cluster has 2 regions, 3 AZs, and a 6-node architecture:
4 nodes in the central region,
2 nodes in the east region.
Node1 (Master, TServer) US-east
Node2 (Master, TServer) US-central-1
Node3 (TServer) US-central-1
Node4 (TServer) US-east
Node5 (Master, TServer) US-central-2 (Leader)
Node6 (TServer) US-central-2
The application is running in the central region.
The application uses the YCQL driver (the yugabyte gocql client), currently configured to send queries to Node2 only.
As mentioned here:
In many cases, this forwarding will be purely local, because both CQL and Redis Cluster clients are capable of sending requests to the right server and avoiding an additional network hop.
Is the above statement about the CQL client referring to the yugabyte gocql client? Here it mentions: "The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC)."
How can a client driver know which tablet server to send the request to?
If yes, does configuring the YCQL driver with connections to all 4 nodes (in the central region) make the client driver capable of knowing the correct tablet server to send a query to, improving the query response time?
If yes, how does the YCQL driver know which is the right tablet server to send a query request (INSERT/UPDATE/SELECT) to?
How can a client driver know which tablet server to send the request to?
The driver periodically queries this table:
ycqlsh:system> select * from system.partitions;
There it finds how the tables are split and where the tablets are located.
When you send a query, you pass the partition keys in a form the driver understands; it hashes them and knows where to send the request.
If yes, does configuring the YCQL driver with connections to all 4 nodes (in the central region) make the client driver capable of knowing the correct tablet server to send a query to, improving the query response time?
Yes. This should be combined with DC-aware query routing: https://pkg.go.dev/github.com/gocql/gocql#hdr-Data_center_awareness_and_query_routing
If yes, how does the YCQL driver know which is the right tablet server to send a query request (INSERT/UPDATE/SELECT) to?
Using the same logic as above: it knows the tablet locations across the whole cluster.
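The question uses the Go gocql client; purely as an illustration, here is the analogous configuration sketched in Java with a DataStax-compatible CQL driver, where token-aware routing is on by default and only the local datacenter needs to be named (the contact point, port, and DC name are placeholders):

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

public class DcAwareSession {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("node2.example.com", 9042))
                .withLocalDatacenter("us-central-1") // prefer replicas in the local DC
                .build()) {
            session.execute("SELECT now() FROM system.local");
        }
    }
}

In gocql itself, per the link above, the equivalent is wrapping DCAwareRoundRobinPolicy in TokenAwareHostPolicy as the session's host selection policy.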
The "Real Time Metrics" panel of my MongoDB Atlas cluster, shows 36 connections, even though I terminated all server apps that were supposed to be connected to it. Currently nothing should be connected to it, but I still see those 36 connections. I tried pausing the cluster and then resuming it - the connections came back. Is there any way for me to find out where are they coming from? OR, terminating all connections.
Each connection is supposed to provide what is called "app metadata". This is supposed to always include:
The driver identifier (e.g. pymongo 1.2.3)
The platform of the client (e.g. linux amd64)
Additionally, you can provide your own information to be sent as part of the client metadata, which you can use to identify your application. See e.g. the :app_name option at https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/.
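The linked page covers the Ruby driver; a comparable sketch with the MongoDB Java driver (the connection string and the app name are placeholders) sets the application name that is then sent as part of the client metadata:

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class NamedClient {
    public static void main(String[] args) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(
                        "mongodb+srv://user:pass@cluster0.example.mongodb.net"))
                .applicationName("inventory-service") // shows up in the server logs
                .build();
        try (MongoClient client = MongoClients.create(settings)) {
            client.listDatabaseNames().first(); // force a connection
        }
    }
}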
Atlas has internal processes that connect to cluster nodes, and the cluster nodes communicate with each other as well. All of these add to the connection count seen on each node.
To figure out where connections are coming from:
Read the server logs (which you have to download first) to obtain the client metadata sent with each connection.
Hopefully this will provide enough clues to identify cluster-to-cluster connections. You should also be able to tell those by their source IPs, which you should be able to dig out of the cluster configuration.
Atlas connections should be using either the Go or the Java driver; if you don't use those in your own applications, this is an easy way of telling them apart.
Add an app name to all of your application connections to eliminate those from the unknown ones.
There is no facility provided by the MongoDB server for terminating client connections. You can kill operations and sessions, but the connections used for those operations remain until the clients close them. When clients close connections depends on the particular driver and its connection pool settings; see e.g. https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/#connection-pooling.
Can somebody answer the following questions? I have 2 Azure VMs with an HAG set up but no HAG listener. The reason is that I'm confused about where those static IPs are supposed to come from and whether they are needed in the first place.
Questions:
Why do I need an HAG listener at all if I can just use the IP address of each host to connect to SQL?
If I add additional IP addresses, are those supposed to be manually added to the TCP/IP properties of the adapter, or will WSFC take care of that during failover?
What is the difference between using the HAG DNS name and just using a database-mirroring-type (Data Source/Failover Partner) connection string? They seem to do the same thing, i.e. provide alternative IPs where the service is being hosted.
Does WSFC need to have a "Server Name" under core cluster resources? What is the point of that name in terms of HAG functionality? Can I just delete it?
Why do I need an HAG listener at all if I can just use the IP address of each host to connect to SQL?
Answer:
The listener is part of the cluster resources. Connections first go to the listener and, depending on the settings, are relayed to a particular replica. Of course, you can connect to each replica directly by its instance name or IP. However, the listener is what provides HA: if your primary replica fails over to a secondary replica, the listener automatically points to the new primary.
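For example (placeholder names), clients point at the listener name instead of at either node, e.g. with SQL Server's sqlcmd utility:

sqlcmd -S MyAgListener,1433 -d MyDatabase -E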
If I add additional IP addresses, are those supposed to be manually added to the TCP/IP properties of the adapter, or will WSFC take care of that during failover?
Answer:
I assume you are asking about additional IPs for the listener. I noticed you have replicas in multiple subnets, so your listener has to have two IPs, one for each subnet. These settings cannot be manually added to the adapter's TCP/IP properties; you have to configure them while creating the listener.
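As an illustrative sketch in T-SQL (the AG name, listener name, IPs, and subnet masks are placeholders), a multi-subnet listener is created with one IP per subnet:

ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'MyAgListener' (
    WITH IP ((N'10.0.1.50', N'255.255.255.0'),
             (N'10.0.2.50', N'255.255.255.0')),
    PORT = 1433);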
What is the difference between using the HAG DNS name and just using a database-mirroring-type (Data Source/Failover Partner) connection string? They seem to do the same thing, i.e. provide alternative IPs where the service is being hosted.
Answer:
Mirroring is at the single-database level.
An AG is for a group of databases.
Both use endpoints to communicate between the replicas.
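For illustration (server, listener, and database names are placeholders), the two connection-string styles look like this; the mirroring style names both partners explicitly, while the AG style names only the listener:

Data Source=NodeA;Failover Partner=NodeB;Initial Catalog=MyDb
Data Source=MyAgListener,1433;Initial Catalog=MyDb;MultiSubnetFailover=True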
Does WSFC need to have a "Server Name" under core cluster resources?
What is the point of that name in terms of HAG functionality? Can I just delete it?
Answer:
WSFC is the foundation of an AG; you need to create the WSFC first. It has its own name, IP, and other properties. No, you cannot delete it.
I'm working on logs. I want to reproduce a log in which the application fails to connect to the server.
Currently the commands I'm using are
db2 force applications all
This closes all the connections, and then I deactivate each database one by one using
db2 deactivate db "database_name"
What happens is that it temporarily blocks the connections, and after a minute my application is able to create connections again, so I am not able to regenerate the log. Any ideas how I can do this?
What you are looking for is QUIESCE.
By default, users can connect to a database. It becomes active and its internal in-memory data structures are initialized. When the last connection closes, the database becomes inactive. Activating a database initializes those structures and leaves the database "ready to use".
Quiescing a database puts it into an administrative state in which regular users cannot connect. You can quiesce a single database or the entire instance; see the docs for options to manage access to quiesced instances. The following forces all users off the current database and keeps them away:
db2 quiesce db immediate
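Once you have captured the log entries you need, access can be restored with the corresponding command:

db2 unquiesce db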
If you want to produce a connection error for an app, there are other options. Have you ever tried connecting to a non-existing port that Db2 is not listening on? Or revoking the CONNECT privilege from the user trying to connect?
There are several testing strategies that can be used, they involve disrupting the network connection between client and server:
Alter the IP routing table on the client to route the DB2 server address to a non-existent subnet
Route the connection through proxy software that can be turned off; there is a special proxy, Toxiproxy, which was designed for the purpose of testing network disruptions (see the sketch after this list)
Pull the Ethernet cable from the client machine, observe, then plug it back in (I've done this)
This has the advantage of not disabling the DB2 server for other testing in progress.
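A minimal Toxiproxy sketch (flags as in the Toxiproxy CLI README; host names and ports are placeholders): create a proxy in front of the Db2 port, point the client at the proxy's port, then toggle the proxy off to sever connections:

toxiproxy-cli create -l localhost:51000 -u db2server:50000 db2_proxy
toxiproxy-cli toggle db2_proxy

With the proxy disabled, the client's connection attempts fail, which should produce the connection-failure log entries you are after.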
We have a two-node availability group, the nodes being SQL cluster1 - node1 and SQL cluster2 - node2, plus an availability group listener. The Java application connects to this listener, and everything works fine initially, i.e. the application can perform both reads and writes on the database, until we do a failover.
The connection string is driverURL=jdbc:jtds:sqlserver://[Listener DNS Name]:[Port]/[Database]
Say initially node1 was the primary and node2 was the secondary.
After failover, node1 becomes the secondary and node2 becomes the primary. Now the application is still able to connect to the database, but it can only perform reads. The application throws exceptions (as mentioned in the title) if we try to do inserts on that DB.
Basically, what I need is for the application to be able to perform reads and writes all the time, irrespective of which node is the primary. Any ideas?
There should be no reason why you get a read-only database when the connection string points to the listener. That's the point of the availability group listener: to direct traffic to the read/write (primary) replica. Ping the DNS name and check that it resolves to the listener (before and after an AG failover). Unfortunately I don't use Java so can't help you any further. Cheers, Mark.
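One Java-side detail worth checking (an assumption about your setup, not something confirmed above): as far as I know, the old jTDS driver has no equivalent of the multiSubnetFailover option, whereas Microsoft's JDBC driver does. A minimal sketch with the Microsoft driver (listener name, port, database, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class ListenerConnect {
    public static void main(String[] args) throws Exception {
        // multiSubnetFailover=true makes the driver try all IPs behind the
        // listener name in parallel, so it reaches the new primary promptly
        // after a failover instead of lingering on a stale, now read-only node.
        String url = "jdbc:sqlserver://MyAgListener:1433;"
                + "databaseName=MyDatabase;multiSubnetFailover=true";
        try (Connection conn = DriverManager.getConnection(url, "appUser", "appPassword")) {
            System.out.println("Connected via listener");
        }
    }
}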