SAP HANA: Connect to SYSTEMDB on worker/slave node

I have a 2-node SAP HANA scale-out setup. I'd like to connect to database SYSTEMDB on the worker node (Node2) using hdbsql. However, I get the error below when connecting on port 3xx13:
* -10709: Connection failed (RTE:[89006] System call 'connect' failed, rc=111:Connection refused (localhost:30013))
On the master node (Node1), the output of the SQL query "select * from sys_databases.m_services" does not show a row for SYSTEMDB on Node2.
I am able to access the other tenant databases that are distributed across the 2 nodes using the SQL_PORT values from that query.
Can someone help me with this?

You cannot connect to systemdb on worker nodes with SQL. Only the master node offers an SQL interface and manages all other nodes.
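For what it's worth, you can confirm this from the master node by listing which services actually expose an SQL port. Below is a minimal JDBC sketch; the host name node1, instance number 00 (hence SYSTEMDB port 30013), the SYSTEM user, and having ngdbc.jar on the classpath are assumptions about your setup, not facts from your question.

    // Sketch: list the SQL endpoints of all databases from the master node's SYSTEMDB.
    // node1 / 30013 / SYSTEM are placeholders; the password is read from args[0].
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ListHanaSqlPorts {
        public static void main(String[] args) throws Exception {
            Class.forName("com.sap.db.jdbc.Driver"); // HANA JDBC driver (ngdbc.jar)
            String url = "jdbc:sap://node1:30013/?databaseName=SYSTEMDB";
            try (Connection conn = DriverManager.getConnection(url, "SYSTEM", args[0]);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT DATABASE_NAME, HOST, SERVICE_NAME, SQL_PORT " +
                     "FROM SYS_DATABASES.M_SERVICES WHERE SQL_PORT <> 0")) {
                while (rs.next()) {
                    // Only services with a non-zero SQL_PORT accept SQL connections;
                    // SYSTEMDB should show one only on the master node.
                    System.out.printf("%s on %s (%s) -> port %d%n",
                            rs.getString(1), rs.getString(2), rs.getString(3), rs.getInt(4));
                }
            }
        }
    }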

Related

Failed to update 'database_name' database is read only in Always On Availability Group

Background:
I have an Always On Availability Group setup with 4 nodes (DB1, DB2, DB3, DB4). I am using a file share witness hosted on another server. All the nodes in the AG are set to fail over automatically, and the readable secondary option is set to 'Yes'.
Issue:
For instance, let's consider DB1 as the primary node on both the AG and the WSFC. Whenever I stop the MSSQL service on DB1, DB2 or one of the other two becomes primary in the AG. However, DB1 stays as the primary host in the WSFC. The main problem is that whenever my application tries to connect to the DB, I get the error "Failed to update 'Dbname' database is read only". But when I manually move the WSFC to the node that is now the AG primary, my application starts working. Can someone please help me out here?
whenever my application tries to connect to the DB, I get an error as "Failed to update 'Dbname' database is read only"
Your app needs to connect to the AG through the AG listener, which will only be active on the node hosting the primary database replica.
If your app is already connecting to the listener, you need to troubleshoot the connection. Verify that DNS resolution is returning all the IP addresses, and that only the IP address of the node hosting the primary replica has the requested SQL Server port open.
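As a rough way to run that check from the application side, the sketch below resolves the listener name and probes the SQL port on each returned address. The listener name aglistener and port 1433 are placeholders for your environment.

    // Sketch: check which IPs the listener name resolves to and which of them
    // currently accept connections on the SQL Server port.
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class CheckAgListener {
        public static void main(String[] args) throws Exception {
            String listener = "aglistener"; // placeholder for your AG listener DNS name
            int port = 1433;                // placeholder for your SQL Server port
            for (InetAddress addr : InetAddress.getAllByName(listener)) {
                boolean open;
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(addr, port), 2000); // 2 s timeout
                    open = true;
                } catch (Exception e) {
                    open = false;
                }
                // In a healthy AG only the IP of the node hosting the primary
                // replica should have the port open.
                System.out.println(addr.getHostAddress() + " port " + port + " open: " + open);
            }
        }
    }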

ERROR 1772 ER_DISTRIBUTED_PARTITION_MOVE_FAILED: CREATE PARTITION DATABASE for operation 'CREATE DATABASE' failed: Leaf Error reading from socket

I created a test lab for SingleStore on my local PC. I have a master aggregator, a child aggregator, leaf 1, and leaf 2 nodes running in Docker containers. I can connect to SingleStore Studio and I can see the system databases. When trying to create a new database (create database JTest) I get the error:
create database JTest ERROR 1772 ER_DISTRIBUTED_PARTITION_MOVE_FAILED:
CREATE PARTITION DATABASE for operation CREATE DATABASE failed with
error 2004:Leaf Error (172...:3306): Error reading from socket.
Errno=104 (Connection reset by peer)
I have checked the memsql.log file and it looks like some kind of connection issue to the leaves.
SingleStore version 7.8 now supports Flexible Parallelism, which allows multiple cores on the same node to access the same database partition.
With Flexible Parallelism, as database partitions are created they are divided into sub-partitions.
Go through the official SingleStore documentation for more information.
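Since the error message points at the aggregator failing to read from a leaf socket, it may also help to confirm leaf connectivity from the master aggregator before retrying the CREATE DATABASE. A rough sketch using the MySQL-protocol JDBC driver (SingleStore is wire-compatible); the host 127.0.0.1, port 3306, the root user, and the SHOW LEAVES column names are assumptions about the Docker lab described above.

    // Sketch: connect to the master aggregator, check that every leaf is reachable
    // and online, then retry the CREATE DATABASE. Host/port/credentials are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CheckLeavesThenCreate {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://127.0.0.1:3306/";
            try (Connection conn = DriverManager.getConnection(url, "root", args[0]);
                 Statement stmt = conn.createStatement()) {
                try (ResultSet rs = stmt.executeQuery("SHOW LEAVES")) {
                    while (rs.next()) {
                        // A leaf that is not online, or that the aggregator cannot open
                        // connections to, will make partition creation fail.
                        System.out.println(rs.getString("Host") + ":" + rs.getInt("Port")
                                + " state=" + rs.getString("State"));
                    }
                }
                stmt.execute("CREATE DATABASE IF NOT EXISTS JTest");
            }
        }
    }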

Set up VoltDB cluster IP

I am trying out a VoltDB cluster and created a cluster of 2 nodes with k=1.
Cluster initialization was successful and both nodes are up.
Now, how do I connect to this cluster? I could not find any documentation on setting up a single IP for the cluster.
Will the client connect to a particular node IP or to a cluster IP?
I am using VoltDB Community Edition.
In general, you can connect to one node or to multiple nodes. For simple usage, one node is fine. For a client application where you want lower latency and higher throughput, you should connect to all of the nodes in the cluster. See Connecting to the VoltDB Database for the Java client, and in particular section 6.1.2 on using the auto-connecting client, which lets you connect to only one node while the client automatically connects to all of the other nodes.
For command-line access, see the sqlcmd reference:
--servers=server-id[,...]
Specifies the network address of one or more nodes in the database cluster. By default, sqlcmd attempts to connect to a database on localhost.
Disclosure: I work at VoltDB.
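For example, with the Java client and the auto-connecting option mentioned above, a minimal sketch could look like the following. The IP address is a placeholder for one of your two node IPs, and the VoltDB client jar is assumed to be on the classpath.

    // Sketch: connect to a single node and let the client discover the rest of the cluster.
    import org.voltdb.client.Client;
    import org.voltdb.client.ClientConfig;
    import org.voltdb.client.ClientFactory;
    import org.voltdb.client.ClientResponse;

    public class VoltClusterClient {
        public static void main(String[] args) throws Exception {
            ClientConfig config = new ClientConfig();
            config.setTopologyChangeAware(true);    // auto-connect to the other nodes
            Client client = ClientFactory.createClient(config);
            client.createConnection("192.168.1.5"); // any one node of the cluster

            // Simple system call to verify the connection works.
            ClientResponse resp = client.callProcedure("@SystemInformation", "OVERVIEW");
            System.out.println(resp.getResults()[0].toFormattedString());

            client.close();
        }
    }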
If you wish to connect to a single node, try
jdbc:voltdb://192.168.1.5:<port>
as the connection URL, or if you wish to connect to the cluster, try
jdbc:voltdb://192.168.1.5:<port>,192.168.1.6:<port>,<any additional nodes you might have in your cluster>
as the connection URL.
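A minimal usage sketch of the JDBC URL above, assuming the default VoltDB client port 21212 and the VoltDB client jar on the classpath:

    // Sketch: open a JDBC connection listing both nodes so the driver can use either.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class VoltJdbcExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.voltdb.jdbc.Driver"); // ships in the VoltDB client jar
            // 21212 is the default client port; substitute yours if you changed it.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:voltdb://192.168.1.5:21212,192.168.1.6:21212")) {
                System.out.println("Connected: " + conn.getMetaData().getDatabaseProductName());
            }
        }
    }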

AlwaysOn SQL Server 2014 application exception: Failed to update database because database is read-only

We have a two-node availability group. The two nodes are SQL cluster1 - node1 and SQL cluster2 - node2, plus an availability group listener. The Java application connects to this listener and everything works fine initially, i.e. the application is able to perform both reads and writes on the database, until we do a failover.
The connection string is driverURL=jdbc:jtds:sqlserver://[Listener DNS Name]:[Port]/[Database]
Say initially node1 was the primary and node2 was the secondary.
After the failover, node1 becomes the secondary and node2 becomes the primary. The application is still able to connect to the database but is only able to perform reads. The application throws the exception mentioned in the title if we try to do inserts on that DB.
Basically, what I need is for the application to be able to perform reads and writes all the time, irrespective of which node is the primary. Any ideas?
There should be no reason why you get a read-only database when the connection string points to the listener. That's the point of the availability group listener: to direct traffic to the read/write (primary) replica. Ping the DNS name and check that it resolves to the listener (before and after an AG failover). Unfortunately I don't use Java, so I can't help you any further. Cheers, Mark.
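If you want to verify this from the Java side, a small check like the one below, run before and after a failover, shows which node answered through the listener and whether the database is writable. The listener name, port, database name, and credentials are placeholders for your environment.

    // Sketch: connect through the AG listener via jTDS and report which server
    // answered and whether the database is read/write.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CheckPrimaryViaListener {
        public static void main(String[] args) throws Exception {
            Class.forName("net.sourceforge.jtds.jdbc.Driver");
            String url = "jdbc:jtds:sqlserver://aglistener:1433/MyDatabase";
            try (Connection conn = DriverManager.getConnection(url, "appuser", args[0]);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT @@SERVERNAME, DATABASEPROPERTYEX('MyDatabase', 'Updateability')")) {
                rs.next();
                // If this prints the secondary's name or READ_ONLY after a failover,
                // the connection is not actually landing on the primary replica.
                System.out.println("Server: " + rs.getString(1)
                        + ", updateability: " + rs.getString(2));
            }
        }
    }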

Add node fails with Azure WSFC 2012 for SQL 2012 AlwaysOn Availability Groups

Adding a node to a Windows Server 2012 Failover Cluster for AlwaysOn Availability Groups, running entirely in Azure, fails and leaves an apparent phantom VM node. How can I clean it up?
The server property for the target server VM is flagged as "clustered", but it is not. Another node was added successfully, but when trying again to add the node that failed earlier, it does not work: Cluster Manager reports that the target "xxxxx server is already clustered".
I evicted the single node, then destroyed the cluster, then created a newly named cluster. I added one node, but when trying to add the "problem" SQL Server VM I get the same message: "server is already in a cluster". When I remote into the target SQL Azure VM, Server Manager shows the server as "Clustered". I cannot find any way to clean up this failed operation.
When I open Failover Cluster Manager on that SQL VM, I see, red-X'ed, the name of the cluster I had previously destroyed. Same VNET, same subnet. Validation passes on cluster build up to the point of failure when trying to add the 2nd SQL VM node to the cluster.
No solution was found after rambling through MSDN and TechNet. I had to delete the Azure VMs completely, but note that in the same cloud, viewed through the new Microsoft portal, parts of them are still displayed on those pages, like loose random balloons drifting around the Azure chaos...
