I have a TDengine cluster with 4 nodes, node1 to node4. In the configuration file /etc/taos/taos.cfg I set two master nodes: firstEp node1 and secondEp node2, so that when node1 goes down, node2 becomes the master node and the cluster keeps working.
But when I stop taosd on node1, the whole cluster goes down. What is the reason? Why doesn't node2 take over as expected?
node2 cannot take over as expected because you only have one mnode configured in TDengine.
In your configuration file /etc/taos/taos.cfg, change numOfMnodes from 1 to 2, and node2 will take over once node1 is offline:
numOfMnodes 2
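For reference, a minimal taos.cfg sketch for this kind of setup might look like the lines below (the hostnames and the default port 6030 are assumptions, adjust to your environment); after restarting taosd you can verify that two mnodes are running with "show mnodes" in the taos shell.

    # /etc/taos/taos.cfg on every node
    firstEp      node1:6030    # end point of the first dnode to connect to
    secondEp     node2:6030    # fallback end point if firstEp is unreachable
    numOfMnodes  2             # run two management nodes for failover

    # verify from the taos shell on any node:
    show mnodes;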
Related
I am having a strange issue and am hoping you guys might be able to help!
Problem: I have a 2 node SQL Server 2019 Availability Group cluster utilising a file share witness (FSW). Both nodes use the same DBEngine service account, and it has been working fine for quite some time.
Today I restarted the DBEngine service on the passive node. When the node came back up, it was no longer synchronising with node 1. The state of the replica was disconnected, and I could see lots of login failures in the SQL logs on node 1 (the active node).
I found that the DBEngine service account had been locked out. I had it unlocked, but it soon locked again.
Has anyone got any ideas? Any input would be greatly received!
Steps I tried:
created a new service account to rule out the account being used elsewhere, and started both nodes under the new account... the account locked out when node 2 started
unlocked the account, stopped node 2, restarted node 1. Account fine... waited... account still fine. Started the node 2 service... account locked out.
recreated the mirroring endpoints on both nodes and reapplied CONNECT permissions to the DBEngine service account - this didn't fix it.
restarted both servers.
removed the node 2 replica from the availability group, removed all databases (from node 2) and dropped the mirroring endpoint on node 2, then restarted the node 2 service - at this point both nodes were happily running under the same service account.
tried re-adding node 2 as a replica using the wizard. It added it, backed up the database, restored it to node 2, and got to the very last step where it connects the replica, and the account locked out again!
The account gets locked if something is repeatedly using a wrong password.
You can check Task Scheduler for any task that runs under the service account.
If an application uses the same service account, it could also be caching old credentials.
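If the lockouts are coming from the AG endpoint sessions themselves, it may also be worth re-checking on both replicas that the endpoint is started and that the service account still has CONNECT on it (the step described in the question); a hedged T-SQL sketch, assuming the default wizard endpoint name Hadr_endpoint and a placeholder account name:

    -- check the mirroring/AG endpoint state on each replica
    SELECT name, state_desc, role_desc FROM sys.database_mirroring_endpoints;

    -- re-grant connect to the service account (names are placeholders)
    GRANT CONNECT ON ENDPOINT::Hadr_endpoint TO [DOMAIN\SQLSvcAccount];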
I need to build a cluster with one master node and three worker nodes using TDengine. I followed all the steps from the official website (https://www.taosdata.com/en/documentation/cluster), but the worker dnodes still show as offline in the "show dnodes" output in the taos shell. I think the nodes are somehow connected but I am still missing something: I can use the taos shell to see the cluster status from all worker nodes, but I just cannot bring the other slave dnodes online. What I did was:
clean up all the previous data
use the "create dnode xxx" command in taos shell
modify the firstEp in taos.cfg on all worker nodes to point to the master node
add the internal IP and hostname of each node to every node's /etc/hosts
start the taosd service on all nodes.
You should start all taosd services on all nodes first, and only then use the "create dnode xxx" command in the taos shell.
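A minimal sketch of that order, assuming node1 is the firstEp, the default serverPort 6030, and hostnames that resolve via /etc/hosts on every node:

    # on every node: point taos.cfg at the first node, then start the service
    # /etc/taos/taos.cfg
    firstEp  node1:6030
    fqdn     node2            # each node's own hostname here

    systemctl start taosd     # repeat on node1..node4

    # only after all taosd services are up, register the workers
    # from the taos shell on node1:
    create dnode "node2:6030";
    create dnode "node3:6030";
    create dnode "node4:6030";
    show dnodes;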
I want PostgreSQL Synchronous Streaming Database Replication Status = sync.
I deployed a PostgreSQL cluster with 3 nodes and configured synchronous replication. But when I check with SELECT * FROM pg_stat_replication;
I get sync_state=sync for the first node and async for the other. Why are there two different states?
With synchronous streaming replication in PostgreSQL, the commit on the primary is delayed until one of the standby servers has received the corresponding WAL information (the exact meaning of this is configurable with synchronous_commit).
The standby currently chosen as the synchronous standby (by default, the first connected standby listed in synchronous_standby_names) is the one with sync_state 'sync'; the others are reported as 'potential' or 'async'.
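If the goal is for both standbys to be reported as synchronous, the number of synchronous standbys can be raised on the primary; a minimal sketch, assuming the standbys register with application_name node2 and node3 (adjust the names to your setup):

    # postgresql.conf on the primary
    synchronous_standby_names = 'FIRST 2 (node2, node3)'   # both listed standbys become synchronous
    synchronous_commit = on

    -- after a reload, check on the primary:
    SELECT application_name, sync_state FROM pg_stat_replication;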
I set up pgpool (master-slave mode) + PostgreSQL streaming replication, and then tested the behaviour of failover_command. If I cut the network to the master node, pgpool waits for the configured time and, if the master does not respond within that time, runs failover_command. But if I restart PostgreSQL on the master via the init.d script, pgpool executes failover_command immediately. As a result, the master role switches to the slave node and the slave becomes the master. To avoid this I currently have to shut pgpool down, restart the master node, and then start pgpool again. How can this problem be solved?
P.S. Sorry for my English :)
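For reference, the pgpool-II parameters that govern this behaviour live in pgpool.conf; a hedged sketch (parameter names as used in pgpool-II 3.x; newer releases spell fail_over_on_backend_error as failover_on_backend_error, so check the installed version's documentation):

    health_check_period = 10           # seconds between health checks
    health_check_max_retries = 3       # retries before a backend is declared down
    health_check_retry_delay = 5       # seconds between retries
    fail_over_on_backend_error = off   # do not fail over on a dropped backend connection,
                                       # only when the health check finally gives up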
We have a two-node availability group. The two nodes are SQL cluster1 - node1 and SQL cluster2 - node2, plus an Availability Group listener. The Java application connects to this listener and everything works fine initially, i.e. the application can perform both reads and writes on the database, until we do a failover.
The connection string is driverURL=jdbc:jtds:sqlserver://[Listener DNS Name]:[Port]/[Database]
Say initially node1 was the primary and node2 was the secondary.
After failover, node1 becomes the secondary and node2 becomes the primary. The application can still connect to the database, but it can only perform reads. The application throws exceptions (as mentioned in the title) if we try to do inserts on that DB.
Basically what I need is for the application to be able to perform reads and writes all the time, irrespective of which node is the primary. Any ideas?
There should be no reason why you get a read-only database when the connection string points to the listener. That's the point of the availability group listener - to direct traffic to the read/write (primary) replica. Ping the DNS name and check that it resolves to the listener (before and after an AG failover). Unfortunately I don't use Java so can't help you any further. Cheers, Mark.
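One way to confirm where the connection actually lands after a failover is to run a quick check over the same jTDS connection; a minimal T-SQL sketch (the DMV is available on SQL Server 2012 and later):

    -- which instance did we reach, and is it currently the primary?
    SELECT @@SERVERNAME AS connected_instance, role_desc
    FROM sys.dm_hadr_availability_replica_states
    WHERE is_local = 1;

If this reports SECONDARY when going through the listener after failover, the listener's DNS registration is the place to look, as suggested above.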