openGauss + keepalived active/standby switchover: the active/standby replication relationship is lost

We use openGauss + keepalived to build a simple high-availability environment.
Process: after simulating a failure of the primary, the VIP drifts to the standby, and the standby's status changes from standby to primary. When the old primary is restarted, it preempts the VIP, which drifts back to it. However, the primary/standby replication relationship previously built with gs_ctl build -D /gaussdb/data/db1 -M standby is gone, so the relationship has to be rebuilt manually.
After the openGauss primary is restored, does the previous active/standby replication relationship really disappear? Can it not be re-established or repaired automatically, only re-created by hand?
Is there any solution that automatically re-creates the primary/standby replication relationship after failure recovery?
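For reference, rebuilding the standby manually looks roughly like the following sketch (the data directory path is the one from the question; gs_ctl options may differ between openGauss versions):

```shell
# Run on the node that should become the standby again,
# after the new primary is confirmed healthy:
gs_ctl build -D /gaussdb/data/db1 -M standby
```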

Keepalived.conf configuration file
Use the nopreempt parameter to enable non-preemptive mode, so that after the old primary recovers from the failure, it will not take the VIP back from the new primary. Note that nopreempt only takes effect when the state of both nodes is set to BACKUP.
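A minimal sketch of such a configuration (the interface name, router ID, priority, and VIP below are placeholders to adapt to your environment):

```
vrrp_instance VI_1 {
    state BACKUP          # both nodes must start as BACKUP for nopreempt to work
    nopreempt             # a recovered node does not take the VIP back
    interface eth0        # placeholder NIC name
    virtual_router_id 51
    priority 100          # give the preferred node a higher priority
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24  # placeholder VIP
    }
}
```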


What HA DR options allow the secondary server to be queryable?

I want to implement a solution such that the secondary server's database is queryable.
Amongst the following options:
Log shipping
Transactional replication
Database mirroring
Always on failover clustering
Always on availability groups
Which of the above methods allows the secondary server database to be online and queryable?
Are there any options other than the above five?
Log shipping
Secondary Database is "normally" in a recovering state so that additional backups can be applied
Secondary database can be made readable between restores by restoring with STANDBY
Users on secondary must be disconnected so that the next restore can take place
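For example, restoring a log backup WITH STANDBY instead of NORECOVERY keeps the secondary readable between restores (database and file names below are illustrative):

```sql
RESTORE LOG MyDatabase
FROM DISK = N'D:\LogShip\MyDatabase_20200101.trn'
WITH STANDBY = N'D:\LogShip\MyDatabase_undo.tuf';
```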
Transactional replication
Primary and Secondary databases are independent and can have transactions on both sides
Secondary database is not restricted to READ-ONLY. Users can make updates if they have permissions
Be wary of updates on the Secondary that could cause issues with future replication updates from the Primary
Secondary is always online
Database mirroring
Deprecated technology
Requires a Database Snapshot to be taken to allow a queryable copy
Snapshot Database will have a different name from the Primary and Secondary
No updates applied to Snapshot
Snapshot will increase in size as changes made to Secondary because of copy-on-write process
Need to drop the snapshot (disconnecting users) and recreate it to get newer data
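Creating such a queryable snapshot on the mirror looks like this sketch (names and paths are illustrative; the snapshot must list every data file of the mirrored database):

```sql
CREATE DATABASE MyDatabase_Snap
ON (NAME = MyDatabase_Data, FILENAME = N'D:\Snaps\MyDatabase_Snap.ss')
AS SNAPSHOT OF MyDatabase;
```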
Always on failover clustering
There is only 1 copy of the database in Failover Clustering - it is the ownership of the storage that changes on a failover. This has no Secondary to make available for querying
Always on availability groups
Allows a Secondary to be set to read-only
Continuously updated from Primary
Be wary also of license restrictions if you make secondary copies queryable.
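Making an availability-group secondary readable is done per replica, along these lines (the AG and replica names are placeholders):

```sql
ALTER AVAILABILITY GROUP MyAG
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```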

Avoiding data duplication with Logical replication ( PostgreSQL 10)

I've configured two servers with redundancy setup using pcsd configuration.
Both machines run Postgres 10 with logical replication. I used the steps below for the logical replication setup.
Took PG Dump on Server1 using pg_dump command.
Restored it on Server2 with postgres 10 using pg_restore.
Made changes in the pg_hba.conf and postgresql.conf files.
Used the following command to set up logical replication (on Server2, subscribing to Server1):
CREATE SUBSCRIPTION my_subscription
    CONNECTION 'host=Server1 port=5432 password=postgres user=postgres dbname=database1'
    PUBLICATION my_publication WITH (copy_data = false);
Restarted both servers.
After the above steps, services were running fine on both (redundant) systems, but the logs showed the following error messages:
2020-01-08 15:14:08.551 EET >LOG: logical replication apply worker for subscription "my_subscription" has started
2020-01-08 15:14:08.559 EET >ERROR: duplicate key value violates unique constraint "pk_xyz_instance"
2020-01-08 15:14:08.559 EET >DETAIL: Key (xyz_instance_id)=(103) already exists.
2020-01-08 15:14:08.560 EET >LOG: worker process: logical replication worker for subscription 23176 (PID 7411) exited with exit code 1
Since I need the earlier data from Server1, I took a dump, restored it on the other server, and set copy_data to false to avoid duplication.
After every switchover of services from Server1 to Server2 (or vice versa), these unique-constraint violation errors appear on Server2 (where services are in the inactive state).
Is there anything I'm missing in this replication setup with PostgreSQL 10.11?
Is the copy_data flag not working as I expected?
With asynchronous replication, it can always happen that the standby is lagging at the point of failover and some transactions are lost. If you try to use the old primary server, which may be some transactions ahead, as new standby, the databases can be inconsistent and replication conflicts like you observe can happen.
One solution would be to use synchronous logical replication, but that reduces availability unless you have more than one standby server.
The best would be to use physical replication. Not only is it simpler and more performant, but you can also use pg_rewind to quickly turn an old primary server into a new standby server.
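With physical replication, resynchronizing an old primary looks roughly like this (the data directory path and connection string are placeholders; the old primary must be shut down cleanly before running it):

```shell
pg_rewind --target-pgdata=/var/lib/postgresql/10/main \
          --source-server='host=new_primary port=5432 user=postgres dbname=postgres'
```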

cdc ON secondary database in SQL Server

I have a cluster of databases, one primary and two secondary. I need to enable CDC on a database, but I want to enable it on one of the secondary databases to eliminate any resource consumption on the primary database (similar to taking backups on a SQL Server secondary). Is it possible to do this, and how? If not, what are the best practices for enabling CDC on a cluster?
I want to enable it on one of the secondary databases to eliminate any resource consumption on the primary database
This is not possible. CDC writes the changes back to system tables in the target database, so it must run against the primary replica. See Replication, change tracking, & change data capture - Always On availability groups
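Concretely, CDC is enabled in the database itself on the primary replica, along these lines (database, schema, and table names are illustrative):

```sql
USE MyDatabase;  -- must be run against the primary replica
EXEC sys.sp_cdc_enable_db;

-- then per table, e.g.:
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;
```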

Recovering from an Azure SQL Database failover

I'm designing my Azure Website for High Availability. In the interest of doing so, I read the following:
In particular, I'm attempting to use Pattern #1. In short, you:
Establish your primary site and a backup site.
Your backup site remains active, but never used directly while your primary site is functional.
All database transactions are replicated from the primary to the secondary as they occur
When a failover occurs, your traffic only hits your backup site.
My question is: after you fail over, how would you return to your primary site? If database transactions were written to your secondary database, they'd need to be written back to the primary. Would you use "Geo Restore" to restore your backup over your primary, then update the Azure Traffic Manager to begin using your primary location again?
If I understand correctly, the 'syncing back' is done automatically.
From the Azure documentation:
When the failed primary recovers and is available again, the system will automatically mark it as a secondary and bring it up-to-date with the new primary.
And then, yes, I would update the Traffic Manager to route to the original primary again.
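If you do want to fail back explicitly with active geo-replication, the planned-failover command looks roughly like this sketch (resource names are placeholders; the exact Azure CLI syntax may differ by version):

```shell
az sql db replica set-primary \
    --name MyDatabase \
    --resource-group MyResourceGroup \
    --server my-original-primary-server
```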

Database replication

When working with table creation, is it always assumed that creating a table on one database (the master) means the DBA should create the table on the slave as well? Also, in a master/slave configuration, shouldn't data always be replicated from the master to the slave to keep them in sync?
Right now the problem I am having is my database has a lot of stuff in the master, but the slave is missing parts that only exist in the master. Is something not configured correctly here?
It depends on how the replication is configured. Real-time replication should keep the master and slave in sync at all times. "Poor man's" replication is usually configured to sync when some time interval expires. That is probably what's happening in your case.
I prefer to rely on CREATE TABLE statements being replicated to set up the table on the slave, rather than creating the slave's table by hand. That, of course, relies on the DBMS supporting this.
If you have data on the master that isn't on the slave, that's some sort of failure of replication, either in setup or operationally.
Any table created on the master is replicated on the slave, and the same goes for inserted data.
Go through the replication settings in MySQL's my.cnf file and check whether any database or table is excluded from replication.
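In my.cnf on the slave, the filter options to look for are along these lines (database and table names are placeholders):

```
[mysqld]
# any of these would prevent objects from replicating
replicate-ignore-db         = somedb
replicate-ignore-table      = somedb.sometable
replicate-wild-ignore-table = somedb.tmp_%
```

Also check the output of SHOW SLAVE STATUS on the slave for replication errors or lag.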