Migrate and update data when switching master and slave databases (PostgreSQL)

I'm working on PostgreSQL replication with pgpool2, and it works OK.
Suppose I have one master and two slave servers. When master(1) goes down, pgpool promotes one of the two slaves to master(2).
My issue is with the window while master(1) is stopped: data keeps arriving in the master(2) database, and I also update some fields in master(2). When I bring master(1) back up, one of two cases will occur:
master(2) stays master and syncs its data back to the master(1) database; in this case, all of the DDL changes I applied will be lost.
master(2) is demoted back to a slave and is re-synced (via rsync) from the master(1) database; in this case, all of the data users wrote during the outage will be lost.
So, do you have any recommendation, or a way to solve this?
Thank you.
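
One common approach is to avoid both cases: never promote the old master(1) straight back. Instead, reattach it as a standby of master(2), for example with pg_rewind, so the writes that reached master(2) during the outage survive and master(1) catches up rather than overwriting them. A minimal SQL sketch for checking roles around such a reattachment, assuming PostgreSQL 10+ monitoring functions:

```sql
-- On any node: true means the server is running as a standby,
-- false means it is the primary.
SELECT pg_is_in_recovery();

-- On master(2), the current primary: note the write position
-- before reattaching the old master(1) as a standby.
SELECT pg_current_wal_lsn();

-- On master(1) after it rejoins as a standby: confirm it is receiving
-- and replaying WAL from master(2); the two values should converge.
SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();
```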

Related

Synchronization between SQL Server and SQL Server Express databases

I have a requirement where changes in the client databases should be synced with the (centralized) server. All clients use SQL Server Express, whereas the server runs SQL Server 2008 R2. Is there any way to do this without the Microsoft Sync Framework? With the Sync Framework, every sync starts over with all the data from the beginning, which takes too long.
Without knowing the acceptable latency, the network latency and reliability, whether clients are connected or disconnected, and whether changes made at the clients need to be sent back to the server, and assuming you want to send incremental changes, your options are database and log backup/restore, transactional replication, and merge replication. There may be more, but those are the most common solutions for synchronizing two servers.
If you are looking for bi-directional data flow, then merge replication is the appropriate solution.
Merge replication is typically used in server-to-client environments, and it is appropriate in any of the following situations (a setup sketch follows the list):
Multiple Subscribers might update the same data at various times and propagate those changes to the Publisher and to other Subscribers.
Subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers.
Each Subscriber requires a different partition of data.
Conflicts might occur and, when they do, you need the ability to detect and resolve them.
The application requires net data change rather than access to intermediate data states. For example, if a row changes five times at a Subscriber before it synchronizes with a Publisher, the row will change only once at the Publisher to reflect the net data change (that is, the fifth value).
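
As a rough illustration of the Publisher-side setup, here is a hedged T-SQL sketch; the database, publication, and table names are placeholders, and a real deployment also needs a snapshot agent job and subscriptions:

```sql
-- Run on the Publisher (SQL Server 2008 R2). All names are placeholders.
USE SalesDb;

-- Enable the database for merge publishing.
EXEC sp_replicationdboption
    @dbname  = N'SalesDb',
    @optname = N'merge publish',
    @value   = N'true';

-- Create the merge publication.
EXEC sp_addmergepublication
    @publication = N'SalesMergePub',
    @description = N'Bi-directional sync between the server and Express clients';

-- Publish one table as an article; with merge replication,
-- changes flow both ways and conflicts are detected automatically.
EXEC sp_addmergearticle
    @publication   = N'SalesMergePub',
    @article       = N'Orders',
    @source_object = N'Orders';
```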

Redis inverted replication

I am trying to switch to the Redis database, and during the replication stage I ran into some issues.
The problem is that I want a single (central) database to which the other, separate databases (each holding different data) automatically send their data.
In the documentation I find that normal replication sends data from one central DB (the master) to the slaves, not the opposite.

SQL Server 2012 replication strategy

I've got SQL2012 running on 2 different servers with public, static IP addresses. I want to implement replication in a way that will keep both servers in sync at all times, regardless of which server is actually receiving the data. I've been reading about the subscriber/publisher model but I'm not exactly sure which should be which. A few facts about our setup:
I'm trying to achieve failover. If server A goes down, I need server B to be operational and have all latest data, or as close as possible. And vice versa. When the server comes back online, I need the replication to get caught up quickly and start working again. I need failures to be graceful, in other words I can't have server A get weird just because server B went offline.
I don't need realtime replication, but close would be nice. If server A were 10 seconds behind server B with data updates, nobody would care; an hour behind would be bad. Fast DB performance is more important than realtime replication, but again, close would be nice.
My database is just shy of 900 MB and grows by 3 MB per day.
I am looking for advice on the best way to set this up given my setup and needs. Much appreciated.
Since one server will be the Primary and the other the Failover, use Log Shipping. It will keep the two databases the same for all transactions completed on the Primary server up to the moment of failure. Transactions that had not completed at the moment of failure will not appear on the Failover server, so the application should resubmit them against the Failover server.
There should also be a recovery procedure to ensure that the Primary server is brought back up to date.
Useful articles:
Database Mirroring and Log Shipping.
Configure Log Shipping
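
To make the mechanics concrete, here is a minimal, hedged sketch of what the log shipping jobs do under the hood; the file paths and database name are placeholders, and in practice SQL Server Agent jobs automate the backup, copy, and restore steps:

```sql
-- On the primary (server A): back up the transaction log on a schedule.
BACKUP LOG SalesDb
    TO DISK = N'\\fileshare\logship\SalesDb_1200.trn';

-- On the secondary (server B): restore each log backup in sequence,
-- leaving the database ready to accept further log restores.
RESTORE LOG SalesDb
    FROM DISK = N'\\fileshare\logship\SalesDb_1200.trn'
    WITH NORECOVERY;

-- At failover time, bring the secondary online for reads and writes.
RESTORE DATABASE SalesDb WITH RECOVERY;
```

With a frequent backup/copy/restore schedule, the secondary stays close to the "10 seconds behind would be fine" tolerance described above.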

What type of database replication should I use?

I have 2 databases, one on local server and one on a remote server.
I created a transactional replication publication on the local DB, which feeds the remote DB every minute with whatever updates it gets. So far, this is working perfectly.
However, the local DB needs to be cleaned (all of its information deleted) daily. THIS is the part I'm having trouble with: I was expecting a replication mode that would feed the remote DB only the inserts and ignore the step where the local DB gets cleaned. At the moment, the remote DB is also getting cleaned.
Would a different kind of replication help me achieve what I want, or is replication no longer the way to do it?
Have a look at this SO question here
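
One option worth sketching (my assumption; not necessarily what the linked question suggests) is to define the transactional article so that DELETE commands are simply not replicated. Names below are placeholders:

```sql
-- Run on the Publisher. With @del_cmd = N'NONE', DELETE operations on the
-- published table are not forwarded to subscribers, so the daily cleanup
-- stays local while INSERTs keep flowing to the remote DB.
EXEC sp_addarticle
    @publication   = N'LocalToRemotePub',
    @article       = N'Readings',
    @source_object = N'Readings',
    @del_cmd       = N'NONE';
```

Note that the subscriber then intentionally diverges from the publisher, and the cleanup must use DELETE rather than TRUNCATE, since TRUNCATE is not allowed on published tables.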

Client-side Replication for SQL Server?

I'd like to have some degree of fault tolerance / redundancy with my SQL Server Express database. I know that if I upgrade to a pricier version of SQL Server, I can get "Replication" built in. But I'm wondering if anyone has experience in managing replication on the client side. As in, from my application:
Every time I need to create, update or delete records from the database -- issue the statement to all n servers directly from the client side
Every time I need to read, I can do so from one representative server (other schemes seem possible here, too).
It seems like this logic could potentially be added directly to my Linq-To-SQL Data Context.
Any thoughts?
"Every time I need to create, update or delete records from the database -- issue the statement to all n servers directly from the client side"
Recipe for disaster.
Are you going to have a distributed transaction, or just let some of the servers fail? If you have a distributed transaction, what do you do when a server goes offline for a while?
This type of thing can only work if you do it at a server-side data-portal layer where application servers take in your requests and are aware of your database farm. At that point, you're better off just using a higher grade of SQL Server.
I have managed replication from an in-house client. My database model worked in an insert-only mode for all transactions, and insert-update for lookup data. Deletes were not allowed.
I had a central table that everything was related to. I added a date-time stamp field to this table, defaulting to NULL. I took data from this table and all related tables into a staging area, did a BCP out, cleaned up the staging tables on the receiver side, did a BCP in to the staging tables, performed data validation, and then inserted the data (roughly as sketched below).
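
A hedged T-SQL sketch of the timestamp-driven export step described above; the table and column names are my own placeholders, not the answerer's schema:

```sql
-- Stamp the rows that have not yet been exported, all with one batch time.
DECLARE @batch datetime = GETDATE();

UPDATE dbo.CentralTable
SET    SentAt = @batch
WHERE  SentAt IS NULL;        -- NULL marks rows never sent before

-- Copy the stamped batch into a staging table, ready for "bcp out".
INSERT INTO staging.CentralTable
SELECT *
FROM   dbo.CentralTable
WHERE  SentAt = @batch;
```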
For some basic fault tolerance, you can schedule a regular backup.
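For example (database name and path are placeholders):

```sql
-- Full backup; schedule via SQL Server Agent for a basic safety net.
BACKUP DATABASE SalesDb
    TO DISK = N'D:\Backups\SalesDb.bak'
    WITH INIT;
```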
