Database replication

OK, when working with table creation, is it assumed that creating a table on one database (the master) means the DBA should create the table on the slave as well? Also, in a master/slave configuration, shouldn't data always be replicated from the master to the slave to keep them in sync?
Right now the problem I am having is that the master contains a lot of data, but the slave is missing parts that exist only on the master. Is something not configured correctly here?

It depends on how the replication is configured. Real-time replication should keep the master and slave in sync at all times. "Poor man's" replication is usually configured to sync when some time interval expires. That is probably what is happening in your case.

I prefer to rely on CREATE TABLE statements being replicated to set up the table on the slave, rather than creating the slave's table by hand. That, of course, relies on the DBMS supporting this.
If you have data on the master that isn't on the slave, that's some sort of failure of replication, either in setup or operationally.

Any table created on the master is replicated on the slave, and the same goes for inserted data.
Go through the replication settings in MySQL's my.cnf file and check whether any database or table is excluded from replication.
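As a quick sanity check, assuming classic MySQL master/slave replication, running the following on the slave shows whether the replication threads are running and which filter rules (the replicate-* options from my.cnf) are in effect:

```sql
-- Run on the slave. Check that Slave_IO_Running and Slave_SQL_Running
-- are both 'Yes', look at Seconds_Behind_Master and Last_Error, and
-- inspect the Replicate_Do_DB / Replicate_Ignore_DB /
-- Replicate_Wild_Ignore_Table fields for filters set in my.cnf.
SHOW SLAVE STATUS\G
```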


Disable transactions on SQL Server

I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in one single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any insert/update/delete command on the application database, a transaction is started, and therefore the log table is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application is blocked.
So... is there any possible way to disable transactions on a particular database or on a particular table?
You cannot turn off the transaction log; everything gets logged. You can set the recovery model to "Simple", which limits how much log data is kept once records are committed.
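For reference, switching to the simple recovery model looks like this; the database name is illustrative:

```sql
-- Illustrative database name. This does not disable logging;
-- it only allows the log to be truncated at each checkpoint.
ALTER DATABASE AppDb SET RECOVERY SIMPLE;
```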
" the table of the log database is locked": why that?
Normally you log changes by inserting records. The insert of records should not lock the complete table, normally there should not be any contention in insertion.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should look at the indices defined on log, perhaps you can avoid some of them.
It sounds from the question as though you have a BEGIN TRANSACTION at the start of your triggers, and that you are logging to the other database before the COMMIT TRANSACTION.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert the data into your log table.
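A minimal sketch of that pattern, with entirely hypothetical table and column names (only tbSysMasterLog is from the question):

```sql
-- Hypothetical names; adjust to your schema.
DECLARE @OrderId INT = 42,
        @NewStatus NVARCHAR(20) = N'Shipped';

BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = @NewStatus WHERE OrderId = @OrderId;
COMMIT TRANSACTION;

-- Log after the commit: the log table is locked only for the
-- duration of this single short insert, not the whole transaction.
INSERT INTO LogDb.dbo.tbSysMasterLog (TableName, RecordId, NewValue, LoggedAt)
VALUES (N'Orders', @OrderId, @NewStatus, GETDATE());
```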
Normally, inserts are fast and can happen in parallel without blocking. Certain things, like identity columns, do require ordering, but they are a very lightweight structure; they can be avoided by generating GUIDs so that inserts are non-blocking. For something like your log table, though, an identity primary key gives you a clear sequence, which is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log entries may not be in the same order in which the transactions occurred, because transactions take different amounts of time to commit.
We normally log into individual tables with a name similar to the source table, e.g. FooHistory or AuditFoo.
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning, and it gives you a copy of every statement run on the database (including triggers), which you can log to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive if you trace, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load the file into a table (better performance), and script up starting, stopping, and loading traces.
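As a rough sketch, loading a trace file into a table can be done with sys.fn_trace_gettable; the file path and table name here are illustrative:

```sql
-- Load a trace file (plus any rollover files, via DEFAULT)
-- into a table for querying. Path and table name are illustrative.
SELECT *
INTO dbo.TraceResults
FROM sys.fn_trace_gettable(N'C:\Traces\app_trace.trc', DEFAULT);
```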
The load on the server receiving the trace log is minimal, and I have never had a locking problem on that server, so I am pretty sure that something you are doing is causing the locks.

Migrate and update data when switching master and slave databases in Postgres

I'm working on Postgres replication with pgpool2, and it works OK.
Suppose I have one master and two slave servers; when master(1) goes down, pgpool promotes one of the two slaves to master(2).
My issue is that during the time master(1) is down, data keeps coming into the master(2) database, and I have updated some fields in the master(2) DB. When I bring master(1) back up, one of two cases will occur:
The master(2) server is kept as master and syncs its data to the master(1) database; in this situation, all of the DDL I applied will be lost.
The master(2) server is demoted back to slave and syncs its data (via rsync) from the master(1) database; in this situation, all the data written by users during that time will be lost.
So, is there any recommendation or a way to solve this?
Thank you.

Syncing two tables in SQL Server

Hi, I have two database servers (two different machines, on the same network).
I have one table in Database_1 and the same table in Database_2.
Only the table in DB_1 will be updated by users; the table in DB_2 will be used by other users for read-only access.
I want to program something that copies updated records from the table in DB_1 to DB_2. I want to make it event-based: whenever someone inserts a record into Table#DB_1, the same record should appear in Table#DB_2.
Can someone suggest me something?
Depending on the size, frequency of updates, and complexity of your systems, replication may be the answer you need. Transactional replication sounds the most suitable, given the little detail provided.
How time-sensitive is the data? There are two possibilities here for me.
Suggestion 1: Use triggers to keep the data synced to the table on a linked server (see the sketch after these suggestions).
Suggestion 2: Have a DTS/SSIS package that keeps DB_2 in sync. Schedule the package to run every minute or every five minutes, depending on what is necessary.
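A minimal sketch of Suggestion 1, assuming a hypothetical linked server named DB2SRV and illustrative table and column names:

```sql
-- DB2SRV is an assumed linked server name; the table and
-- columns are illustrative.
CREATE TRIGGER trg_Customers_Sync
ON dbo.Customers
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Push each newly inserted row to the read-only copy in DB_2.
    INSERT INTO DB2SRV.Database_2.dbo.Customers (CustomerId, Name, UpdatedAt)
    SELECT CustomerId, Name, UpdatedAt
    FROM inserted;
END;
```

Note that with this approach, if the linked server is unreachable the original insert fails too, which is one argument for the scheduled-package route instead.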
Check out Oracle GoldenGate.
"Oracle GoldenGate provides real-time, log-based change data capture, and delivery between heterogeneous systems. Using this technology, it enables cost-effective and low-impact real-time data integration and continuous availability solutions."

Database replication: two servers, a master database, and a second read-only database

Say you have two database servers. One holds the 'master' database, where all write operations are performed; it is treated as the 'real/original' database. The other server's database is to be a mirror copy of the master database (a slave?), which will be used for read-only operations by a certain part of the application.
How do you go about setting up a slave database that mirrors the data on the master database? From what I understand, the slave/read-only database uses the master DB's transaction log file to mirror the data, correct?
What options do I have for how often the slave DB mirrors the data (real time / every x minutes)?
What you want is called Transactional Replication in SQL Server 2005. It will replicate changes in near real time as the publisher (i.e. "master") database is updated.
Here is a pretty good walkthrough of how to set it up.
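As a rough sketch, the T-SQL side of a transactional publication looks something like the following; all names are illustrative, and a distributor must already be configured:

```sql
-- Run on the publisher ("master"). Names are illustrative.
EXEC sp_replicationdboption
    @dbname = N'MasterDb', @optname = N'publish', @value = N'true';

EXEC sp_addpublication
    @publication = N'MasterDbPub',
    @repl_freq = N'continuous';   -- transactional: near real time

EXEC sp_addarticle
    @publication = N'MasterDbPub',
    @article = N'Customers',
    @source_object = N'Customers';

EXEC sp_addsubscription
    @publication = N'MasterDbPub',
    @subscriber = N'SLAVESRV',
    @destination_db = N'SlaveDb';
```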
SQL Server 2008 has three different modes of replication:
Transactional, for one-way, read-only replication
Merge, for two-way replication
Snapshot
From what I understand, the slave/read-only database uses the master DB's transaction log file to mirror the data, correct?
What options do I have for how often the slave DB mirrors the data (real time / every x minutes)?
This sounds like you're talking about log shipping instead of replication. For what you're planning to do, though, I'd agree with Jeremy McCollum and say use transactional replication. If you go with log shipping, the database won't be available while it is being restored every x minutes.
Here's a good walkthrough of the difference between the two. Sad to say you have to sign up for an account to read it though. =/ http://www.sqlservercentral.com/articles/Replication/logshippingvsreplication/1399/
The answer to this will vary depending on the database server you are using to do this.
Edit: Sorry, maybe I need to learn to look at the tags and not just the question; I can see you tagged this as sqlserver.
Transactional replication is real time.
If you do not need updates propagated to your database continuously and just need to retrieve data, say, once a day, then use snapshot replication instead of transactional replication. In snapshot replication, changes replicate when and as defined by the user, say once every 24 hours.

SQL Server 2005 One-way Replication

In the business I work for, we are discussing methods to reduce the read load on our primary database.
One option that has been suggested is to have live one-way replication from our primary database to a slave database. Applications would then read from the slave database and write directly to the primary database. So...
Application Reads From Slave
Application Writes to Primary
Primary Updates Slave Automatically
What are the major pros and cons for this method?
A few cons:
2 points of failure
Application logic will have to take into account the delay between writing something and then reading it, since it won't be available immediately from the secondary database
A strategy I have used is to send key reporting data to a secondary database nightly, de-normalizing it on the way, so that beefy queries can run on that database instead of locking up tables and stealing resources from the OLTP server. I'm not using any formal data warehousing or replication tools; rather, I identify problem queries that are OK without up-to-the-minute data and create data structures on the secondary server specifically for those queries.
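A sketch of what such a nightly job might look like, with entirely made-up table names (OLTP here stands for a linked server pointing at the primary):

```sql
-- Rebuild a de-normalized reporting table on the secondary server.
-- All names are illustrative.
TRUNCATE TABLE Reporting.dbo.DailySalesSummary;

INSERT INTO Reporting.dbo.DailySalesSummary (SaleDate, ProductName, Region, TotalAmount)
SELECT CAST(o.OrderDate AS DATE), p.Name, c.Region,
       SUM(oi.Quantity * oi.UnitPrice)
FROM OLTP.Sales.dbo.Orders AS o
JOIN OLTP.Sales.dbo.OrderItems AS oi ON oi.OrderId = o.OrderId
JOIN OLTP.Sales.dbo.Products AS p ON p.ProductId = oi.ProductId
JOIN OLTP.Sales.dbo.Customers AS c ON c.CustomerId = o.CustomerId
GROUP BY CAST(o.OrderDate AS DATE), p.Name, c.Region;
```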
There are definitely pros to the "replicate everything" approach:
You can run any ad-hoc query on the secondary, since it has all of your data
If your primary server dies, you can re-purpose the secondary quickly to take over
We are using one-way replication, but not from the same application. Our applications read from and write to the master database, the data gets synchronized to the replica database, and the reporting tools use this replica.
We don't want our application to read from a different database, so in this scenario I would suggest using filegroups and partitioning on the master database. Using filegroups (especially on different drives) and partitioning files and indexes can help performance a lot.
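A rough sketch of that setup, with illustrative names, drive letters, and date ranges:

```sql
-- Add a filegroup on a separate drive, then partition by year.
-- All names and boundaries are illustrative.
ALTER DATABASE MasterDb ADD FILEGROUP FG_Archive;
ALTER DATABASE MasterDb ADD FILE
    (NAME = N'MasterDb_Archive', FILENAME = N'E:\Data\MasterDb_Archive.ndf')
TO FILEGROUP FG_Archive;

-- Two boundary values yield three partitions, mapped to
-- three filegroups by the scheme below.
CREATE PARTITION FUNCTION pfOrderYear (DATETIME)
AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01');

CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear TO (FG_Archive, [PRIMARY], [PRIMARY]);

-- New tables and indexes can then be created ON psOrderYear(OrderDate).
```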
