What is the optimal way to update a schema on a publishing database that is push-replicated in SQL Server (2012)?
Currently we disable replication, update the schema, re-enable replication and run a new snapshot.
As the database grows, this strategy will become problematic: the snapshot will get bigger, which will make deployments take longer over time.
Is there a way to do this without a new snapshot?
Schema changes can be made using ALTER syntax at the publisher. By default the schema change will be propagated to subscribers automatically; the publication property @replicate_ddl must be set to true. There are considerations to make depending on the type of schema change and the publication type. This is covered in Make Schema Changes on Publication Databases and Replicate Schema Changes.
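As a minimal sketch, assuming a hypothetical publication named MyPublication and a published table dbo.Orders (both names are placeholders for your own objects), enabling the property and making a supported change looks roughly like this:

    -- Run at the publisher, in the publication database.
    -- 1 = propagate supported DDL changes to subscribers.
    EXEC sp_changepublication
        @publication = N'MyPublication',   -- hypothetical publication name
        @property    = N'replicate_ddl',
        @value       = 1;

    -- A supported ALTER statement is then replicated to subscribers
    -- without reinitializing or generating a new snapshot.
    ALTER TABLE dbo.Orders
        ADD ShippedDate datetime NULL;     -- hypothetical new column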
Related
I've created a Data Fusion replication job to replicate some tables on a test database.
It works well at the beginning if I don't change the table schemas. But I've added a new column, and that column is ignored by the replication job. I guess that if I create a new table, even that table would be ignored.
Is there a way to include schema updates (new table, updated column, new column, etc.) in an already running Data Fusion replication job?
I guess a possible solution would be to stop the currently running job and create a new one that includes the new tables, new columns, etc., but I'd like to avoid having the new job replicate the whole database again.
Any possible solution?
Unfortunately, Data Fusion Replication for SQL Server currently does not support DDL propagation at runtime; you will need to delete and recreate the replication pipeline in order to propagate any schema changes to the BigQuery table.
One way to avoid replicating existing data along with a DDL change is to manually modify the BigQuery table schema (though BigQuery also has limited support for schema changes) and then create a new replication job with replicating existing data disabled (there is an option that lets you choose whether to replicate existing data; the default is true).
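For the manual schema change on the BigQuery side, a rough sketch in BigQuery SQL (the dataset, table, and column names here are made up for illustration):

    -- Add the new column to the existing target table so it matches the
    -- source schema; existing rows will have NULL in the new column.
    ALTER TABLE my_dataset.my_table
        ADD COLUMN new_column STRING;

After that, the new replication job (with "replicate existing data" turned off) should pick up only ongoing changes.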
I have a specific requirement in transactional replication, but I am not sure whether it is achievable or not. Could you please help me out if there is any possible way to achieve this?
Requirement:
As per the requirement, there will be two databases: one is the publication database and the other is the subscription database.
I want to replicate some of the tables (articles) of the publication database to the subscription database, but I want to replicate data only. I want those tables to always be present in the subscription database; they may be empty initially, and when replication starts they will get their data from the publication database.
But I don't want replication to create these tables for me in the subscription database. I want to use the already created tables. They will have the same schema as the publication database tables.
When you configure a publication, you can set the properties for articles. One of the article properties is called Action if name is in use. You can set that to the option Keep existing object unchanged.
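If you script the publication instead of using the wizard, the same choice is exposed (as far as I'm aware) through the @pre_creation_cmd parameter of sp_addarticle, where 'none' leaves the existing subscriber table untouched. A sketch with hypothetical names:

    -- Run at the publisher, in the publication database.
    EXEC sp_addarticle
        @publication      = N'MyPublication',  -- hypothetical publication
        @article          = N'Customers',      -- hypothetical table
        @source_owner     = N'dbo',
        @source_object    = N'Customers',
        @pre_creation_cmd = N'none';           -- keep existing object unchanged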
I don't have much experience with SQL replication (SQL Server 2014). My client has a replication process which was created by his previous contractor. It worked well, then it suddenly stopped replicating DDL statements a couple of days ago. We have not made any changes related to replication. When I checked the data, the subscriber has received up-to-date data; only DDL statements have the problem. It uses transactional replication.
When I searched the web, it said that the "Replicate schema changes" option needs to be set to true in Publication Properties. In my case it was already set to true.
Is there any way for me to fix this and have DDL statements replicate as before?
Thank you
SQL Server replication does support schema changes, but not all of them. In your case, CREATE PROCEDURE is not a supported schema change. Why? The new procedure is not an article yet and is not marked for replication, so it cannot be replicated; replication has no way of knowing whether or not you would want that object replicated.
However, if you create the stored proc, then create an article for it, then issue an ALTER PROCEDURE, you will see the change replicated.
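A rough T-SQL sketch of that sequence, using hypothetical publication and procedure names:

    -- 1. The procedure already exists at the publisher; add it as an article.
    EXEC sp_addarticle
        @publication   = N'MyPublication',     -- hypothetical publication
        @article       = N'usp_GetOrders',     -- hypothetical procedure
        @source_owner  = N'dbo',
        @source_object = N'usp_GetOrders',
        @type          = N'proc schema only';  -- replicate the definition only
    GO

    -- 2. With the article in place (and replicate_ddl enabled on the
    --    publication), subsequent ALTERs are propagated to subscribers.
    ALTER PROCEDURE dbo.usp_GetOrders
    AS
        SELECT OrderID, OrderDate FROM dbo.Orders;  -- hypothetical body
    GO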
Please see article Make Schema Changes on Publication Databases:
Replication supports a wide range of schema changes to published objects. When you make any of the following schema changes on the appropriate published object at a Microsoft SQL Server Publisher, that change is propagated by default to all SQL Server Subscribers:
ALTER TABLE
ALTER TABLE SET LOCK ESCALATION should not be used if schema change replication is enabled and a topology includes SQL Server 2005 or SQL Server Compact 3.5 Subscribers.
ALTER VIEW
ALTER PROCEDURE
ALTER FUNCTION
ALTER TRIGGER
ALTER TRIGGER can be used only for data manipulation language [DML] triggers because data definition language [DDL] triggers cannot be replicated.
Please ensure you read the whole article, to be fully aware of what can be replicated, and under what circumstances.
We have replication set up on a database and it is working fine.
Now we want to update the database on the publisher. Using an installer we updated the database, but we are getting errors like "cannot update table as the table is in use".
So how can we update the database which is part of replication?
DML changes (INSERT, UPDATE, DELETE) will work as expected and replicate to subscribers. By default, schema changes (DDL) will be propagated to subscribers on synchronization; the publication property @replicate_ddl must be set to true. There are some exceptions, which can be found in Making Schema Changes on Publication Databases.
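You can check the current value at the publisher with sp_helppublication; its result set includes a replicate_ddl column (the publication name below is a placeholder):

    -- Run in the publication database at the publisher.
    -- replicate_ddl = 1 means supported DDL statements are propagated.
    EXEC sp_helppublication @publication = N'MyPublication';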
Background information
Let's say I have two database servers, both SQL Server 2008.
One is in my LAN (ServerLocal), the other one is on a remote hosting environment (ServerRemote).
I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up to date with the data in the database on ServerLocal.
ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available.
Current solution
I thought it would be a nice solution to use replication, so I've made ServerLocal a publisher and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal.
The problem
Records that exist on ServerRemote but not on ServerLocal are removed. This doesn't matter for most of my tables, but in some of my tables I'd like to keep the existing data (aspnet_users, for instance) and update the records if necessary.
What kind of replication fits my problem?
Option C: Transactional replication.
I've done this before, where you have data in a subscription database and don't want it overwritten by the snapshot. You can set your initial snapshot not to delete the existing records, and either not create the records that are in the publisher (assume they are already there) or create the records that are in the publisher (assume they are not there); see the sketch below.
Take a look at what would be correct for your situation, or leave a comment with more details on how you got your data into the subscriber originally. I'm not too familiar with aspnet_users and what that is. Transactional replication only helps if you don't need the data in the subscriber back at the publisher; if you want that, you'll have to use merge replication.
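As a rough sketch of the first option (all names below are placeholders): if the subscriber already holds the schema and the rows you want to keep, the push subscription can be created with @sync_type = N'replication support only', which skips the initial snapshot entirely so nothing at ServerRemote is dropped or purged.

    -- Run at the publisher (ServerLocal), in the publication database.
    -- The subscriber already has the schema and the data to preserve,
    -- so no snapshot is applied; only ongoing changes are pushed.
    EXEC sp_addsubscription
        @publication       = N'MyPublication',   -- hypothetical publication
        @subscriber        = N'ServerRemote',
        @destination_db    = N'MyWebAppDb',       -- hypothetical subscriber database
        @subscription_type = N'push',
        @sync_type         = N'replication support only';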