SQL Server Change Tracking on an Azure replicated DB

I have a requirement to sync a set of tables (with their data) from our production database to a different database. This is an Azure SQL DB.
I created a replicated DB (using Azure geo-replication), which is read-only. My plan was to enable SQL Server Change Tracking (CT) on the replicated DB and query the changes from its change tables, so that the production DB would not be impacted by change tracking at all. But then I found out that it is not possible to enable Change Tracking, or even access the change tables, on the replica I created.
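For context, this is roughly what I had planned to run, and what works on a writable database (a minimal sketch; MyDb, dbo.TrackedTable and the Id key column are placeholder names):

```sql
-- Enable change tracking at the database and table level (placeholder names).
ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.TrackedTable
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Poll for changes since the last synced version (the sync job would persist this value).
DECLARE @last_sync_version BIGINT = 0;

SELECT ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION, t.*
FROM CHANGETABLE(CHANGES dbo.TrackedTable, @last_sync_version) AS ct
LEFT JOIN dbo.TrackedTable AS t
    ON t.Id = ct.Id;   -- assumes Id is the primary key
```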
Then I saw the Azure 'Sync to other databases' feature and tried it out with the replicated DB. That is also not possible, since the feature does not support syncing data from a read-only DB.
1) What is the solution for this? I cannot afford to use the 'Sync to other databases' feature on my production DB, because it uses DB triggers to track changes.
On the other hand, I cannot afford to enable CT on the production DB either.
2) Is there a way to enable and track changes using CT from a replicated DB?
3) Or is there a way to use the 'Sync to other databases' feature with a replicated DB?
The application I am trying to build is an analytics application, so I am trying to pull the data I need from a couple of production DBs.
Thank you.

Related

How to regularly download Geo-Replicated Azure Database (PaaS) data to On-Premise database

We have a geo-replicated database in Azure SQL (Platform as a Service). This is a master/slave type arrangement, so the geo-replicated database is read-only.
We want to download data regularly from this Azure SQL database to a SQL Server database on-premise that has the same schema, without it impacting performance too much (the Azure Database is the main database used by the application).
We originally looked at Azure SQL Data Sync, hoping to read data from the geo-replicated database and pull it down to on-premise, but it needs to create triggers and tracking tables. I don't feel overly comfortable with this, because it won't be possible to run it against the read-only slave database, so it must be set up on the transactional master database (impacting application performance), which in turn will re-create these extra data-sync artifacts on the geo-replicated database. It seems messy, with bloated data (we have a large number of tables and a lot of data, and Azure PaaS databases are limited in size as it is). We also use Redgate database lifecycle management, which can potentially blow these schema objects and tracking tables away every time we perform a release, as they're not created by us and are not in our source control.
What other viable options are there (other than moving away from PaaS and building a clustered IaaS VM environment across on-prem and cloud, with SQL Server installed, patched, etc.)? Please keep in mind that we are resource-stretched in terms of staff, which is why PaaS was an ideal place for our database originally.
I should mention that we want the on-premise database to be 'relatively' in sync with the Azure database, but the data on-premise can be up to an hour old.
Off the top of my head, some options may be SSIS packages, or somehow regularly downloading a BACPAC of the database and restoring it on-premise every 30 minutes (but it's a very large database).
Note, it only needs to be one-directional at this stage (Azure down to on-premise).
You can give Azure Data Factory a try, since it allows you to append data to a destination table or invoke a stored procedure with custom logic during the copy when SQL Server is used as a "sink". You can learn more here.
Azure Data Factory also allows you to incrementally load data (a delta) after an initial full load by using a watermark column that holds the last-updated timestamp or an incrementing key. The delta-loading solution loads the data changed between an old watermark and a new watermark. You can learn more about how to do that with Azure Data Factory in this article.
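To give an idea of the pattern, the query run for each incremental load would look something like this (a minimal sketch; dbo.SourceTable and the ModifiedDate watermark column are placeholder names, and the watermark values would be persisted by the pipeline):

```sql
-- Delta query between the old and new watermark (placeholder names).
DECLARE @old_watermark DATETIME2 = '2019-01-01T00:00:00';  -- value stored after the previous run
DECLARE @new_watermark DATETIME2 = SYSUTCDATETIME();        -- captured at the start of this run

SELECT *
FROM dbo.SourceTable
WHERE ModifiedDate >  @old_watermark
  AND ModifiedDate <= @new_watermark;

-- After a successful copy, the pipeline saves @new_watermark for the next run.
```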
Hope this helps.

Syncing two Azure databases without using Azure Data Sync

I have a huge (500+ table) Azure SQL database (SQL Server). I need to create a clone of this database on Azure, and sync the two databases once daily. The clone is for reporting purposes.
What is the best way to implement the sync, outside of Azure Data Sync? We've experimented with Azure Data Sync, and it's proven unreliable due to the large size of the database.
I've looked into transactional replication, but I cannot find any documentation that states that it is supported from an Azure database to another Azure database. Geo-replication may be another option, though I'm not sure it is a good fit for this use case.
To my knowledge, your best option is Azure Data Factory. It has a very easy-to-use wizard, as explained here. You can also create your own copy activities, as explained here and here.
You can schedule ADF execution as explained here too.
SQL Data Sync is in preview and for that reason is not recommended for production environments.
Geo-replication cannot be scheduled for synchronization.
Another option is to use cross-database queries, as mentioned here, and schedule execution of a synchronization procedure you create yourself using elastic jobs or Azure Automation; a sketch of this is shown below.
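As a rough illustration of that last option, assuming the Azure SQL elastic query feature and entirely placeholder names (SourceCred, ProductionDb, dbo.Orders), the setup and sync procedure in the reporting DB could look like this:

```sql
-- Requires a database master key in the reporting DB (omitted here).
CREATE DATABASE SCOPED CREDENTIAL SourceCred
WITH IDENTITY = 'sync_user', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE SourceDb
WITH (TYPE = RDBMS,
      LOCATION = 'myserver.database.windows.net',
      DATABASE_NAME = 'ProductionDb',
      CREDENTIAL = SourceCred);

-- External table that maps to dbo.Orders in the source DB.
CREATE EXTERNAL TABLE dbo.Orders_Remote
(
    Id           INT,
    CustomerId   INT,
    ModifiedDate DATETIME2
)
WITH (DATA_SOURCE = SourceDb, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'Orders');
GO

-- Synchronization procedure that an elastic job or Azure Automation runbook can call on a schedule.
CREATE PROCEDURE dbo.SyncOrders
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.Orders AS tgt
    USING dbo.Orders_Remote AS src
        ON tgt.Id = src.Id
    WHEN MATCHED THEN
        UPDATE SET tgt.CustomerId = src.CustomerId,
                   tgt.ModifiedDate = src.ModifiedDate
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Id, CustomerId, ModifiedDate)
        VALUES (src.Id, src.CustomerId, src.ModifiedDate);
END;
```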
Hope this helps.

Syncing MS SQL databases with AWS

I'm researching the differences between AWS and Azure for my company. We are going to build a web-based application that will span 3 regions, and each region needs to have an MS SQL database.
But I can't figure out how to do the following with AWS: the databases need to sync between each region (two-way), so the data stays the same in every database.
Why do we want this? For example, a customer* from the EU adds a record to the database. This database then needs to sync with the other regions, so that a customer from the US region can see the added records. (*Customers can add products to the database.)
Do you guys have any idea how we can achieve this?
It's a requirement to use MS SQL.
If you are using SQL Server on EC2 instances, then the only way to achieve multi-region, multi-master MS SQL Server is to use peer-to-peer transactional replication; however, it doesn't protect against individual row conflicts.
https://technet.microsoft.com/en-us/library/ms151196.aspx
This isn't a feature of AWS RDS for MS SQL; however, there is another product for multi-region replication available on the AWS Marketplace, but it only works for read replicas.
http://cloudbasic.net/aws/rds/alwayson/
At present, AWS doesn't support read replicas for SQL Server RDS databases.
However, replication between AWS RDS SQL Server databases can be done using DMS (Database Migration Service). Refer to the link below for more details:
https://aws.amazon.com/blogs/database/introducing-ongoing-replication-from-amazon-rds-for-sql-server-using-aws-database-migration-service/

How can I track changes (User Activity) on an MS-SQL database hosted on Amazon RDS?

I have a set of databases on an Amazon RDS instance. The version is SQL Server 2008 R2, and as far as I understand, I cannot simply set up an audit via Management Studio. I am considering creating a table which will be filled by my ASP.NET application whenever it attempts a query; however, this will not track a user who has made changes directly to my databases outside the application.
Does the Amazon console have anything to track database changes/user activity?
Thank you SO.
You might consider using audit triggers on the tables you wish to track.
This article
https://www.simple-talk.com/sql/database-administration/pop-rivetts-sql-server-faq-no.5-pop-on-the-audit-trail/
describes how to implement a simple audit trigger.
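In that spirit, a minimal version might look like the following (dbo.Orders and dbo.AuditLog are placeholder names; the linked article covers a more complete implementation):

```sql
-- Placeholder audit table; one XML snapshot per affected row.
CREATE TABLE dbo.AuditLog
(
    AuditId   INT IDENTITY(1,1) PRIMARY KEY,
    TableName SYSNAME       NOT NULL,
    Operation VARCHAR(10)   NOT NULL,
    ChangedBy SYSNAME       NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    RowData   NVARCHAR(MAX) NULL
);
GO

CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Work out which operation fired the trigger.
    DECLARE @op VARCHAR(10) =
        CASE WHEN EXISTS (SELECT * FROM inserted) AND EXISTS (SELECT * FROM deleted) THEN 'UPDATE'
             WHEN EXISTS (SELECT * FROM inserted) THEN 'INSERT'
             ELSE 'DELETE'
        END;

    -- Log the new image for INSERT/UPDATE and the old image for DELETE.
    INSERT dbo.AuditLog (TableName, Operation, RowData)
    SELECT 'dbo.Orders', @op, (SELECT r.* FOR XML PATH('row'))
    FROM (SELECT * FROM inserted
          UNION ALL
          SELECT * FROM deleted WHERE @op = 'DELETE') AS r;
END;
```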

Is replication the best method for my scenario?

I have a WinForms business application that connects to a SQL Server on a server within the business network. Recently we have added an ASP.NET web site so some of the information within the system can be accessed from the Internet. This is hosted on the same server as the SQL Server.
Due to the bandwidth available to the business network from the Internet we want to host the web site with a provider but it needs access to the SQL Server database.
95% of data changes are made by the business using the WinForms application. The web site is essentially a read only view of the data but it is possible to add some data to the system which accounts for the other 5%.
Is replication the best way to achieve the desired result, e.g. the SQL Server within the business network remains the master database, as most changes are made there, and it is then replicated to the off-site server? If so, which type of replication would be the most suitable, and would it support replicating the small amount of data entered from the ASP.NET web site back to the main server?
The SQL Server is currently 2005 but can be upgraded if required for replication.
Are there other solutions to this problem?
Yes, since the web application generates at most 5% of the transactions, you can separate it out.
I mean, you can have a different DB that is a carbon copy of the master one and have the web application point to this DB.
You can set up bidirectional transactional replication, so that transactions made to the master DB are replicated to the secondary DB, and transactions made to the secondary DB are replicated back.
No need to upgrade, as SQL Server 2005 supports replication.
For further information check MSDN on replication here: Bidirectional Transactional Replication
In a nutshell, here are the steps you would take (a rough sketch of the first two follows below):
Take a full backup of the master DB
Restore the DB to the newly created DB server
Configure transactional replication between them.
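Purely as an illustration of the backup/restore portion, with placeholder database names and file paths (the replication setup itself is usually done through the replication wizards or sp_addpublication/sp_addsubscription, as covered in the MSDN article above):

```sql
-- On the primary (business network) server: take a full backup.
BACKUP DATABASE MyAppDb
TO DISK = N'D:\Backups\MyAppDb.bak'
WITH INIT;

-- On the new (web-facing) server, after copying the .bak file across.
-- Logical file names here are placeholders; check them with RESTORE FILELISTONLY.
RESTORE DATABASE MyAppDb
FROM DISK = N'D:\Backups\MyAppDb.bak'
WITH MOVE 'MyAppDb'     TO N'D:\Data\MyAppDb.mdf',
     MOVE 'MyAppDb_log' TO N'D:\Logs\MyAppDb_log.ldf',
     RECOVERY;
```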
For better performance, you can also have the primary DB mirrored onto some other DB server.
