SQL Server replication models: snapshot, transactional, and merge - which is best? [closed]

I am trying to implement SQL Server database replication between two branch servers and a head office server.
My application is distributed: the main application is hosted at the head office, which controls the master data and final approvals. The branch servers are located in two other countries and are used to enter daily transactions.
Since internet bandwidth is very limited, I am planning to run replication only during off hours (i.e., from 12 AM to 8 AM); during business hours it is difficult to synchronize. All tables are designed to validate data and avoid duplication and other errors.
There is also the chance of an internet outage lasting a couple of days, maybe up to a week.
There are three types of tables:
Bidirectional - needs to sync both ways (HO to branch and branch to HO; approvals)
Sync from branch to HO (transactions)
Sync from HO to branches (masters)
When configuring replication, I am confused by the different types: snapshot, transactional, and merge replication.
Can anybody suggest which method is best for my model?
I am also facing an issue where primary keys and foreign keys are lost after configuring replication. Any idea why this happens?

Transactional replication is the best option for one-way sync, and merge replication for bidirectional sync.
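For the model described above, that would mean a merge publication for the bidirectional (approvals) tables and transactional publications for the one-way tables. A minimal sketch of the publication side, assuming a distributor is already configured; the database and table names (BranchDB, dbo.Approvals, dbo.Transactions) are illustrative:

    -- Merge publication for the bidirectional tables
    EXEC sp_replicationdboption @dbname = N'BranchDB',
         @optname = N'merge publish', @value = N'true';
    EXEC sp_addmergepublication @publication = N'ApprovalsPub';
    EXEC sp_addmergearticle @publication = N'ApprovalsPub',
         @article = N'Approvals', @source_owner = N'dbo', @source_object = N'Approvals';

    -- Transactional publication for the one-way (branch-to-HO) tables
    EXEC sp_replicationdboption @dbname = N'BranchDB',
         @optname = N'publish', @value = N'true';
    EXEC sp_addpublication @publication = N'TransactionsPub', @repl_freq = N'continuous';
    EXEC sp_addarticle @publication = N'TransactionsPub',
         @article = N'Transactions', @source_owner = N'dbo', @source_object = N'Transactions';

The subscription agents can then be given a schedule restricted to the off-hours window, which also fits the slow-link constraint in the question.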


How to sync two databases in a microservice architecture using CQRS with separate read/write stores [closed]

I was asked this question in an interview:
How do you sync data between two databases? There will be time delays, etc. How do we handle that?
The background: I mentioned microservice architecture and using CQRS for performance (a separate read/query database and a separate write/command database).
Now, if a customer enters or modifies data, how will it be replicated/synced into the read database?
I talked about things like Cosmos DB options that prevent dirty reads, and I also mentioned caching. But I am not certain what the various options for syncing are. The interviewer specifically asked how I would sync between two DBs at the SQL level.
CQRS is a pattern which dictates that the responsibility for Command and Query operations be separated.
There are multiple ways to synchronize data between databases: a master-slave configuration, an oplog-style replication mechanism, or something specific to the particular database.
What's more important here is deciding which strategy to use. Since you are using the CQRS pattern, you have more than one data store (a write store and a read store), and there is a fair chance these data stores will be network partitioned. In that case you have to decide what matters most to you, consistency or availability, which is generally governed by what the business requires.
So in general, the replication strategy to use depends on whether your business requires consistency or availability.
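At the SQL Server level specifically (which is what the interviewer asked about), one concrete mechanism is Change Tracking: the write database records which rows changed, and a background worker periodically projects those changes into the read store. A minimal sketch, with hypothetical database/table names (WriteDb, dbo.Customers):

    -- Enable change tracking on the write database and table.
    ALTER DATABASE WriteDb SET CHANGE_TRACKING = ON
        (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
    ALTER TABLE dbo.Customers ENABLE CHANGE_TRACKING;

    -- Background job: fetch everything that changed since the last projected
    -- version, then upsert/delete those rows in the read store.
    DECLARE @last_version BIGINT = 0;  -- persisted between runs in practice
    SELECT ct.CustomerId, ct.SYS_CHANGE_OPERATION, c.*
    FROM CHANGETABLE(CHANGES dbo.Customers, @last_version) AS ct
    LEFT JOIN dbo.Customers AS c ON c.CustomerId = ct.CustomerId;
    SELECT CHANGE_TRACKING_CURRENT_VERSION();  -- record this as the new @last_version

The lag between the two stores is the polling interval, which is exactly the eventual-consistency trade-off described above.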
References:
CAP Theorem: https://en.wikipedia.org/wiki/CAP_theorem
Replication (Driven by CAP Theorem): https://www.brianstorti.com/replication/
There are a couple of options for database syncing in SQL Server.
1. SQL Server Always On availability groups (SQL Server 2012 onwards) - you create a primary replica and one or more secondary replicas. Once Always On is configured, the secondary replicas are automatically updated from the primary. This also provides HA/DR: if the primary replica goes down, a secondary replica becomes active and takes over the primary role.
https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server?view=sql-server-2017
2. SQL Server replication - merge replication, transactional replication, etc.
https://learn.microsoft.com/en-us/sql/relational-databases/replication/types-of-replication?view=sql-server-2017
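On the DDL side, option 1 boils down to creating an availability group on the primary and joining the secondary. A rough sketch, assuming the WSFC cluster and database mirroring endpoints are already configured; the server and database names (SQL1, SQL2, AppDb) are illustrative:

    -- On the primary replica
    CREATE AVAILABILITY GROUP [AppAG]
    FOR DATABASE [AppDb]
    REPLICA ON
        N'SQL1' WITH (ENDPOINT_URL = N'TCP://sql1.corp.local:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
        N'SQL2' WITH (ENDPOINT_URL = N'TCP://sql2.corp.local:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC);

    -- On the secondary replica, after restoring AppDb WITH NORECOVERY
    ALTER AVAILABILITY GROUP [AppAG] JOIN;
    ALTER DATABASE [AppDb] SET HADR AVAILABILITY GROUP = [AppAG];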

Using SQL Service Broker vs. queues with a web service [closed]

We have a legacy system with a central database (SQL Server) and small clients (kiosks with local DBs running SQL Express), written as a WPF application. The data sync between the clients and the central DB is done in C# with ADO.NET SQL statements, which takes a huge toll on performance. We currently have 400 clients and the number is increasing; each client sends 100,000 records per day to the central database.
We are planning to rewrite this sync part using SQL Service Broker.
One of the main issues is that the schemas of the client and central DBs are different. The tables were not normalized, and worst of all, most of the columns use nvarchar datatypes to store datetime and integer data.
I am concerned about using Service Broker, as most of the business logic would have to be written in stored procedures.
I would like some ideas on whether this technology would be the best choice, or whether we should consider a REST-based service with a message queue instead.
TL;DR: I would recommend against using Service Broker in this kind of environment.
Detailed answer
While Service Broker is indeed a very lightweight and reliable communication mechanism, it was designed with a different goal in mind. Namely, it works best in a static topology, where administrators set everything up once and the entire system then runs for years with little or no change.
Judging by what I understood from your explanation, your network of connected hosts is much more dynamic, with hosts coming and going on a daily basis. This will incur high maintenance costs on your support team, because in order to establish communication between two Service Broker endpoints belonging to different SQL Server instances, you will need (among many other things) to generate at least one pair of certificates and exchange their public keys between the participating instances, after which they have to be deployed in both the master and the subject databases on both sides.
This certificate exchange and deployment must happen before Service Broker messaging becomes possible, so you will need another communication channel between the servers for the exchange itself. Normally this is done manually by DBAs, due to the high security risks associated with a potential loss of transport-level keys. In your environment, however, there is a good chance that people will simply not be able to keep up. Not to mention the potential for human error, which will be quite high given the large amount of repetitive manual work.
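To give a sense of the per-pair overhead: before any queues, services, or routes exist, each participating instance needs at least the transport security setup below, and the certificate public keys still have to be exchanged out of band. A sketch with illustrative names:

    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<strong password>';

    CREATE CERTIFICATE KioskTransportCert
        WITH SUBJECT = N'Service Broker transport certificate for this kiosk';
    BACKUP CERTIFICATE KioskTransportCert TO FILE = N'C:\certs\kiosk.cer';

    CREATE ENDPOINT BrokerEndpoint
        STATE = STARTED
        AS TCP (LISTENER_PORT = 4022)
        FOR SERVICE_BROKER (AUTHENTICATION = CERTIFICATE KioskTransportCert,
                            ENCRYPTION = REQUIRED);

Multiply that by 400 kiosks (plus the reciprocal certificate installs on the central server) and the maintenance burden becomes clear.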
In short, I would recommend looking for something that is easier to deploy and maintain. Change tracking might be a good start; as for transport, you have a full smorgasbord of choices, from WCF to Web API (to whatever else has appeared in the last few years).

Is multi-AZ RDS really worth it? [closed]

Looking at the prices for RDS instances, multi-AZ instances cost double. With a production environment in mind, is it worth it?
What uptime should I expect from a single-AZ instance, as opposed to a multi-AZ one? Has anyone had experience running a production DB in both single and multiple availability zones?
We have a multi-AZ production deployment with AWS RDS and it's been working fabulously well for the last 3 years.
The multi-AZ catalog page clearly lists the benefits of a multi-AZ versus a single-AZ RDS deployment. One of the most important aspects of running multi-AZ is that if one of the AZs in a region goes down, production application traffic is automatically routed to the RDS instance in the alternate AZ. Also, DB maintenance and upgrades are applied per AZ (for a multi-AZ RDS) without impacting uptime.
With respect to cost, it comes down to how much downtime the application can tolerate. It's a cost vs. uptime trade-off.

Mirror Live Database Sql Server 2008 R2 Enterprise [closed]

Quick Question: Is it possible to mirror a database without downtime?
Long Question: I have a database in production being used by quite a few clients. The previous management did not implement any kind of redundancy or high-availability strategy (no clustering... not even a storage array!!!), and now, as the business grows, this is becoming a huge liability... As an emergency measure I'm considering mirroring the database... The main problem is that I cannot take the database down; that would cause legal/financial problems under existing SLAs... So, can I mirror a database without taking it down?
Extra info:
The SQL Server version is 2008 R2 Enterprise.
The instance consists of one database only (it's a multi-tenant database)
The database infrastructure consists of one physical server running Windows 2008 R2 (a standalone server). It's neither a cluster nor a VM, and there's no storage array behind it... all data is on its only 2 TB disk...
The database size (.mdf) is about 170 GB...
There are about 100 transactions per second
There are no hours when usage goes down... the business is 24/7...
Yes, this totally looks like the environment a developer would create on their own machine...
This is all in Books Online. Hopefully you can try it in development, or on your desktop, first. There are quite a few things to consider, so you should give it a thorough read.
Yes, you can mirror a database without taking it down. You restore a backup to the mirror with the NORECOVERY option, then set up mirroring, and transactions start to flow from the principal to the mirror. You'll also need to set up anything else needed on the remote server: logins, jobs, etc.
Transfer logins using a SID-preserving method (such as Microsoft's sp_help_revlogin script), so you get matching SIDs between the servers.
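A minimal sketch of that sequence; the server and database names are illustrative, and the mirroring endpoints are assumed to already exist on both instances:

    -- On the principal: take backups while the database stays online.
    BACKUP DATABASE [ProdDb] TO DISK = N'\\share\ProdDb.bak';
    BACKUP LOG [ProdDb] TO DISK = N'\\share\ProdDb.trn';

    -- On the mirror: restore both WITH NORECOVERY so the database can keep receiving log.
    RESTORE DATABASE [ProdDb] FROM DISK = N'\\share\ProdDb.bak' WITH NORECOVERY;
    RESTORE LOG [ProdDb] FROM DISK = N'\\share\ProdDb.trn' WITH NORECOVERY;

    -- Point the partners at each other: mirror first, then principal.
    -- On the mirror:
    ALTER DATABASE [ProdDb] SET PARTNER = N'TCP://principal.corp.local:5022';
    -- On the principal:
    ALTER DATABASE [ProdDb] SET PARTNER = N'TCP://mirror.corp.local:5022';

None of these steps takes the principal offline; the database stays fully available throughout.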

Database geo-replication with low latency [closed]

I have a website which I am currently hosting on a single server in Europe. To improve the latency for non-European users I would like to add local servers in the US and Asia.
Keeping static files in sync is no problem. We add new content only once a day, so a simple rsync cron job will do fine to keep files updated.
What I am totally stuck on is how to handle this on the database side. I would prefer a single master database that holds all user information, so that if a local server ever goes offline we still have the user data on the main server (regardless of backups).
So far we are considering two options:
A database with geo-replication support
A database that supports geo-replication out of the box. It should be very easy to set up and should have very low latency for DB writes (i.e., without having to wait for a 'write success' message from the master server).
Programmatic approach with a master and a local database
A user visits from only one region at a time, so we could cook something up that connects to both the master and the local database. At first login, all user information would be pulled from the master database and cached in the local database. All data the user generates from then on could be stored in the local database and synced back to the master database in the background (a rough sketch follows below). It could work, but it seems overly complex and hard to fix if something goes out of sync.
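As a rough illustration of option 2, the background sync could be a job on the master that periodically pulls recent changes from each regional database over a linked server. Everything here is hypothetical (the linked server LOCAL_EU, the table dbo.UserData, an application-maintained ModifiedAt column), and last-write-wins conflict handling is assumed:

    -- Runs periodically on the master server.
    DECLARE @last_sync DATETIME2 = '2016-01-01';  -- persisted between runs in practice

    MERGE dbo.UserData AS target
    USING (SELECT UserId, Payload, ModifiedAt
           FROM LOCAL_EU.AppDb.dbo.UserData
           WHERE ModifiedAt > @last_sync) AS src
    ON target.UserId = src.UserId
    WHEN MATCHED AND src.ModifiedAt > target.ModifiedAt THEN
        UPDATE SET Payload = src.Payload, ModifiedAt = src.ModifiedAt
    WHEN NOT MATCHED THEN
        INSERT (UserId, Payload, ModifiedAt)
        VALUES (src.UserId, src.Payload, src.ModifiedAt);

Pulling from the master side keeps regional writes fast (no cross-region round trip on the user's request), at the cost of the sync lag you already said is acceptable.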
A little more background information on the database
our database does a lot of reads and few writes
database performance is not an issue at all, so we are only looking to improve the user experience (lower latency)
a user does not generate much data (10 KB in general, 200 KB at maximum)
we are not a bank or a stock exchange; if some user data is synced back to the master server a minute or even a few minutes late, it's not a big problem.
Our questions
is there a name that describes this specific problem? (so I can Google better)
is there a database that does geo replication out of the box without latency penalty? (Couchbase perhaps?)
would the programmatic approach be doable, or will it be a world of pain?
I would be very thankful for any insights, or perhaps a link to an article that covers something like this. I'm sure there are other small-scale websites out there that have run into this problem.
