I have a WinForms business application that connects to a SQL Server on a server within the business network. Recently we have added an ASP.NET web site so some of the information within the system can be accessed from the Internet. This is hosted on the same server as the SQL Server.
Due to the limited bandwidth available to the business network from the Internet, we want to host the web site with a provider, but it needs access to the SQL Server database.
95% of data changes are made by the business using the WinForms application. The web site is essentially a read only view of the data but it is possible to add some data to the system which accounts for the other 5%.
Is replication the best way to achieve the desired result, i.e. the SQL Server within the business network remains the master database (as most changes are made to it) and is then replicated to the off-site server? If so, which type of replication would be the most suitable, and would it support replicating the small amount of data entered from the ASP.NET web site back to the main server?
The SQL Server is currently 2005 but can be upgraded as required for any replication requirements.
Are there other solutions to this problem?
Yes. Since the web application accounts for at most 5% of transactions, you can separate it.
That is, you can have a different DB which is a carbon copy of the master one and have the web application point to this DB.
You can set up bidirectional transactional replication, so that transactions made to the master DB get replicated to the secondary DB, and transactions made to the secondary DB get replicated back.
There is no need to upgrade, as SQL Server 2005 supports replication.
For further information check MSDN on replication here: Bidirectional Transactional Replication
In a nutshell, here are the steps you would take:
Take a full backup of the master DB
Restore the DB to the newly created DB server
Configure transactional replication between them.
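A minimal T-SQL sketch of those steps, assuming hypothetical database, path, and publication names; the subscriber-side setup (sp_addsubscription, where loopback detection matters for the bidirectional case) is omitted for brevity:

```sql
-- 1. Take a full backup of the master DB (on the business-network server).
BACKUP DATABASE BusinessDB
    TO DISK = N'C:\Backups\BusinessDB.bak'   -- hypothetical path
    WITH INIT;

-- 2. Restore the backup on the newly created off-site server.
RESTORE DATABASE BusinessDB
    FROM DISK = N'C:\Backups\BusinessDB.bak'
    WITH RECOVERY;

-- 3. Configure transactional replication (publisher side shown).
USE BusinessDB;
EXEC sp_replicationdboption
     @dbname  = N'BusinessDB',
     @optname = N'publish',
     @value   = N'true';

EXEC sp_addpublication
     @publication = N'BusinessDB_Pub',
     @repl_freq   = N'continuous',
     @status      = N'active';

EXEC sp_addarticle
     @publication   = N'BusinessDB_Pub',
     @article       = N'Customers',          -- repeat per table
     @source_object = N'Customers';
```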
For better performance, you can also have the primary DB mirrored onto some other DB server.
We are developing an application that has 2.2 million customers. We have developed a REST API; the transaction report API hits a huge table to generate a complex report. If more than 100k customers request this data at the same time, it puts a huge load on our database server. My question is: how can we generate the report without hitting the SQL Server database directly? And how do the world's best organizations manage databases for this type of application?
We use ASP.NET MVC 5 and SQL Server.
There are at least two standard methods to generate a report without hitting the SQL Server database:
use snapshot isolation in the same OLTP database to avoid the report blocking other writing transactions, and to ensure you read consistent data (see the sketch after this list)
use data transferred (via ETL, log shipping, replication, ...) from the OLTP database to another (OLAP) database, i.e. to a warehouse and later into data marts/cubes
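For the first option, a minimal T-SQL sketch of snapshot isolation, with hypothetical database and table names:

```sql
-- One-time setup: enable snapshot isolation on the OLTP database.
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the reporting session: read a consistent point-in-time view
-- without taking shared locks that would block writers.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;

SELECT t.CustomerId, SUM(t.Amount) AS TotalAmount
FROM dbo.Transactions AS t
GROUP BY t.CustomerId;

COMMIT;
```

Under snapshot isolation, readers are served row versions from tempdb instead of taking locks, so the report neither blocks nor is blocked by concurrent OLTP writes.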
You can install another SQL instance in prod and create transactional replication on the database, then report off the replicated side of the database.
I'm relatively new to Azure and am having trouble finding what options are out there for connecting to an existing SQL database to push data into it.
The situation is that we have an external client who needs to connect to our Azure SQL database to push data into it on an ongoing basis. We can't give them permission to get into our database, so we're looking at what we can do to allow data in. At this point the best option seems to be to create a web service deployed in Azure that will validate the data and then push it into our database.
The question I have is, are there other options to do this in an easier way? Are there Azure services or processes that can be set up to automatically process a file and pull the data into a database? Any other go-between options when each side has their own database and for security reasons can't just open up access to it?
Azure Data Factory works great for basic ETL. If neither party can grant direct access, you can use an intermediate repository like Blob Storage to drop csv/xml/json files for ingestion. If they'll grant you access to pull, you can set up a linked service that more or less functions the same as a linked server in MSSQL. As of the last release, ADF now supports Azure-hosted SSIS packages too.
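If you go the Blob Storage route and the load ends up on your side, one possibility is pulling dropped files straight into a staging table with BULK INSERT from an external data source; a sketch, with hypothetical account, container, and table names (the SAS token is elided):

```sql
-- One-time setup: a database master key is required for credentials.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Credential holding a SAS token scoped to the drop container.
CREATE DATABASE SCOPED CREDENTIAL DropZoneCred
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS token>';

-- External data source pointing at the intermediate container.
CREATE EXTERNAL DATA SOURCE DropZone
WITH (TYPE = BLOB_STORAGE,
      LOCATION   = 'https://youraccount.blob.core.windows.net/incoming',
      CREDENTIAL = DropZoneCred);

-- Load a dropped CSV into a staging table for validation.
BULK INSERT dbo.ClientDataStaging
FROM 'clientdata.csv'
WITH (DATA_SOURCE = 'DropZone', FORMAT = 'CSV', FIRSTROW = 2);
```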
I would do this via SSIS using SQL Server Management Studio (if it's a one-time operation). If you plan to do this repeatedly, you could schedule the SSIS job to execute on a schedule. SSIS will do bulk inserts using small batches, so you shouldn't have transaction log issues and it should be efficient (because of the bulk inserting). Before you do this insert, though, you will probably want to consider your performance tier so you don't get major throttling by Azure and possible timeouts.
I'm researching the differences between AWS and Azure for my company. We are going to build a web-based application that will span 3 regions; each region needs to have an MS SQL database.
But I can't figure out how to do the following with AWS: the databases need to sync between regions (two-way), so the data stays the same in every database.
Why do we want this? For example, a customer* from the EU region adds a record to the database. That database then needs to sync with the other regions, so a customer from the US region can see the added records. (*Customers can add products to the database.)
Do you guys have any idea how we can achieve this?
It's a requirement to use MS SQL.
If you are using SQL Server on EC2 instances, then the only way to achieve multi-region, multi-master for MS SQL Server is to use Peer-to-Peer Transactional Replication; however, it doesn't protect against individual row conflicts.
https://technet.microsoft.com/en-us/library/ms151196.aspx
This isn't a feature of AWS RDS for MS SQL; however, there is another product for multi-region replication available on the AWS Marketplace, but it only works for read replicas.
http://cloudbasic.net/aws/rds/alwayson/
At present AWS doesn't support read replicas for SQL Server RDS databases.
However, replication between AWS RDS SQL Server databases can be done using DMS (Database Migration Service). Refer to the link below for more details:
https://aws.amazon.com/blogs/database/introducing-ongoing-replication-from-amazon-rds-for-sql-server-using-aws-database-migration-service/
I am developing a web application in which I need to maintain the website on the local servers themselves, with the database on the same machine; the local database will change periodically. There is a central database through which I have to access all the data in all the remaining DBs.
The problem is that even when the Internet connection is disabled, the local server will update the local database, but when it regains the Internet connection it has to update the central database with the locally modified data.
The tables (I mean the database schema, table names, attributes, everything) are the same in all the DBs. Data should be appended if any is added, deleted if any is deleted, and modified if any is changed.
I am using MySQL as the DB, Apache Tomcat as the server, and JSP/Servlets for the business logic.
Please visit http://dev.mysql.com/doc/refman/5.1/en/replication-howto.html
MySQL replication might do the job, but there are a few things that you have to consider, such as:
the amount of data that has to be synchronized
the OS used on master and slave servers
the Internet connection issue - why do you disable the Internet connection? One option might be a scheduled job (crontab); a sketch of the basic replication setup follows this list.
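A minimal sketch of the basic MySQL master-slave setup, assuming the server whose changes must be pushed acts as master; the hostname and account are hypothetical, and the binlog file/position come from SHOW MASTER STATUS:

```sql
-- In my.cnf on the master:  server-id = 1, log-bin = mysql-bin
-- In my.cnf on the slave:   server-id = 2

-- On the master: create a replication account and note the binlog position.
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';
SHOW MASTER STATUS;

-- On the slave: point it at the master and start replicating.
CHANGE MASTER TO
    MASTER_HOST     = 'central.example.com',  -- hypothetical host
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',     -- from SHOW MASTER STATUS
    MASTER_LOG_POS  = 107;
START SLAVE;

-- Verify replication health; the slave retries automatically after
-- a dropped connection, which covers the intermittent-Internet case.
SHOW SLAVE STATUS;
```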
I have one SQL Server Express instance with a pretty normal well formed database. I need to have the data continuously replicated to a SQL Server Express instance on another server.
Now, I know that SQL Server Express does not include the Publisher part of built-in replication, so I'm looking for alternative solutions. I do not want to upgrade any of the databases.
Naturally, I could build my own replication with GUIDs, timestamps, etc. and transfer the data using my own code (as suggested in SQL Server Express database replication/synchronization), but I would want to avoid all that work, especially seeing that the replication needed is really very basic.
Perhaps a generic trigger added to each table?
Perhaps some kind of database job?
Any suggestions?
You wouldn't be able to utilize any built-in job scheduling, because Express does not ship with SQL Server Agent.
Here are your options as far as I see it:
Write an application that transfers "articles" from your "publisher" db to your "subscriber" db(s)
Create a set of views summarizing the data that you want published. Then create INSTEAD OF triggers on these views (you can't create an AFTER/FOR trigger on a view) to process that data and transfer it to your "subscriber"(s); a sketch follows at the end of this answer.
Neither of those is a very intensive task. In my opinion, just to have it centralized, I would go the first route. That way all of the logic is contained within the application, and your "publisher" database stays ignorant of the replication. Not to mention your application could handle an unavailable subscriber pretty easily.
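For the second option, a minimal T-SQL sketch of an INSTEAD OF trigger on a publishing view, with hypothetical table, view, and queue names; a transfer process would periodically drain the queue table and push rows to the subscriber:

```sql
-- A view exposing just the data to be "published".
CREATE VIEW dbo.PublishedOrders
AS
SELECT OrderId, CustomerId, Total
FROM dbo.Orders;
GO

-- INSTEAD OF trigger: apply the write locally and queue it for transfer.
CREATE TRIGGER dbo.tr_PublishedOrders_Insert
ON dbo.PublishedOrders
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Perform the actual insert against the base table.
    INSERT INTO dbo.Orders (OrderId, CustomerId, Total)
    SELECT OrderId, CustomerId, Total
    FROM inserted;

    -- Record the change so the transfer process can replay it remotely.
    INSERT INTO dbo.ReplicationQueue (OrderId, ChangeType, QueuedAt)
    SELECT OrderId, 'I', GETDATE()
    FROM inserted;
END;
```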