SQL Server Replication - Objects to include

Are there any SQL Server replication best practices, or any links I can read up on?
I'm using 2012 and 2014. In general, what types of database objects do people typically replicate from the source instance (publication) to the target instance (subscription)? Tables are definitely one of them. If there are a lot of views associated with the source database that are probably not being used in the replicated database (target/subscription), should I include them in the replication process? What about stored procedures? In both cases, would it be better to replicate just the data and manually deploy the views and stored procedures? I'd like to get some ideas/suggestions. Thanks

Here are some resources that cover replication best practices and improving performance:
Best Practices for Replication Administration
Enhance General Replication Performance
You can find a list of database objects that can be published using Replication here:
Publish Data and Database Objects
If the objects are being used at the replicated database (subscriber), then yes, you should replicate them. If not, feel free to exclude them from the publication.
The benefit of including them in the publication, rather than deploying them manually, is that replication supports schema changes to published objects: when you make schema changes on appropriate published objects at the Publisher, those changes are propagated to all Subscribers by default. This is covered in Make Schema Changes on Publication Databases.
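As an illustration, here is a minimal T-SQL sketch of adding a table, a view, and a stored procedure as articles to an existing transactional publication via sp_addarticle. The publication and object names are hypothetical; see the sp_addarticle documentation for the full set of options.

```sql
-- Run at the Publisher, in the publication database.
-- Assumes an existing transactional publication 'MyPublication' and
-- objects dbo.Orders, dbo.vOrderSummary, dbo.usp_GetOrders (hypothetical).

-- Table article: both schema and data are replicated.
EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders',
    @type          = N'logbased';          -- default for tables

-- View article: schema only; the view definition is created at the
-- Subscriber and reads the replicated tables there.
EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'vOrderSummary',
    @source_owner  = N'dbo',
    @source_object = N'vOrderSummary',
    @type          = N'view schema only';

-- Stored procedure article: schema only.
EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'usp_GetOrders',
    @source_owner  = N'dbo',
    @source_object = N'usp_GetOrders',
    @type          = N'proc schema only';
```

Schema-only articles like these are exactly the case where the automatic schema-change propagation mentioned above pays off compared with manual deployment.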

Related

Keep two different databases synchronized

I'm modeling a new microservice architecture, migrating part of a monolithic application to microservices.
I'm adding a new PostgreSQL database. The idea is to use that database in the future, but for now I still need to keep the old SQL Server database updated, and also synchronize the PostgreSQL database whenever something new appears in the old database.
I've searched for ETL tools, but they are meant to move data to a data warehouse (that's not what I need). I can't simply replicate the information because the DB model is not the same.
Basically, I need a way to detect new rows inserted in the SQL Server database, transform that information, and insert it into my PostgreSQL database.
Any suggestions?
PostgreSQL's foreign data wrappers might be useful. My approach would be to change the frontend to use PostgreSQL and let PostgreSQL handle the split via its various features (triggers, rules, ...).
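To make the foreign data wrapper idea concrete, here is a minimal sketch on the PostgreSQL side using the third-party tds_fdw extension (assuming it is installed). The server, table, column, and credential values are all placeholders, and the final load would run on whatever schedule suits you (cron, pg_cron, or an application job).

```sql
-- Requires the tds_fdw extension (PostgreSQL FDW for SQL Server via TDS).
CREATE EXTENSION IF NOT EXISTS tds_fdw;

CREATE SERVER mssql_legacy
    FOREIGN DATA WRAPPER tds_fdw
    OPTIONS (servername 'legacy-sql.example.com', port '1433', database 'LegacyDb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER mssql_legacy
    OPTIONS (username 'sync_user', password 'secret');

-- Expose a SQL Server table as a local foreign table...
CREATE FOREIGN TABLE legacy_customers (
    customer_id integer,
    name        text,
    created_at  timestamp
)
SERVER mssql_legacy
OPTIONS (table_name 'dbo.Customers');

-- ...then transform and load into the new model on a schedule,
-- inserting only rows not yet copied.
INSERT INTO customers (id, display_name)
SELECT lc.customer_id, lc.name
FROM   legacy_customers lc
WHERE  NOT EXISTS (SELECT 1 FROM customers c WHERE c.id = lc.customer_id);
```

The transformation between the two models then lives in ordinary SQL on the PostgreSQL side, which fits the "DB model is not the same" constraint.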
Take a look at StreamSets Data Collector. It can detect changes in SQL Server and insert/update/delete to any DB that has a JDBC driver, including Postgres. It is open source, but you can buy support. You can also make field changes/additions/removals/renames in the data stream so that the fields match the target table.

Copying tables for reporting purposes to a separate database

I have a transactional database (SQL Server 2014) with around 60 tables, and there is a requirement to create a separate reporting database for reporting purposes.
This will only need to run every 24 hours - however, I will need to move the data into a different, more query-friendly schema!
Because of this, I would hope I could just create some views on the transactional DB, then create a table based on each view in the reporting DB and copy across the data.
I originally thought of writing a scheduled Windows service that somehow extracts data from the tables and inserts it into the new ones, but then realized that if the schema changes it has to be updated in two places - and surely an Enterprise SQL Server license must have some tricks.
I then looked into 'database mirroring' of specific tables, but this looks like it will soon be deprecated.
'Log shipping' looks like more of a disaster recovery solution!
Is there an industry 'best' approach to this problem?
You will need to devise an ETL process to extract data from your source database, transform it, and load it into your reporting database. There are many tools available to make this easier: SSIS, Azure Data Factory for Azure SQL, and many other options. You can also use SQL Server Agent to schedule stored procedures that run your ETL process.
Your target database will look very different from your source database, and there is really no quick way (quick as in scheduling a backup) to accomplish this. There is plenty of information on data warehouse and ETL design available to help you decide how to proceed.
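As a minimal sketch of the view-plus-stored-procedure route, assuming databases named TransactionalDb and ReportingDb on the same instance and a reporting-friendly view dbo.vSalesFlattened (all hypothetical names):

```sql
-- Run in ReportingDb. A full nightly reload is the simplest option and
-- is usually adequate at a 24-hour cadence; incremental loads can come later.
USE ReportingDb;
GO
CREATE PROCEDURE dbo.usp_LoadSalesReport
AS
BEGIN
    SET NOCOUNT ON;

    -- Full reload of the reporting table from the source view.
    TRUNCATE TABLE dbo.SalesReport;

    INSERT INTO dbo.SalesReport (OrderId, CustomerName, OrderTotal, OrderDate)
    SELECT OrderId, CustomerName, OrderTotal, OrderDate
    FROM   TransactionalDb.dbo.vSalesFlattened;
END;
GO
-- Then schedule a nightly SQL Server Agent job whose T-SQL step is:
--   EXEC ReportingDb.dbo.usp_LoadSalesReport;
```

Keeping the shaping logic in the source-side view means a schema change only has to be absorbed in one place, which addresses the "update in two places" concern above.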

SQL Server move data between databases

We have a requirement to move data between different database instances on a regular basis (e.g., some customers are willing to pay more for better performance), so this is not going to be a one-off.
The database tables have referential integrity. Is there a way to do this without rewriting a SQL script (or using some other method) every time we migrate a customer's data?
I came across this: How to move data between multiple database's table while maintaining foreign-key relationships/referential integrity?. However, it appears that we have to write a script every time we migrate data (please correct me if I misunderstood the answer on that thread).
Thanks
Edit:
Both servers are using SQL Server 2012 (same version). It's an Azure SQL database.
They are not necessarily linked (no firewall between them).
We are only transferring some data, not the whole database. This is only for certain customers who opted to pay more.
The schemas are exactly the same in both databases.
Preyash - please see the documentation on the Split-Merge tool. The Split-Merge tool enables you to move data between databases, as you have described, based on a sharding key (e.g., customer ID). One modification you will need in your application is to add a shard map (i.e., a database that understands the global state of which customers reside in which databases).
Have a look into Azure Data Sync; it is much more aligned with your requirements, though you may end up having another Azure SQL database to maintain as a hub. Azure Data Sync follows a hub-and-spoke pattern and lets you do flexible directional syncs with a syncing gap of a few minutes. It is simple and can be set up very quickly, without any scripts, as you wanted.
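Since both databases are Azure SQL with identical schemas, another scripted pattern (a sketch under those assumptions, not a turnkey solution) is to use elastic query external tables on the target and copy one customer at a time, parents before children. All server, credential, table, and column names below are hypothetical.

```sql
-- One-time setup, run in the TARGET database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE DATABASE SCOPED CREDENTIAL SourceCred
    WITH IDENTITY = 'migration_user', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE SourceDb WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'StandardTierDb',
    CREDENTIAL = SourceCred
);

-- One external table per source table (the schemas are identical).
CREATE EXTERNAL TABLE dbo.Customers_Src (
    CustomerId int, Name nvarchar(100)
) WITH (DATA_SOURCE = SourceDb, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'Customers');

CREATE EXTERNAL TABLE dbo.Orders_Src (
    OrderId int, CustomerId int, Total money
) WITH (DATA_SOURCE = SourceDb, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'Orders');

-- Repeatable per-customer move: copy parents before children so the
-- foreign keys are satisfied (use SET IDENTITY_INSERT if keys are identities).
DECLARE @CustomerId int = 42;
BEGIN TRAN;
INSERT INTO dbo.Customers (CustomerId, Name)
SELECT CustomerId, Name FROM dbo.Customers_Src WHERE CustomerId = @CustomerId;
INSERT INTO dbo.Orders (OrderId, CustomerId, Total)
SELECT OrderId, CustomerId, Total FROM dbo.Orders_Src WHERE CustomerId = @CustomerId;
COMMIT;
```

Because the script is parameterized by the customer ID, it does not have to be rewritten for each migration, only re-run.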

Can I store any custom tables in SharePoint system database?

Can I store any custom tables in SharePoint's own database?
Is this supported behavior or not?
(I mean tables in MS SQL database, not SharePoint lists.)
If I can, how well does this play with backup/restore functionality?
What are possible caveats?
For anyone wondering why I'm asking: there's an app which is bound to a SharePoint server and needs to store some purely relational internal information that doesn't make sense apart from that SharePoint instance. I would like to narrow data storage down to one place, but I'm not sure if SharePoint likes its database being used for other purposes.
I'm using SharePoint 2007.
Is it possible? Sure. Should you? Nope.
The SharePoint content/configuration databases are subject to change with any update Microsoft releases, and any changes you make will very likely be destroyed, and if your farm depends on them, be left non-functional.
If you want to store purely relational data in a set of tables, just create another database. There's nothing stopping you from using the same SQL Server instance that houses your SharePoint content and/or configuration databases to store other relational databases as well.
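For example, a minimal sketch of such a companion database (all names hypothetical):

```sql
-- Run on the same SQL Server instance that hosts the SharePoint databases,
-- but as a completely separate database that no update can touch.
CREATE DATABASE MyAppData;
GO
USE MyAppData;
GO
CREATE TABLE dbo.AppSettings (
    SettingKey   nvarchar(128) NOT NULL PRIMARY KEY,
    SettingValue nvarchar(max) NULL
);
-- Back this database up on the same schedule as the SharePoint databases
-- so the app data and the farm restore to a consistent point.
```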
Not a good idea: Support for changes to the databases used by Windows SharePoint Services
...
Making any modification to the database schema
Adding tables to any of the databases
...
If an unsupported database modification is discovered during a support call, the customer must perform one of the following procedures at a minimum:
Perform a database restoration from the last known good backup that did not include the database modifications
Roll back all the database modifications
It is even worse than the above. It is likely that future upgrades will notice your changes to the content database schema and refuse to upgrade the database, period.

How to automatically synchronize tables in different databases

A little background: I'm working in a large company with a lot of branches. We have several applications with separate databases, sometimes on different servers, but every database contains a table with a list of branches and their relationships. I want to automatically synchronize these tables when one of them changes.
My question is: what are the best practices for automatic synchronization of tables in different databases (Microsoft SQL Server 2008)?
Are there SQL Server features for that purpose? Is an external tool a good way? Or is it better to write a small application and run it as a service, or use the scheduler?
You can use replication (a SQL Server built-in feature) to synchronize different databases.
You can also use triggers or log shipping to sync your tables as records are added, updated, or deleted:
Here are some links about replication.
Here are some links about log shipping.
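For the trigger approach, here is a minimal sketch assuming a linked server named HQ_SERVER pointing at the peer instance; the database, table, and column names are hypothetical, and UPDATE/DELETE would need similar triggers.

```sql
-- Fires on the source branches table and pushes new rows to the peer.
-- Note: the remote INSERT runs inside the original transaction (typically
-- as a distributed transaction), so if the peer is down the local insert
-- fails too. A common, more robust variant is to write to a local queue
-- table instead and let a SQL Agent job drain it to the other servers.
CREATE TRIGGER trg_Branches_Sync
ON dbo.Branches
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO HQ_SERVER.OtherAppDb.dbo.Branches (BranchId, BranchName, ParentBranchId)
    SELECT BranchId, BranchName, ParentBranchId
    FROM   inserted;
END;
```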
