Drawbacks of marked transactions in TFS 2010 backups - sql-server

I wanted to see if you guys are utilizing marked transactions in your TFS backup scenario. Are there any drawbacks or gotchas to consider for this?

If I use the TFS Power Tools to create a backup plan, the following is created for me:
Tables and Stored Procedures needed for marked transactions
Scheduled Jobs
Maintenance Plans for Full, Differential, and Transaction Logs
The Backup/Restore Power Tool relies on SQL marked transactions to
keep consistency across the TFS (and dependency products) databases. Source: http://intovsts.net/tag/tfs-power-tools/
Before inserting named marks into the transaction log, consider the
following: Source: MSDN
Because transaction marks consume log space, use them only for
transactions that play a significant role in the database recovery
strategy.
After a marked transaction commits, a row is inserted in the
logmarkhistory table in msdb.
If a marked transaction spans multiple databases on the same database
server or on different servers, the marks must be recorded in the logs
of all the affected databases.
That kind of settles the matter of marked transactions in my backup plan. Especially since the TFS databases use the full recovery model and the tool relies on it, there isn't much choice. :)
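For reference, a marked transaction is just an ordinary transaction opened WITH MARK, and a restore can stop at that mark. A minimal sketch of both sides, where the database, table, and mark names are all hypothetical (the Power Tools generate their own):

    -- The mark is recorded in the log of every database the transaction modifies,
    -- and a row is added to msdb.dbo.logmarkhistory
    BEGIN TRANSACTION TfsSyncPoint WITH MARK 'Nightly TFS backup sync point';
        UPDATE Tfs_Configuration.dbo.MarkTable SET LastMark = GETUTCDATE();
        UPDATE Tfs_DefaultCollection.dbo.MarkTable SET LastMark = GETUTCDATE();
    COMMIT TRANSACTION;

    -- At restore time (after restoring the full backup WITH NORECOVERY),
    -- each database can be rolled forward to the same consistent point
    RESTORE LOG Tfs_Configuration
        FROM DISK = N'X:\Backups\Tfs_Configuration.trn'
        WITH STOPATMARK = 'TfsSyncPoint', RECOVERY;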

Related

SQL Transactional Replication - initial snapshot places table locks?

When the initial snapshot is being generated while configuring SQL Server
Transactional Replication, does anyone know if the snapshot agent places locks on the tables (articles) you have selected? I have some tables that contain 2+ million rows and wanted to know if SQL Server actually places table locks to prevent updates while the publishing database is online. If locks are placed, then I want to run the initial snapshot during off-peak hours in production.
Thanks!
In transactional replication, or any other type of replication, the starting point is a snapshot of the database. The initial step of creating the snapshot is exactly the same for every type of replication.
SQL Server does not obtain any kind of locks at all when creating a snapshot; it literally is a snapshot of the database at a certain point in time, and creating the snapshot does not interfere with any transactions. Uncommitted transactions are rolled back in the snapshot once it is created.
To read more about how database snapshots work, see the MSDN article How Database Snapshots Work.
If you're running on an edition of SQL Server that supports database snapshots (as in create database [foo]... as snapshot of [bar]), then you can optionally use those as the basis of the snapshot. Check the @sync_method parameter of sp_addpublication. The caveat is that you still probably want to do it during a non-busy time of the day because of how database snapshots work (i.e. copy-on-write will slow down any write activity), but you won't be contending on locks.
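For completeness, creating such a database snapshot looks like this; a sketch where the snapshot name and path are hypothetical, and NAME must be the logical name of the source database's data file:

    CREATE DATABASE bar_snapshot
    ON ( NAME = bar_data, FILENAME = N'X:\Snapshots\bar_data.ss' )
    AS SNAPSHOT OF bar;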
Starting with SQL Server 2005, the default @sync_method value for sp_addpublication is "concurrent", which means the tables are not locked during the snapshot agent run. Note this is not entirely true - the snapshot agent does place schema locks on the tables, but the duration of that lock is mere seconds at most.
So if you set @sync_method = "concurrent", then no, updates, in theory, will not be blocked. If @sync_method = "native" (the default in SQL Server 2000) or "character", then yes, updates will be blocked.
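A minimal sketch of creating a publication with the concurrent sync method; the database and publication names here are hypothetical:

    USE PublisherDb;
    EXEC sp_replicationdboption @dbname = N'PublisherDb',
        @optname = N'publish', @value = N'true';
    -- 'concurrent' builds the initial snapshot without holding table locks
    EXEC sp_addpublication @publication = N'MyTransPub',
        @sync_method = N'concurrent',
        @repl_freq = N'continuous',
        @status = N'active';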

What is the best way to replicate a database for SSRS

I have installed a SQL Server database (mainServer) in one instance and a SQL Server database for the ReportServer in another. What is the best way to replicate data from mainServer to the ReportServer? Data in mainServer changes frequently, and having current information in the ReportServer is very important.
There are many ways to do this:
mirroring
log shipping
transactional replication
merge replication
snapshot replication
are there some best-practices about this?
Thanks
You need Transactional Replication for your case. Here is why you would not need the other four options:
Mirroring
This is generally used to increase the availability of a database server and provides for automatic failover in case of a disaster.
Typically, even though you have more than a single copy of the database (recommended to be on different server instances), only one of them is active at a time, called the principal server.
Every operation on this server instance is mirrored on the others continuously (as soon as possible), so this doesn't fit your use case.
Log Shipping
In this case, apart from the production database servers, you have extra failover servers such that the backup of the production server's database, differential & transactional logs are automatically shipped (copied) to the failovers, and restored.
The replication here is relatively scheduled to be at a longer interval of time than the other mechanisms, typically ranging from an hour to a couple of hours.
This also provides for having the failover servers readied manually in case of a disaster at the production site.
This also doesn't fit your use case.
Merge Replication
The key difference between this and the others is that each replicated database instance can serve different client applications independently, with the changes later being merged with one another.
For example, a database server in North America being updated by clients across the Americas & Europe and another one in Australia being updated by clients across the Asia-Pacific region, with the changes then being merged together.
Again, it doesn't fit your use case.
Snapshot Replication
The whole snapshot of the database is published to be replicated to the secondary database (different from just the log files being shipped for replication).
Note, however, that for each type of replication an initial snapshot is generated once to initialize the subscribing database.
Why you should use Transactional Replication?
You can choose the objects (tables, views, etc.) to be replicated continuously, so if only a subset of the tables is used for reporting, it saves a lot of bandwidth (see the sketch after this list). This is not possible in Mirroring and Log Shipping.
You can redirect traffic from your application to the reporting server for all the reads and reports (which you can also do in others too, btw).
You can have independent batch jobs generating some of the more used reports running on the reporting server, reducing the load on the main server if it has quite frequent Inserts, Updates or Deletes.
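A sketch of publishing only the tables the reports need, assuming an existing publication; all the names here are hypothetical:

    -- Each sp_addarticle call adds one table to the publication
    EXEC sp_addarticle @publication = N'ReportingPub',
        @article = N'Orders', @source_owner = N'dbo', @source_object = N'Orders';
    EXEC sp_addarticle @publication = N'ReportingPub',
        @article = N'Customers', @source_owner = N'dbo', @source_object = N'Customers';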
Going through your list from top to bottom.
Mirroring: If you mirror your data from your mainServer to your reportServer, you will not be able to access your reportServer. Mirroring puts the mirrored database into a continuous restoring state. Mirroring is a High Availability solution; in your case the reportServer will only be available to query if you fail over, and the mirrored server is never operational until failover. This is not what you want, as you cannot use the reportServer until it is operational.
Log Shipping: Log shipping will allow you to apply transaction log backups on a schedule to the reportServer. If you back up the transaction log every 15 minutes and apply it to the reportServer, you will have a delay of 15+ minutes between your mainServer and the log-shipped server. (Mirroring is effectively real-time log shipping.) Depending on how you set up log shipping, your clients will have to disconnect while the database is busy restoring the log files, so during a long restore it might be impossible to use reporting. Log Shipping is also a High Availability feature and not really useful for reporting. See this link for a description of trying to access a database while it is restoring: http://social.msdn.microsoft.com/forums/en-US/sqldisasterrecovery/thread/c6931747-9dcb-41f6-bdf4-ae0f4569fda7
Replication: I am lumping all the replication types together here. Replication, especially transactional replication, can help you scale out your reporting needs. It would generally be much easier to implement, and you would be able to report on the data all of the time, whereas with mirroring you can't report on the data at all and with transaction log shipping you will have gaps. So in your case replication makes much more sense. Snapshot replication would be useful if your reports could be, say, a day old. You can make a snapshot every morning of the data you need from mainServer and publish it to the subscribing reportServer. However, if the database is extremely large, then a snapshot is going to be problematic to deal with on a daily basis. Merge replication is only useful when you want to update the replicated data. In your case you want a read-only copy of the data to report on, so merge replication is not going to help. Transactional replication would allow you to send replicated changes across the wire. In your case, where you need frequently updated information in your reportServer, this would be extremely useful. I would probably suggest this route for you.
Just remember that by implementing replication/mirroring/log shipping you are creating more maintenance work. Replication CAN fail. So can mirroring, and so can transaction log shipping. You will need to monitor these solutions to make sure they are running smoothly. So the question is: do you really need to scale out your reports to another server, or could you spend that time identifying why you can't report on the production server?
Hope that helps!

SQL Server 2008 Backup Transaction Logs

I understand that the transaction logs keep a record of historical transactions in order to facilitate a restore if needed. However, do I need to keep creating transaction log backups for inactive databases that are hanging around on the server? No DDL statements are run against them and they are just used for reference.
I am just a bit worried that I might run out of log space if I get this wrong.
Have you considered changing the recovery model of your databases to the SIMPLE recovery model? Doing so would negate the need to back up the transaction log, as the log space would be automatically re-used in the "unlikely" event that it is needed.
I would still advise that regular FULL database backups be taken.
Also, if these databases are indeed truly read-only, then why not consider setting them as such. Doing so would have the advantage of immediately highlighting any queries/users that are "still" issuing DML operations when you believe there are none.
Other options for identifying queries that are performing more than just READ operations include running a Profiler Trace of activity on your database server and also an aggressive option would be to revoke all data modification rights from the relevant database Users.
Transaction logs are truncated when they're backed up. So, if these databases are truly inactive, you shouldn't need to back up their transaction logs at all, since the logs would be empty.
Also, common practice for "inactive" databases is to make them READ ONLY with the SIMPLE recovery model.
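Both suggestions are one-liners; a sketch against a hypothetical database name:

    -- SIMPLE recovery: log space is reused automatically, no log backups needed
    ALTER DATABASE ReferenceDb SET RECOVERY SIMPLE;
    -- Read-only: any leftover DML will now fail loudly instead of growing the log
    ALTER DATABASE ReferenceDb SET READ_ONLY WITH ROLLBACK IMMEDIATE;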

Cross-database transactions from one SP

I need to update multiple databases with a few simple SQL statements. The databases are configured in SQL Server using 'Linked Servers', and the SQL versions are mixed (SQL 2008, SQL 2005, and SQL 2000). I intend to write a stored procedure in one of the databases, but I would like to do so using a transaction to make sure that each database gets updated consistently.
Which of the following is the most accurate:
Will a single BEGIN/COMMIT TRANSACTION work to guarantee that all statements across all databases are successful?
Will I need multiple BEGIN TRANSACTIONS for each individual set of commands on a database?
Are transactions even supported when updating remote databases? I would need to execute a remote SP with embedded transaction support.
Note that I don't care about any kind of cross-database referential integrity; I'm just trying to update multiple databases at the same time from a single stored procedure if possible.
Any other suggestions are welcome as well. Thanks!
It is possible. You can use an explicit BEGIN DISTRIBUTED TRANSACTION in your controlling procedure, or simply start a normal transaction and rely on DTC to elevate it to a distributed one the moment you go across a linked server; this happens automatically. See Transact-SQL Distributed Transactions in MSDN.
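A minimal sketch of the explicit form; the linked server, database, and table names are hypothetical:

    SET XACT_ABORT ON;  -- strongly recommended for distributed transactions
    BEGIN DISTRIBUTED TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1;
        -- The four-part name goes through the linked server; DTC coordinates the commit
        UPDATE RemoteSrv.RemoteDb.dbo.Accounts SET Balance = Balance + 100 WHERE Id = 1;
    COMMIT TRANSACTION;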
However, I must warn you that it's a slippery slope. The number of failures and the downtime increase dramatically as soon as you bring DQ (distributed queries) into the picture. If you have 99.5% uptime servers (i.e. 43 hours of downtime a year) and your query touches 5 servers, your availability becomes 97.5% (0.995^5, i.e. 216 hours of downtime a year). With 10 servers it becomes 95% uptime (428 hours of downtime a year). Things like managing an OS patch deployment, an engine service pack upgrade, or application maintenance (think index rebuilds and such) become a nightmare to orchestrate and coordinate.
The way to go is to decouple the servers, use something like Service Broker instead of DQ.
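For the decoupled route, a minimal Service Broker sketch; every name here (message type, contract, queue, service) is hypothetical:

    -- One-time setup in the participating database
    CREATE MESSAGE TYPE [//demo/UpdateMsg] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//demo/UpdateContract] ([//demo/UpdateMsg] SENT BY INITIATOR);
    CREATE QUEUE UpdateQueue;
    CREATE SERVICE [//demo/UpdateService] ON QUEUE UpdateQueue ([//demo/UpdateContract]);

    -- Enqueue the change asynchronously instead of updating the remote server in-line
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//demo/UpdateService]
        TO SERVICE '//demo/UpdateService'
        ON CONTRACT [//demo/UpdateContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//demo/UpdateMsg] (N'<Update account="1" amount="100" />');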
You should be able to accomplish #1 using a distributed transaction. You will need DTC active and you'll need to use BEGIN DISTRIBUTED TRANSACTION along with ROLLBACK TRANSACTION and COMMIT TRANSACTION within your stored procedure.
Dealing with the DTC can bring up a lot of gotchas, so good luck :)

Better concurrency in Oracle than SQL Server?

Is it true that better concurrency can be achieved in Oracle databases than in MS SQL Server databases? In particular in an OLTP scenario, such as an ERP system?
I've overheard an SAP consultant making this claim, referring to Oracle techniques like row-level locking, multi-version read consistency, and the redo log.
Out of the box, Oracle will have a higher transaction throughput, but this is because it defaults to MVCC. SQL Server defaults to blocking selects on uncommitted updates, but it can be changed to MVCC as well, so that difference should basically go away. See Read Committed Isolation Level.
See Enabling Row Versioning-Based Isolation Levels.
When the ALLOW_SNAPSHOT_ISOLATION database option is set ON, the instance of the Microsoft SQL Server Database Engine does not generate row versions for modified data until all active transactions that have modified data in the database complete. If there are active modification transactions, SQL Server sets the state of the option to PENDING_ON. After all of the modification transactions complete, the state of the option is changed to ON. Users cannot start a snapshot transaction in that database until the option is fully ON. The database passes through a PENDING_OFF state when the database administrator sets the ALLOW_SNAPSHOT_ISOLATION option to OFF.
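Enabling the row-versioning behavior described above comes down to two database options; a sketch with a hypothetical database name (note that switching READ_COMMITTED_SNAPSHOT on needs exclusive access to the database):

    -- Opt-in snapshot isolation: sessions may use SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    ALTER DATABASE MyErpDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    -- Make plain READ COMMITTED use row versions, the closest match to Oracle's default
    ALTER DATABASE MyErpDb SET READ_COMMITTED_SNAPSHOT ON;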
He/She was probably referring to the facts that:
In Oracle readers do not block writers and writers do not block readers
Oracle does not maintain a list of row locks so there is no significant overhead in locking and locks never escalate to the table level.
Starting with SQL 2005 this is no longer true - you can enable snapshot isolation and your writers will not block your readers, just like in Oracle.
SQL Server has row locking, several different transaction isolation levels, and a transaction log that can be replayed.
Maybe he's referring to Access, which does not have these.
Or maybe he believes Oracle uses better defaults. He might have a better argument there, but with either DBMS, if you're talking ERP, you'd better have a DBA who knows enough about the system to keep it tuned properly.
