Why would a replicated database have less strain?

If it's possible to have a high-level conversation about database replication, that's what I'm looking for.
Imagine I have a replicated read-only database, created with the intent of using it for reporting. The benefit is that users aren't hammering the primary db for reporting data; those queries are offloaded to the secondary database. But if I set up realtime delivery, that secondary db is now getting read requests as well as update statements from the primary. Won't the replicated db be prone to failure to the same degree as if you used one db for both transactional and reporting functions?
To put it another way, what is the performance benefit of any of the realtime replication methods (I'm only familiar with log shipping) over common CRUD operations that a read-write, transactional database would run into?

Without replication you have:
On the primary server you have:
SELECT statements,
DML statements,
reporting statements.
If you create a replication, then:
On the primary you have:
SELECT statements,
DML statements.
On the replica you have:
DML statements,
reporting statements.
The replica does not carry the load generated by SELECT statements on the source. Log-based replication systems do not replicate SELECT statements to the replica: they do not change the data, so they leave no trace in the database log.
More advanced replication systems also skip DML statements that have been rolled back (ended with a ROLLBACK statement); only committed statements (ended with a COMMIT statement) are replicated.
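To tie this back to log shipping specifically: a log-shipping secondary can be restored WITH STANDBY, which leaves it readable between restores, so reporting runs against the copy while the primary only pays the cost of taking log backups. A minimal sketch, assuming placeholder database, share, and table names (each restore needs exclusive access, so reporting sessions are briefly disconnected):

-- On the secondary: restore the shipped log backup but keep the database readable.
-- STANDBY rolls uncommitted work back into an undo file so reports see consistent data,
-- then rolls it forward again when the next log backup is applied.
RESTORE LOG ReportingDb
    FROM DISK = N'\\backupshare\ReportingDb_20240101_0100.trn'
    WITH STANDBY = N'D:\SQLData\ReportingDb_undo.dat';

-- Reporting queries run here; their read load never touches the primary.
SELECT COUNT(*) FROM ReportingDb.dbo.FactSales;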

Disable transactions on SQL Server

I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in a single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any insert/update/delete command on the application database, a transaction is started, and therefore the log database's table is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application will be blocked.
So...is there any way possible to disable transactions on a particular database or on a particular table?
You cannot turn off the log; everything gets logged. You can set the recovery model to "Simple", which limits how much log data is kept once the records are committed.
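For reference, the recovery model is a per-database setting; a minimal sketch with a placeholder database name (note that simple recovery gives up point-in-time restore, it does not disable logging):

-- Switch to the simple recovery model: the log is still written,
-- but it is truncated at checkpoints instead of being kept for log backups.
ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;

-- Verify the current recovery model.
SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'MyAppDb';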
" the table of the log database is locked": why that?
Normally you log changes by inserting records. Inserting records should not lock the complete table; normally there should not be any contention on inserts.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should also look at the indexes defined on the log table; you may be able to drop some of them.
It sounds from the question as though you start a transaction at the beginning of your triggers, and that you are logging to the other database prior to the commit.
Normally you do not need to have explicit transactions in SQL server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table.
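A rough sketch of that idea at the statement level, with made-up table and column names (only tbSysMasterLog comes from the question): capture the changed rows with OUTPUT into a table variable, commit, and only then write to the log database. Table variables are not affected by the transaction, so the captured rows survive the commit.

DECLARE @changed TABLE (Id int, Amount decimal(18, 2));

BEGIN TRANSACTION;
    UPDATE dbo.Orders
    SET Amount = Amount * 1.1
    OUTPUT inserted.Id, inserted.Amount INTO @changed (Id, Amount)
    WHERE CustomerId = 42;
COMMIT TRANSACTION;

-- The application transaction is already committed, so this insert
-- takes its locks on the log table outside of it.
INSERT INTO LogDb.dbo.tbSysMasterLog (TableName, RecordId, NewValue, LoggedAt)
SELECT 'Orders', Id, CAST(Amount AS nvarchar(100)), SYSDATETIME()
FROM @changed;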
Normally inserts are fast and can happen in parallel without blocking. Certain things like identity columns require ordering, but they are a very lightweight structure; they can be avoided by generating GUIDs so that inserts are non-blocking, although for something like your log table an identity primary key gives you a clear sequence, which is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log entries may not be in the same order in which the transactions occurred, because transactions take different amounts of time to commit.
We normally log into individual tables with a name similar to the master table, e.g. FooHistory or AuditFoo.
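A minimal sketch of that pattern, assuming a hypothetical dbo.Foo table and made-up columns: the trigger only inserts, and the identity primary key on the history table gives a clear sequence without adding much overhead.

CREATE TABLE dbo.FooHistory
(
    FooHistoryId int IDENTITY(1, 1) PRIMARY KEY,     -- cheap, and gives a clear ordering
    FooId        int            NOT NULL,
    Name         nvarchar(100)  NULL,
    ChangeType   char(1)        NOT NULL,            -- 'I', 'U' or 'D'
    ChangedAt    datetime2      NOT NULL DEFAULT SYSDATETIME()
);
GO

CREATE TRIGGER trFoo_Audit ON dbo.Foo
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Insert-only logging: no explicit transaction and no updates,
    -- so the only locks taken are on the new history rows.
    INSERT INTO dbo.FooHistory (FooId, Name, ChangeType)
    SELECT i.FooId, i.Name,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;

    INSERT INTO dbo.FooHistory (FooId, Name, ChangeType)
    SELECT d.FooId, d.Name, 'D'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;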
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning and will give you a copy of every statement run on the database (including triggers), and you can log it to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive if you are tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script the starting, stopping, and loading of traces.
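A sketch of the load step, assuming a hypothetical trace file path: fn_trace_gettable reads a finished .trc file into a result set that can be written to a table, e.g. on a separate reporting server.

-- Load a completed trace file into a table; DEFAULT tells the function
-- to follow any rollover files (mytrace_1.trc, mytrace_2.trc, ...).
SELECT *
INTO dbo.TraceResults
FROM sys.fn_trace_gettable(N'D:\Traces\mytrace.trc', DEFAULT);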
The load on the server that is getting the trace log is minimal and I have never had a locking problem on the server receiving the trace, so I am pretty sure that you are doing something to cause the locks.

Locking single row / row level when inserting data via TransactionScope, Entity Framework and SQL Server

I did some research and haven't found any explanation to this. So, If there's one out there, I'm sorry.
I am using TransactionScope to handle transactions with my SQL Server 2012 database. I'm also using Entity Framework.
The fact is that when I start a new transaction to insert a new record into my table, it locks the entire table, not just that row. So if I run Db.SaveChanges() without committing it, and then go to Management Studio and try to get the already committed data from the same table, it hangs and returns no data.
What I would like in this scenario is to lock just the new row, not the entire table.
Is that possible?
Thank you in advance.
One thing to be very careful of when using TransactionScope is that it uses the Serializable isolation level by default, which can cause many locking issues in SQL Server. The default isolation level in SQL Server is Read Committed, so you should consider using that in any transactions that use TransactionScope. You can factor out a method that creates your default TransactionScope and always sets it to ReadCommitted (see Why is System.Transactions TransactionScope default Isolationlevel Serializable). Also ensure that you wrap the TransactionScope in a using block, so that if errors occur during transaction processing the transaction is rolled back (http://msdn.microsoft.com/en-us/library/yh598w02.aspx).
By default, SQL Server uses a pessimistic concurrency model, which means that as DML commands are being processed (inserts, updates, deletes), it acquires exclusive locks on the data that is changing, which prevents other updates or SELECTs from completing until those locks are released. The only way to release those locks is to commit or roll back the transaction. So if you have a transaction that is inserting data into a table, and you run a SELECT * FROM myTable before the insert has completed, then SQL Server will force your select to wait until the open transaction has been committed or rolled back before returning the results. Normally transactions should be small and fast, and you would not notice as much of an issue. Here is more info on isolation levels and locking (http://technet.microsoft.com/en-us/library/ms378149.aspx).
In your case, it sounds like you are debugging, and have hit a breakpoint in the code with the transaction open. For debugging purposes, you can add a nolock hint to your query, which would show the results of the data that has been committed, along with the insert which has not yet been committed. Because using nolock will return UN-committed data, be very careful about using this in any production environment. Here is an example of a query with a nolock hint.
SELECT * FROM myTable WITH(NOLOCK)
If you continue to run into locking issues outside of debugging, then you can also check out snapshot isolation (Great article by Kendra Little: http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/). There are some special considerations when using snapshot isolation, such as tempdb tuning.
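For reference, a sketch of turning snapshot isolation on, with a placeholder database name (row versions are kept in tempdb, hence the tempdb considerations mentioned above):

-- Allow sessions to request the SNAPSHOT isolation level in this database.
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session then opts in explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT * FROM myTable;   -- reads the last committed version, without blocking on open writes
COMMIT TRANSACTION;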

Drawbacks of marked transactions in TFS 2010 backups

I wanted to see if you guys are utilizing marked transactions in your TFS backup scenario. Are there any drawbacks or gotchas to consider for this?
If I use the TFS Power Tools to create a backup plan, the following is created for me:
Tables and Stored Procedures needed for marked transactions
Scheduled Jobs
Maintenance Plans for Full, Differential, and Transaction Logs
The Backup/Restore Power Tool relies on SQL marked transactions to keep consistency across the TFS (and dependency products) databases. Source: http://intovsts.net/tag/tfs-power-tools/
Before inserting named marks into the transaction log, consider the following (source: MSDN):
Because transaction marks consume log space, use them only for transactions that play a significant role in the database recovery strategy.
After a marked transaction commits, a row is inserted in the logmarkhistory table in msdb.
If a marked transaction spans multiple databases on the same database server or on different servers, the marks must be recorded in the logs of all the affected databases.
That kind of settles the matter of marked transactions in my backup plan. Especially since the TFS databases use full recovery mode, and the tool relies on it, there isn't much choice. :)
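For anyone who has not used them, here is a rough sketch of what the mechanism looks like, with placeholder database, table, and backup path names (not the actual statements the Power Tools generate): the mark is recorded in the log of every database the transaction modifies, and each database can later be restored to that same mark.

-- The transaction name ('TfsBackupMark') is what the restore refers to later.
BEGIN TRANSACTION TfsBackupMark WITH MARK N'TFS consistent backup point';
    -- The transaction has to touch each database that should carry the mark.
    UPDATE Db1.dbo.MarkAnchor SET LastMarked = SYSDATETIME() WHERE Id = 1;
    UPDATE Db2.dbo.MarkAnchor SET LastMarked = SYSDATETIME() WHERE Id = 1;
COMMIT TRANSACTION;

-- Later, restore every database to the same mark so they stay consistent with each other.
RESTORE DATABASE Db1 FROM DISK = N'D:\Backups\Db1.bak' WITH NORECOVERY;
RESTORE LOG Db1 FROM DISK = N'D:\Backups\Db1.trn'
    WITH STOPATMARK = N'TfsBackupMark', RECOVERY;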

Cross-database transactions from one SP

I need to update multiple databases with a few simple SQL statements. The databases are configured in SQL using 'Linked Servers', and the SQL versions are mixed (SQL 2008, SQL 2005, and SQL 2000). I intend to write a stored procedure in one of the databases, but I would like to do so using a transaction to make sure that each database gets updated consistently.
Which of the following is the most accurate:
Will a single BEGIN/COMMIT TRANSACTION work to guarantee that all statements across all databases are successful?
Will I need multiple BEGIN TRANSACTIONS for each individual set of commands on a database?
Are transactions even supported when updating remote databases? I would need to execute a remote SP with embedded transaction support.
Note that I don't care about any kind of cross-database referential integrity; I'm just trying to update multiple databases at the same time from a single stored procedure if possible.
Any other suggestions are welcome as well. Thanks!
It is possible. You can use an explicit BEGIN DISTRIBUTED TRANSACTION in your controlling procedure, or simply start a normal transaction and rely on MSDTC to promote it to a distributed one the moment you go across a linked server; this happens automatically. See Transact-SQL Distributed Transactions in MSDN.
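A minimal sketch of the explicit form, assuming a linked server named REMOTESRV and placeholder table names; with SET XACT_ABORT ON, a failure on either side rolls back the whole distributed transaction:

SET XACT_ABORT ON;   -- required for most linked-server DML; aborts everything on error

BEGIN DISTRIBUTED TRANSACTION;

    UPDATE dbo.Customers
    SET Status = 'Migrated'
    WHERE CustomerId = 42;

    -- Four-part name through the linked server; MSDTC enlists the remote server automatically.
    UPDATE REMOTESRV.OtherDb.dbo.Customers
    SET Status = 'Migrated'
    WHERE CustomerId = 42;

COMMIT TRANSACTION;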
However, I must warn you that it's a slippery slope. The number of failures and the amount of downtime increase dramatically as soon as you bring DQ (distributed queries) into the picture. If you have servers with 99.5% uptime (i.e. 43 hours of downtime a year) and your query touches 5 servers, your availability becomes 97.5% (216 hours of downtime a year). With 10 servers it becomes 95% uptime (428 hours of downtime a year). Things like managing an OS patch deployment, an engine SP upgrade, or application maintenance (think index rebuilds and such) become a nightmare to orchestrate and coordinate.
The way to go is to decouple the servers, use something like Service Broker instead of DQ.
You should be able to accomplish #1 using a distributed transaction. You will need DTC active and you'll need to use BEGIN DISTRIBUTED TRANSACTION along with ROLLBACK TRANSACTION and COMMIT TRANSACTION within your stored procedure.
Dealing with the DTC can bring up a lot of gotchas, so good luck :)

Better concurrency in Oracle than SQL Server?

Is it true that better concurrency can be achieved in Oracle databases than in MS SQL Server databases? In particular in an OLTP scenario, such as an ERP system?
I've overheard an SAP consultant making this claim, referring to Oracle locking techniques like row locking and multi-version read consistency and the redo log.
Out of the box, Oracle will have higher transaction throughput, but this is because it defaults to MVCC. SQL Server defaults to blocking SELECTs on uncommitted updates, but it can be changed to use MVCC as well, so that difference should basically go away. See Read Committed Isolation Level.
See Enabling Row Versioning-Based Isolation Levels.
When the ALLOW_SNAPSHOT_ISOLATION database option is set ON, the instance of the Microsoft SQL Server Database Engine does not generate row versions for modified data until all active transactions that have modified data in the database complete. If there are active modification transactions, SQL Server sets the state of the option to PENDING_ON. After all of the modification transactions complete, the state of the option is changed to ON. Users cannot start a snapshot transaction in that database until the option is fully ON. The database passes through a PENDING_OFF state when the database administrator sets the ALLOW_SNAPSHOT_ISOLATION option to OFF.
He/She was probably referring to the facts that:
In Oracle readers do not block writers and writers do not block readers
Oracle does not maintain a list of row locks so there is no significant overhead in locking and locks never escalate to the table level.
Starting with SQL 2005 this is no longer true - you can enable snapshot isolation and your writers will not block your readers, just like in Oracle.
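A sketch of the database-level switch, with a placeholder database name. With READ_COMMITTED_SNAPSHOT on, plain read committed SELECTs read row versions instead of blocking on writers, which is the closest SQL Server gets to Oracle's default behaviour.

-- Needs (near-)exclusive access to the database to take effect.
ALTER DATABASE MyErpDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Ordinary read committed queries now see the last committed version
-- instead of waiting for writers to release their locks.
SELECT OrderId, Status FROM dbo.Orders WHERE CustomerId = 42;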
SQL Server has row locking, several different transaction isolation levels, and a transaction log that can be replayed.
Maybe he's referring to Access, which does not have these.
Or maybe he believes Oracle uses better defaults. He might have a better argument there, but with either DBMS if you're talking ERP you better have a DBA who knows enough about the system to keep it tuned properly.
