SQL Server Transactions - sql-server

When to use ADO.NET and SQL Server transactions?
Are there any reasons why one would choose ADO.NET transactions over SQL Server transactions when there is only one stored procedure or query fired against the database within that transaction?

If you have multiple resources participating in the transaction, use ADO.NET (and let DTC/Enterprise Services manage it for you); otherwise, use a SQL Server transaction inside the stored procedure.

I wouldn't use ADO.NET transactions for one call to one stored proc that does one query. The purpose of an ADO.NET transaction is to wrap the execution of multiple calls into a single transaction. Your case does not meet that criterion.
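For contrast, here is a minimal sketch of the multi-call case an ADO.NET transaction is meant for (the connection string, table, and column names are invented for illustration): two UPDATEs that must succeed or fail together.

```csharp
using System;
using System.Data.SqlClient;

class AdoNetTransactionSketch
{
    // Move money between two rows: both updates commit together or not at all.
    static void TransferFunds(string connectionString, int fromId, int toId, decimal amount)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                try
                {
                    using (var debit = new SqlCommand(
                        "UPDATE dbo.Accounts SET Balance = Balance - @amt WHERE Id = @id",
                        conn, tx))
                    {
                        debit.Parameters.AddWithValue("@amt", amount);
                        debit.Parameters.AddWithValue("@id", fromId);
                        debit.ExecuteNonQuery();
                    }

                    using (var credit = new SqlCommand(
                        "UPDATE dbo.Accounts SET Balance = Balance + @amt WHERE Id = @id",
                        conn, tx))
                    {
                        credit.Parameters.AddWithValue("@amt", amount);
                        credit.Parameters.AddWithValue("@id", toId);
                        credit.ExecuteNonQuery();
                    }

                    tx.Commit();   // both updates become visible together
                }
                catch
                {
                    tx.Rollback(); // neither update is applied
                    throw;
                }
            }
        }
    }
}
```

If this were a single call to a single stored procedure, the BEGIN TRAN/COMMIT could just as well live inside the procedure itself.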

Related

Are SQL transactions reliable over the internet?

I have a cloud-based solution (an Azure Function that reads JSON from Service Bus and converts it to a C# object) that inserts data into an on-premises SQL Server with multiple INSERT statements. Are SQL transactions reliable and secure over the internet?
What if the connection fails during insertion into the database?
If the connection fails during insertion, the transaction never completes and no rows are inserted.
When you run INSERT inside a transaction, no rows become permanent until you issue COMMIT TRANSACTION. If you use Entity Framework and the transaction is not committed by the end of the using block, a rollback is issued automatically.
This is exactly the kind of scenario transactions are designed for.
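A minimal sketch of that behavior from the function side (the payload, table, and column names are made up): if the connection drops before Commit is reached, SQL Server rolls back every insert in the batch.

```csharp
using System.Data.SqlClient;

// Hypothetical payload parsed from the Service Bus JSON message.
public record Reading(int DeviceId, decimal Value);

public static class BatchInserter
{
    public static void InsertReadings(string connectionString, Reading[] readings)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();
        using var tx = conn.BeginTransaction();

        foreach (var r in readings)
        {
            using var cmd = new SqlCommand(
                "INSERT INTO dbo.Readings (DeviceId, Value) VALUES (@d, @v)", conn, tx);
            cmd.Parameters.AddWithValue("@d", r.DeviceId);
            cmd.Parameters.AddWithValue("@v", r.Value);
            cmd.ExecuteNonQuery();
        }

        // If the connection is lost anywhere above, this line is never reached,
        // the transaction is never committed, and SQL Server discards every row.
        tx.Commit();
    }
}
```

As for security: the transaction itself adds none, so encrypt the connection (Encrypt=True in the connection string, or a VPN/ExpressRoute link to the on-premises server).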

Why would a replicated database have less strain?

If it's possible to have a high-level conversation about database replication, that's what I'm looking for.
Imagine I have a replicated read-only database, created with the intent of using it for reporting. The benefit is that users aren't hammering the primary database for reporting data; those queries are offloaded to the secondary. But if I set up real-time delivery, that secondary database now receives read requests as well as update statements from the primary. Won't the replicated database be just as prone to overload as a single database serving both transactional and reporting functions?
To put it another way, what is the performance benefit of any of the real-time replication methods (I'm only familiar with log shipping) over the common CRUD load that a single read-write, transactional database would face?
Without replication, on the primary server you have:
SELECT statements,
DML statements,
reporting statements.
If you create a replica, then on the primary you have:
SELECT statements,
DML statements.
On the replica you have:
DML statements (replayed from the primary),
reporting statements.
The replica does not carry the load generated by the source's SELECT statements: log-based replication systems do not replicate SELECTs to the replica, because they do not change the data and leave no trace in the database log.
More advanced replication systems also do not replicate DML statements that have been rolled back (ended with a ROLLBACK statement); only committed statements (ended with COMMIT) are replicated.

Transaction from VB.NET to insert data into different SQL Servers

I have two SQL Servers that are not in the same location. I want to copy data from the first to the second, and then update the first with the value sent=1.
What I want to do, exactly, is a transaction in VB.NET that combines all the queries applied to both databases.
Is that possible? If I wrap the first database's query in one transaction and, only if it succeeds, execute a second transaction against the second database, a lost connection in between could cause a problem. Can anyone help me join all the queries in the same transaction?
In VB you would use the TransactionScope class to wrap your data access operations. It creates a local transaction to begin with and then promotes it to a distributed transaction when required. The Microsoft Distributed Transaction Coordinator (MSDTC) must be running on the servers for that to be possible, so that's something for you to read up on.
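A minimal sketch of that pattern (shown in C#; the same classes are available from VB.NET, and the connection strings, table names, and statements here are invented):

```csharp
using System.Data.SqlClient;
using System.Transactions;

public static class TwoServerTransfer
{
    public static void CopyAndMarkSent(string sourceConnStr, string targetConnStr)
    {
        using (var scope = new TransactionScope())
        {
            using (var target = new SqlConnection(targetConnStr))
            {
                target.Open(); // first connection: a lightweight local transaction
                new SqlCommand(
                    "INSERT INTO dbo.Incoming (Id, Payload) SELECT Id, Payload FROM dbo.Staging",
                    target).ExecuteNonQuery();
            }

            using (var source = new SqlConnection(sourceConnStr))
            {
                source.Open(); // second server: the transaction is promoted to MSDTC
                new SqlCommand(
                    "UPDATE dbo.Outbox SET Sent = 1 WHERE Sent = 0",
                    source).ExecuteNonQuery();
            }

            // Without Complete(), disposing the scope rolls back the work on BOTH servers.
            scope.Complete();
        }
    }
}
```

A lost connection between the two statements simply means Complete() is never called, so neither server keeps its changes.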

Using SAVE TRANSACTION with a linked server

Inside a transaction that has a savepoint, I have to do a join with a table that is on a linked server. When I try, I get the error message:
“Cannot use SAVE TRANSACTION within a distributed transaction”
The remote table's data rarely changes; it is almost fixed. Is it possible to tell SQL Server to exclude this table from the transaction? I've tried a (NOLOCK) hint, but that hint can't be used on a table in a linked server.
Does anyone know of a workaround? I'm using the old SQL Server 2000.
One thing you could do is make a local copy of the remote table before you start the transaction. I know this may sound like a lot of overhead, but remote joins are frequently a performance problem anyway, and the standard fix for that is also to make a local copy.
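A sketch of that approach (the linked server, table, and column names are invented): the remote read happens before the transaction begins, so the linked server is never enlisted and SAVE TRANSACTION remains legal.

```csharp
using System.Data.SqlClient;

public static class LocalCopyJoin
{
    public static void RunWithLocalCopy(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // 1. Copy the rarely changing remote table into a session-scoped temp
            //    table BEFORE any transaction starts (autocommit, no DTC involved).
            new SqlCommand(
                "SELECT * INTO #RemoteRef FROM LINKEDSRV.RemoteDb.dbo.RefTable",
                conn).ExecuteNonQuery();

            // 2. The transaction and its savepoint now touch only local objects.
            new SqlCommand(@"
                BEGIN TRANSACTION;
                SAVE TRANSACTION BeforeJoin;

                UPDATE t
                SET    t.Description = r.Description
                FROM   dbo.LocalTable t
                JOIN   #RemoteRef r ON r.Id = t.RefId;

                COMMIT TRANSACTION;", conn).ExecuteNonQuery();
        }
    }
}
```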
According to this link, the ability to use savepoints in a distributed transaction was dropped in SQL Server 7.0.
To allow application migration from Microsoft SQL Server 6.5 when
savepoints inside distributed transactions are in use, Microsoft SQL
Server 2000 Service Pack 1 introduces a trace flag that allows a
savepoint within a distributed transaction. The trace flag is 8599 and
can be turned on during the SQL Server startup or within an individual
session (that is, prior to enabling a distributed transaction with a
BEGIN DISTRIBUTED TRANSACTION statement) by using the DBCC TRACEON
command. When trace flag 8599 is set to ON, SQL Server allows you to
use a savepoint within a distributed transaction.
So unfortunately, you may either have to drop the bounding ACID transaction, or change the SPROC on the remote server so that it doesn't use SAVEPOINTs.
On a side note: although I see you have tagged this SQL Server 2000, it's worth making the point that SQL Server 2008 has the remote proc trans option for this.
In this case, if the distributed table is not too large, I would copy it to a temp table; if possible, include filtering to keep the number of rows to a minimum. Then you can proceed normally. Another option, since the data rarely changes, is to copy the data to a permanent table and check whether anything has changed, to avoid sending too much data over the network every time you run the transaction. You could pull over only the recent changes, as in the sketch below.
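A sketch of that incremental variant (names are invented, and it assumes the remote table carries a LastModified column to filter on): keep a permanent local copy and pull over only rows added since the last sync.

```csharp
using System.Data.SqlClient;

public static class IncrementalRefresh
{
    public static void PullRecentChanges(string connectionString)
    {
        // Simplification: assumes remote rows are only ever appended,
        // never updated or deleted after the fact.
        const string sync = @"
            DECLARE @lastSync datetime =
                ISNULL((SELECT MAX(LastModified) FROM dbo.RefTableCopy), '19000101');

            INSERT INTO dbo.RefTableCopy (Id, Description, LastModified)
            SELECT  Id, Description, LastModified
            FROM    LINKEDSRV.RemoteDb.dbo.RefTable
            WHERE   LastModified > @lastSync;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            new SqlCommand(sync, conn).ExecuteNonQuery();
        }
    }
}
```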
If you wish to handle the transaction at the application level and you have Visual Studio 2008 / .NET Framework 3.5 or later, you can wrap your logic with the TransactionScope class. If you don't have a front end and are working only in SQL Server, kindly ignore my answer...

Cross-database transactions from one SP

I need to update multiple databases with a few simple SQL statements. The databases are configured in SQL Server using linked servers, and the SQL Server versions are mixed (2008, 2005, and 2000). I intend to write a stored procedure in one of the databases, but I would like to use a transaction to make sure that each database gets updated consistently.
Which of the following is the most accurate:
Will a single BEGIN/COMMIT TRANSACTION work to guarantee that all statements across all databases are successful?
Will I need multiple BEGIN TRANSACTIONS for each individual set of commands on a database?
Are transactions even supported when updating remote databases? I would need to execute a remote SP with embedded transaction support.
Note that I don't care about any kind of cross-database referential integrity; I'm just trying to update multiple databases at the same time from a single stored procedure if possible.
Any other suggestions are welcome as well. Thanks!
It is possible. You can use an explicit BEGIN DISTRIBUTED TRANSACTION in your controlling procedure, or simply start a normal transaction and rely on DTC to elevate it to a distributed one the moment you go across a linked server; this happens automatically. See Transact-SQL Distributed Transactions in MSDN.
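A sketch of what the controlling procedure's body might look like (the linked-server and table names are invented); shown here as a batch submitted over ADO.NET, but the T-SQL is the same inside a stored procedure:

```csharp
using System.Data.SqlClient;

public static class CrossServerUpdate
{
    public static void UpdateBothServers(string connectionString)
    {
        // Crossing the linked server inside the transaction makes DTC take over;
        // BEGIN DISTRIBUTED TRANSACTION just makes that explicit.
        // (TRY/CATCH needs SQL Server 2005+ on the controlling server.)
        const string batch = @"
            BEGIN DISTRIBUTED TRANSACTION;
            BEGIN TRY
                UPDATE dbo.Prices SET Amount = Amount * 1.02;              -- local
                UPDATE LNKSRV.Sales.dbo.Prices SET Amount = Amount * 1.02; -- remote
                COMMIT TRANSACTION;  -- DTC commits on both servers, or neither
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- both servers roll back
                RAISERROR('Distributed update failed.', 16, 1);
            END CATCH;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            new SqlCommand(batch, conn).ExecuteNonQuery();
        }
    }
}
```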
However, I must warn you that it's a slippery slope. The number of failures and the amount of downtime increase dramatically as soon as you bring DQ (distributed queries) into the picture. If each server has 99.5% uptime (i.e. 43 hours of downtime a year) and your query touches 5 servers, your availability becomes 0.995^5 ≈ 97.5% (216 hours of downtime a year). With 10 servers it becomes 0.995^10 ≈ 95% uptime (428 hours of downtime a year). Things like managing an OS patch deployment, an engine service-pack upgrade, or application maintenance (think index rebuilds and such) become a nightmare to orchestrate and coordinate.
The way to go is to decouple the servers and use something like Service Broker instead of distributed queries.
You should be able to accomplish #1 using a distributed transaction. You will need DTC active and you'll need to use BEGIN DISTRIBUTED TRANSACTION along with ROLLBACK TRANSACTION and COMMIT TRANSACTION within your stored procedure.
Dealing with the DTC can bring up a lot of gotchas, so good luck :)
