Hi Stack Overflow community,
Let me ask for your help, as I've run into a critical issue.
We have two linked servers, both Microsoft SQL Server: a CRM server and a DW server. Certain changes in the CRM system trigger a procedure that immediately pushes updates to the DW server; that is, the CRM system calls the DW server to update the record. In my case, the updates from the CRM system to the CRM and DW SQL Servers are issued simultaneously, and that is where the problem begins.
The DW server tries to read the changes but sees only the records as they were before the transaction began. This happens because the CRM server uses:
Read Committed Snapshot On
Unfortunately, we are not able to change the isolation level on the CRM SQL Server. The simple explanation: the CRM comes from a third-party provider, and they restrict what we are allowed to change.
Is there any way to wait for the transaction to commit and then read the latest data after the commit?
If any information is missing, please let me know and I will provide more details.
I don't understand the control flow here, but in the first paragraph you said updates in the CRM trigger a proc to update the DW server, so I don't see how the DW server could be updating before the CRM server. You stated they are called simultaneously, but that would negate the comment about the trigger. You wouldn't want the DW to get dirty reads, so READ COMMITTED SNAPSHOT is a good choice here, but you can also specify whatever isolation level you want at the transaction level and override the server default.
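For instance, here is a minimal sketch (the dbo.Customer table is hypothetical) of overriding the default for a single transaction on the DW side:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;
    -- reads inside this transaction use the level set above,
    -- regardless of the server or database default
    SELECT CustomerId, Name
    FROM dbo.Customer;
COMMIT TRANSACTION;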
Since you asked, "Is there a way to wait for the transaction to commit and then read the latest data after the commit?" -- sure, this can be handled in a few ways...
one would be an AFTER INSERT trigger
another would be to add the UPDATE to the DW in a code block after the INSERT statement, in the same procedure (see the sketch below). Here, you could use TRY / CATCH and, of course, SET XACT_ABORT ON so that if anything fails, the entire transaction is rolled back. Remember, nested transactions aren't real.
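A minimal sketch of that second option, assuming hypothetical names (a CRM-side proc dbo.UpsertCustomer, a linked server DWSRV, and Customer tables on both sides):

CREATE PROCEDURE dbo.UpsertCustomer
    @CustomerId INT,
    @Name NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- any error dooms and rolls back the whole transaction

    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.Customer (CustomerId, Name)
        VALUES (@CustomerId, @Name);

        -- touching the linked server promotes this to a distributed (MSDTC) transaction
        UPDATE DWSRV.DwDb.dbo.Customer
        SET Name = @Name
        WHERE CustomerId = @CustomerId;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;  -- surface the error to the caller (SQL Server 2012+; use RAISERROR on older versions)
    END CATCH;
END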
Related
We have a user-facing web app, powered by SQL Server, that allows users to update a table in our SQL Server; that update also needs to update a document record in our DynamoDB table.
How can I reliably ensure that both commits have taken place? We can tolerate up to a few seconds of latency.
Short answer: you can't.
DynamoDB doesn't support two-phase (a.k.a. distributed) commitment control the way most (all?) relational databases do.
Long answer: given that once DDB returns a successful (2xx) response the record is durable, you might consider:
start transaction
write to SQL table
write to DDB
if DDB returns a 2xx: commit the SQL transaction
else: roll back the SQL transaction
Another thought would be to take advantage of DDB Streams: have your app write just to DDB, and have another application (a Lambda) pick up the change and write to SQL Server.
The first option is "easier", but less robust. There is no guarantee that something couldn't go wrong (the app crashing, say) between the request to DDB and your app seeing the response, leaving the SQL transaction rolled back while the DDB write persists.
The second option is more work; basically you're building (buying?) a data replication engine from DDB to SQL Server. But since the DDB stream data lives for 24 hours, you've got that long to fix any problems in SQL Server and pick up where you left off.
Why not just leverage the SQL transaction (since DynamoDB transactions are harder to work with)?
<begin trx>
save item in SQL db
save item in Dynamo DB
<commit trx>
If the save in SQL fails, it is rolled back.
If the SQL save succeeds but the Dynamo save fails, you get an exception and roll back.
If both succeed, you commit.
Am I missing something?
I have a system in place where a SQL Server 2008 R2 database is mirrored to a second server. The database on the first server has change tracking turned on, and on the second server we create a database snapshot of the mirror database in order to pull data for an ETL system using the change tracking functions. Both databases have an isolation level of Read Committed.
Occasionally, when we pull the change tracking information from the database snapshot using the CHANGETABLE function, there is a row marked as an insert or update ('I' or 'U'), but the row is not in the base table; it has been deleted. When I go to check the first server, the CHANGETABLE function shows only a 'D' row for the same primary key. It appears that the snapshot was taken right at the time the delete happened on the base table, but before the delete was recorded by change tracking.
My questions are:
Is the mechanism for capturing change tracking in the change tracking internal tables asynchronous?
EDIT:
The mechanism for change tracking is synchronous and part of the transaction making the change to the table. I found that information here:
http://technet.microsoft.com/en-us/magazine/2008.11.sql.aspx
So that leaves me with these questions.
Why am I seeing the behavior I outlined above? If change tracking is synchronous, why would I get inserts/updates from change tracking that don't have a corresponding row in the table?
Would changing the isolation level to snapshot isolation help alleviate the above situation?
EDIT:
After reading more about the snapshot isolation recommendation: it is recommended so that you can wrap your call to CHANGE_TRACKING_CURRENT_VERSION() and your call to CHANGETABLE(CHANGES ...) in a single transaction, so that everything stays consistent. The database snapshot should be doing that for me, since it is as of a point in time. I pass the value from CHANGE_TRACKING_CURRENT_VERSION() into the CHANGETABLE(CHANGES ...) function.
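For reference, a minimal sketch of the pattern being recommended, assuming ALLOW_SNAPSHOT_ISOLATION is ON for the database and a hypothetical tracked table dbo.MyTable with primary key Id:

DECLARE @last_synced_version BIGINT = 0;  -- version persisted by the previous ETL run

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;

    DECLARE @current_version BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

    SELECT ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION, ct.Id, t.Name
    FROM CHANGETABLE(CHANGES dbo.MyTable, @last_synced_version) AS ct
    LEFT JOIN dbo.MyTable AS t
        ON t.Id = ct.Id;  -- LEFT JOIN so 'D' rows with no base row still come back

COMMIT TRANSACTION;
-- persist @current_version for the next run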
Let me know if you need any more information!
Thanks
Chris
I did some research and haven't found any explanation for this. So, if there's one out there, I'm sorry.
I am using TransactionScope to handle transactions with my SQL Server 2012 database. I'm also using Entity Framework.
The fact is that when I start a new transaction to insert a new record into my table, it locks the entire table, not just that row. So if I run Db.SaveChanges() without committing, then go to Management Studio and try to get the already committed data from the same table, the query hangs and returns no data.
What I would like in this scenario is to lock just the new row, not the entire table.
Is that possible?
Thank you in advance.
One thing to be very careful of when using TransactionScope is that it uses the Serializable isolation level by default, which can cause many locking issues in SQL Server. The default isolation level in SQL Server is Read Committed, so you should consider using that in any transactions that use TransactionScope. You can factor out a method that creates your default TransactionScope and always sets the isolation level to ReadCommitted (see Why is System.Transactions TransactionScope default Isolationlevel Serializable). Also ensure that you use a using block for the TransactionScope, so that if errors occur during transaction processing, the transaction is rolled back (http://msdn.microsoft.com/en-us/library/yh598w02.aspx).
By default, SQL Server uses a pessimistic concurrency model, which means that as DML commands are processed (inserts, updates, deletes), it acquires an exclusive lock on the data that is changing, which prevents other updates or SELECTs from completing until those locks are released. The only way to release those locks is to commit or roll back the transaction. So if you have a transaction that is inserting data into a table, and you run SELECT * FROM myTable before the insert has completed, SQL Server will force your select to wait until the open transaction has been committed or rolled back before returning the results. Normally transactions should be small and fast, and you would not notice as much of an issue. Here is more info on isolation levels and locking (http://technet.microsoft.com/en-us/library/ms378149.aspx).
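To see the blocking described above, here is a minimal sketch with a hypothetical dbo.MyTable, run from two separate sessions:

-- Session 1: open a transaction and leave it uncommitted
BEGIN TRANSACTION;
INSERT INTO dbo.MyTable (Id, Name) VALUES (1, 'test');
-- (no COMMIT yet)

-- Session 2: this blocks until session 1 commits or rolls back
SELECT * FROM dbo.MyTable;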
In your case, it sounds like you are debugging and have hit a breakpoint in the code with the transaction open. For debugging purposes, you can add a NOLOCK hint to your query, which will show the results of the data that has been committed, along with the insert which has not yet been committed. Because NOLOCK returns uncommitted data, be very careful about using it in any production environment. Here is an example of a query with a NOLOCK hint:
SELECT * FROM myTable WITH(NOLOCK)
If you continue to run into locking issues outside of debugging, then you can also check out snapshot isolation (great article by Kendra Little: http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/). There are some special considerations when using snapshot isolation, such as tempdb tuning.
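If you go that route, a minimal sketch of enabling it, assuming a database named MyDb (mind the tempdb considerations from the article):

-- readers see the last committed row version instead of blocking;
-- WITH ROLLBACK IMMEDIATE kicks out open transactions so the ALTER can proceed
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- or, to allow explicit SNAPSHOT transactions rather than changing the default:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;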
Inside a transaction that has a savepoint, I have to do a join with a table that is on a linked server. When I try to do it, I get the error message:
“Cannot use SAVE TRANSACTION within a distributed transaction”
The remote table's data rarely changes; it is almost fixed. Is it possible to tell SQL Server to exclude this table from the transaction? I've tried a NOLOCK hint, but it isn't possible to use that hint on a table on a linked server.
Does anyone know of a workaround? I'm using the old SQL Server 2000.
One thing that you could do is make a local copy of the remote table before you start the transaction (see the sketch below). I know that this may sound like a lot of overhead, but remote joins are frequently a performance problem anyway, and the SOP fix for that is also to make a local copy.
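A minimal sketch of that approach, with hypothetical names (linked server REMOTESRV, remote table RemoteDb.dbo.RefData, local table dbo.Orders):

-- copy the remote data locally, outside the transaction
SELECT RefId, Description
INTO #RefDataLocal
FROM REMOTESRV.RemoteDb.dbo.RefData;

BEGIN TRANSACTION;
SAVE TRANSACTION BeforeJoin;  -- fine now: the transaction never becomes distributed

SELECT o.OrderId, r.Description
FROM dbo.Orders AS o
JOIN #RefDataLocal AS r
    ON r.RefId = o.RefId;

COMMIT TRANSACTION;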
According to this link, the ability to use savepoints in a distributed transaction was dropped in SQL 7.
To allow application migration from Microsoft SQL Server 6.5 when savepoints inside distributed transactions are in use, Microsoft SQL Server 2000 Service Pack 1 introduces a trace flag that allows a savepoint within a distributed transaction. The trace flag is 8599 and can be turned on during the SQL Server startup or within an individual session (that is, prior to enabling a distributed transaction with a BEGIN DISTRIBUTED TRANSACTION statement) by using the DBCC TRACEON command. When trace flag 8599 is set to ON, SQL Server allows you to use a savepoint within a distributed transaction.
So unfortunately, you may either have to drop the bounding ACID transaction, or change the SPROC on the remote server so that it doesn't use SAVEPOINTs.
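If the trace flag quoted above is an option for you, a minimal sketch (the savepoint name is hypothetical):

DBCC TRACEON (8599);  -- current session; use DBCC TRACEON (8599, -1) for all sessions

BEGIN DISTRIBUTED TRANSACTION;
SAVE TRANSACTION BeforeRemoteWork;  -- permitted once the flag is on
-- ... statements that touch local and linked-server tables ...
COMMIT TRANSACTION;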
On a side note: although I see you have tagged this SQL Server 2000, it's worth pointing out that SQL Server 2008 has the remote proc trans option for this.
In this case, if the remote table is not too large, I would copy it to a temp table. If possible, include any filtering to get the number of rows to a minimum; then you can proceed normally. Another option, since the data changes rarely, is to copy the data to a permanent table and check whether anything has changed, to avoid sending too much data over the network every time you run the transaction. You could pull over only the recent changes.
If you wish to handle the transaction at the UI level, and you have Visual Studio 2008 / .NET Framework 3.5 or later, then you can wrap your logic in the TransactionScope class. If you don't have any front end and are working only in SQL Server, kindly ignore my answer...
I have recently tried a big merge of two databases. We recreated the schema from Database 2 in Database 1 and created a script to transfer all data from Database 2 into Database 1. This script takes about 35 minutes to run and has transaction handling with:
BEGIN TRANSACTION
...
IF(@@ERROR <> 0)
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
The full script is a bit sensitive, but here is some SQL that has the same structure: http://pastebin.com/GWJ3ZnkF
We ran the script and all data was transferred without errors. We tested the systems running with the new combined database (and removed access rights to the old database).
But as a last task we wanted to take the old database offline to make sure no one used that database. To do this we used:
ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE
This was bad. After this line of SQL, all the data in the combined database that we had just copied was suddenly gone. At first I assumed the copy wasn't really finished, and so the ROLLBACK IMMEDIATE performed a rollback on my transaction...
But why? Wasn't the transaction already committed?
I also tried running the same script again a few times, but after every attempt no data was copied, even though it said the script was successful. I have no idea why... did it remember my offline rollback somehow?
What is really happening to my connections?
It sounds like you had a pending uncommitted transaction and you forced it to roll back, losing some of the work. The rest is explained by how your scripts are structured. It is unlikely your script had a single transaction from top to bottom; only the last transaction was rolled back, so the database was left in a 'half copied' state. Probably your script does various checks, and this intermediate state sends the script down the 'ELSE' branches, where it does not do the proper work (i.e. it apparently does nothing).
Without seeing the exact script, it's all speculation anyway.
Right now you need to restore the database to a consistent state: the one before your data copy. Use the backup you took before the data move (you did take a backup, right?). For extra credit, make sure your script is idempotent and works correctly on a half-updated database.
I'd double-check to make sure that there are no outstanding transactions. Either go through the file and count the number of BEGIN TRANSACTION vs. COMMIT TRANSACTION lines, or add a statement to the end of it to SELECT @@TRANCOUNT to ensure that there are no open transactions remaining.
If your data has been committed, there should be no way it can be lost by disconnecting you.
WITH ROLLBACK IMMEDIATE:
All incomplete transactions will be rolled back and any other connections to the database will be immediately disconnected.
Sounds like someone got the two databases mixed up, or maybe there is an outstanding transaction... Can you post your entire script?
Ref: ALTER DATABASE.
Rather than only checking @@ERROR, inspect @@TRANCOUNT as well.
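A minimal sketch of that combined check, assuming SQL Server 2005 or later for TRY/CATCH:

SET XACT_ABORT ON;  -- any error dooms the transaction instead of half-continuing
BEGIN TRY
    BEGIN TRANSACTION;
    -- ... the copy statements ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    RAISERROR('Copy failed; transaction rolled back.', 16, 1);
END CATCH;

SELECT @@TRANCOUNT AS OpenTransactions;  -- should be 0 when the script finishes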