Best way to refresh external data in SQL Server

My application uses a lot of data from external sources, and this data must be updated regularly.
When it's time to update, I use the following approach for each table that holds external data: create a new table, populate it with data, and build indexes; then drop the existing table and rename the new one in its place.
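A minimal sketch of the preparation step (my addition; new_TableWithExternalData matches the swap code below, while the column layout and the ExternalSourceStaging source are hypothetical):
CREATE TABLE new_TableWithExternalData (Id int NOT NULL, Payload varchar(100) NOT NULL);
INSERT INTO new_TableWithExternalData (Id, Payload)
SELECT Id, Payload
FROM ExternalSourceStaging;  -- wherever the imported data lands first
CREATE UNIQUE CLUSTERED INDEX IX_new_TableWithExternalData ON new_TableWithExternalData (Id);
The swap itself then runs in a short serializable transaction: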
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;  -- any run-time error aborts and rolls back the whole transaction
EXEC ('DROP TABLE TableWithExternalData');
EXEC sp_rename 'new_TableWithExternalData', 'TableWithExternalData';
COMMIT
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
While this runs, I see errors like these in the application logs:
Invalid object name 'dbo.TableWithExternalData'.
or
Invalid object name 'dbo.TableWithExternalData'.
Could not use view or function 'dbo.vSomeView' because of binding errors.
How can I refresh the external data without causing these errors?
UPDATE
My reasons for using this method were the following:
It is "official": SSDT uses it when generating table change scripts. SSMS uses the same method, but without setting the transaction isolation level.
It is very fast and holds no long-lived locks. Unfortunately, I get no change information from the data source, and some of the tables are rather big, so detecting changes with a full join, or simply doing a TRUNCATE and re-INSERT, is slower.
It is simple to implement and support. In the rare case of a schema change I can just add a column to the newly imported table, and after the next scheduled refresh the new column is in production.
But it produces runtime errors in the application that uses the database, so I need some other approach.
UPDATE 2
The slightly modified code below works.
Still, the question remains: is this the optimal solution for refreshing external data?
The reason for the runtime errors: other transactions wait on the object_id, not the object name, and once the table is dropped, no object with the old ID exists anymore.
The solution: do not drop the old table in the same transaction; rename it instead and drop it afterwards.
More info: http://www.sqlnotes.info/2011/12/02/sp_rename-causes-lock-leaking/
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
EXEC sp_rename 'TableWithExternalData', 'old_TableWithExternalData';  -- the old object survives under a new name, so its object_id stays valid
EXEC sp_rename 'new_TableWithExternalData', 'TableWithExternalData';  -- swap the freshly loaded table in
COMMIT
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
EXEC ('DROP TABLE old_TableWithExternalData');  -- drop the old table only after the transaction has committed
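One practical wrinkle worth guarding against (my addition, not part of the original post): if a previous refresh died between the COMMIT and the final DROP, old_TableWithExternalData still exists and the first sp_rename will fail. A defensive pre-check, sketched here:
IF OBJECT_ID('dbo.old_TableWithExternalData', 'U') IS NOT NULL
    DROP TABLE dbo.old_TableWithExternalData;  -- clean up a leftover from an interrupted refresh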

Related

How to prevent deadlock of table in SQL Server

I have a table where values can be altered by different users, and it holds around 100k rows.
I made a stored procedure that begins a transaction (BEGIN TRAN) and, at the end, either commits or rolls back the changes depending on the situation.
The problem we're encountering now is a lock on that table. For example, while the first user is executing the stored procedure through the system, the other users can neither select from the table nor execute the stored procedure themselves, because the table is locked.
So is there any way I can avoid the lock other than using dirty reads? Or a way I can roll back the changes without using BEGIN TRAN, since that is the main reason the table is locked?
Yes. As a quick and dirty fix, you can at least enable snapshot-based isolation for the database. Readers then see the last committed version of the rows instead of waiting on the writers' locks.
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON;  -- allows SET TRANSACTION ISOLATION LEVEL SNAPSHOT
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON;   -- makes plain READ COMMITTED use row versioning
See the documentation for details.
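A quick way to see the effect (a sketch; dbo.Accounts and its columns are hypothetical). Run the two sessions side by side in separate SSMS windows:
-- session 1: leave an uncommitted change pending
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE Id = 1;
-- session 2: with READ_COMMITTED_SNAPSHOT ON this returns the last
-- committed Balance immediately; with it OFF, the SELECT blocks
SELECT Balance FROM dbo.Accounts WHERE Id = 1;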

SQL Server update blocked by another transaction, concurrency in update

Two SPs are executed one after another, and the second one is blocked by the first. Both are trying to update the same table. The two SPs are as follows:
CREATE PROCEDURE [dbo].[SP1]
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION ImpSchd
    UPDATE Table t1 ...  -- updating a set of [n1,n2....n100] records
    COMMIT TRANSACTION ImpSchd
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END
2.
CREATE PROCEDURE [dbo].[SP2]
AS
BEGIN
    UPDATE Table t1 ...  -- updating a set of [n101,n102.....n200] records
END
My question is: when SP1 is running under snapshot isolation, why does it block SP2 (both are updating different sets of records)?
If I run the first SP for two different sets of records simultaneously, it works perfectly.
How can I overcome this situation?
If snapshot isolation has to be set in every SP that updates the same table, that would be a much larger change.
And if two SPs have to update the same records in a table, how should I handle that (each SP updates different columns)?
Isolation levels only govern how reads behave; they do not stop DML from taking locks. Here, each UPDATE takes an IX lock on the table and the page, followed by an X lock on every row it modifies. Since you are updating in bulk, the table itself may have been locked due to lock escalation. Hope this helps.
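To check whether escalation is what is happening, you can inspect the locks the blocking session holds while SP1 runs (a sketch; replace 53 with the actual session id, e.g. from sp_who2):
SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 53
GROUP BY resource_type, request_mode, request_status;
-- an OBJECT-level X or IX lock with few KEY locks suggests escalation to a table lock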

Use of transaction in MSSQL, update several tables

I need a sanity check: a customer of mine says he is seeing data at a time when I think he should not.
For example, when updating two tables:
BEGIN TRANSACTION;
update table1...
update table2...
COMMIT TRANSACTION;
The question: is it possible for a separate connection to the database to read the updates to table1 before the updates to table2 are done?
Yes, it is, if the reading connection sets its isolation level to READ UNCOMMITTED; see https://msdn.microsoft.com/en-us/library/ms173763(v=sql.110).aspx.
It is easy to test: start two SQL Server Management Studio windows, run the transaction without committing in one window, then try to select in the other window under different isolation levels.
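A sketch of that test (table1 and SomeColumn are placeholders for whatever you update first):
-- window 1: update but do not commit yet
BEGIN TRANSACTION;
UPDATE table1 SET SomeColumn = 'new value';
-- window 2: a dirty read sees the uncommitted change; READ COMMITTED would block instead
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM table1;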
Yes, it is possible if the reader's isolation level is set to READ UNCOMMITTED.
Check the isolation level currently in effect with:
DBCC USEROPTIONS
https://msdn.microsoft.com/en-us/library/ms173763.aspx
http://blog.sqlauthority.com/2010/05/24/sql-server-check-the-isolation-level-with-dbcc-useroptions/

TSQL logging inside transaction

I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
-- start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback -- or commit, depending on the outcome
-- end code
You could say "just write the log before the transaction starts", but that is not so easy, because the transaction starts before this stored procedure runs (i.e. the code is part of a bigger transaction).
So, in short: is there a way to run a statement inside a transaction that is not part of that transaction? I hope my question makes sense.
Use a table variable (declared with @, not a #temp table) to hold the log info. Table variables are not affected by a transaction rollback.
See this article.
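A minimal sketch of the pattern (assuming, for illustration, that dbo.logtable has a single varchar message column):
DECLARE @log TABLE (msg varchar(200));
BEGIN TRAN;
INSERT INTO @log VALUES ('step 1 done');
-- [[main code here]]
ROLLBACK;  -- the table variable keeps its rows
INSERT INTO dbo.logtable (msg)  -- persist the surviving log entries
SELECT msg FROM @log;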
I do this one of two ways, depending on my needs at the time. Both involve using a variable, since variables retain their values after a rollback.
1) Declare a @Log varchar(max) variable and keep appending to it as you go through the transaction: SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. I insert the @Log value into the real log table after the commit or the rollback, as necessary; usually only when there is an error (in the CATCH block) or when I'm trying to debug a problem.
2) Declare a @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)) variable and insert into it as the transaction progresses. I like using the OUTPUT clause to capture the actual IDs of affected rows (along with message columns like 'DELETE item 1234') into this table. I insert its contents into the actual log table after the commit or the rollback, as necessary.
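A sketch of the OUTPUT variant (dbo.Items, ItemID, and Discontinued are hypothetical names):
DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000));
BEGIN TRAN;
DELETE dbo.Items
OUTPUT 'DELETE item ' + CAST(deleted.ItemID AS varchar(10)) INTO @LogTable (RowValue)
WHERE Discontinued = 1;
ROLLBACK;  -- @LogTable still holds one row per deleted item
SELECT RowID, RowValue FROM @LogTable;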
If the parent transaction rolls back, the logging data rolls back as well; SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging: it can open its own connection to the database outside the transaction, and insert and commit the log data there.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure and was able to alter it. As he put it: "The C# code starts a transaction, then calls a s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc."
I had a similar situation, so I modified the stored procedure to log my desired output to a table, and then put a time delay at the end of the stored procedure:
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
Then, in another SSMS window, I quickly read the table at the READ UNCOMMITTED isolation level (the WITH (NOLOCK) hint below):
SELECT * FROM dbo.NicksLogTable WITH (NOLOCK);
It's not the solution you want if you need a permanent record of the logs (including where transactions get rolled back), but it suits my purpose: being able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year-old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested-transaction behaviour, you can use named transactions with savepoints:
begin transaction a
create table #a (i int)
select * from #a
save transaction b  -- a savepoint, not a real nested transaction
create table #b (i int)
select * from #a
select * from #b
rollback transaction b  -- undoes only the work since the savepoint: #b is gone, #a remains
select * from #a
rollback transaction a  -- undoes everything
In SQL Server, if you want a 'sub-transaction' you should use SAVE TRANSACTION xxxx, which works like an Oracle savepoint.
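A sketch of the same idea inside an error handler (my addition; dbo.SomeTable is hypothetical, and this assumes the error does not doom the transaction, i.e. XACT_STATE() is still 1 in the CATCH block):
BEGIN TRANSACTION;
SAVE TRANSACTION BeforeRiskyStep;
BEGIN TRY
    DELETE dbo.SomeTable WHERE Id = 0;  -- the risky step
END TRY
BEGIN CATCH
    IF XACT_STATE() = 1
        ROLLBACK TRANSACTION BeforeRiskyStep;  -- undo only the risky step
END CATCH;
COMMIT;  -- the outer transaction still commits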

Do DB locks require transactions?

Is it true that "every statement (SELECT/INSERT/DELETE/UPDATE) has an isolation level, regardless of transactions"?
I have a scenario in which a set of UPDATE statements runs inside a transaction (READ COMMITTED), and another set of SELECT statements runs outside any transaction. In this case, while the first set is executing, the other one waits.
And if I enable READ_COMMITTED_SNAPSHOT for the database, a deadlock occurs:
ALTER DATABASE Amelio SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE Amelio SET READ_COMMITTED_SNAPSHOT ON
To solve this problem, do I need to put the SELECT statements in a TransactionScope?
On SQL Server, every statement runs in an implicit or explicit transaction: explicit if you issue BEGIN/COMMIT/ROLLBACK TRANSACTION yourself, implicit (autocommit) if you issue nothing of the kind.
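For illustration (dbo.T and its columns are placeholders):
-- explicit: both statements commit or roll back together
BEGIN TRANSACTION;
UPDATE dbo.T SET x = 1;
UPDATE dbo.T SET y = 2;
COMMIT;
-- implicit (autocommit): each statement is its own transaction
UPDATE dbo.T SET x = 1;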
Start your snapshot transaction before the UPDATE query starts; otherwise SQL Server has no chance to put the row versions into tempdb, and the UPDATE query still holds its locks.
Another way, without snapshot isolation, is to use SELECT <columns> FROM <table> WITH (NOLOCK), which tells SQL Server to read the rows no matter what (aka READ UNCOMMITTED). As a query hint it overrides the isolation level, even with your settings. This can work if you don't care which state of the row is read, but caution is needed when evaluating the data you get back.
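A concrete sketch of the hint (dbo.Orders and its columns are placeholders):
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)  -- may return uncommitted rows that later roll back
WHERE CustomerID = 42;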
