The following (sanitized) code sometimes produces these errors:
Cannot drop the table 'database.dbo.Table', because it does not exist or you do not have permission.
There is already an object named 'Table' in the database.
begin transaction
if exists (select 1 from database.Sys.Tables where name ='Table')
begin drop table database.dbo.Table end
Select top 3000 *
into database.dbo.Table
from OtherTable
commit
select * from database.dbo.Table
The code can be run multiple times simultaneously. Anyone know why it breaks?
Can I ask why you're doing this in the first place? You should really consider using temporary tables or coming up with another solution.
I'm not positive that DDL statements behave the same way in transactions as DML statements do, and I have seen a blog post describing odd behavior around creating stored procedures inside a transaction.
Aside from that, you might want to verify your transaction isolation level and set it to SERIALIZABLE.
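For reference, this is roughly what that would look like; whether it actually closes the drop/create race here is a separate question, since it mainly affects row and range locking rather than DDL:
-- Request the strictest isolation level before starting the transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- ... the existing drop / SELECT ... INTO logic would go here ...
COMMIT;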
Edit
Based on a quick test, I ran the same sql in two different connections, and when I created the table but didn't commit the transaction, the second transaction blocked. So it looks like this should work. I would still caution against this type of design.
In what part of the code are you preventing multiple accesses to this resource?
begin transaction
if exists (select 1 from database.Sys.Tables where name ='Table')
begin drop table database.dbo.Table end
Select top 3000 *
into database.dbo.Table
from OtherTable
commit
BEGIN TRANSACTION isn't doing what you need here. It only sets up a commit/rollback scenario for the rows added to tables.
The (if exists, drop) is a race condition, along with the re-creation of the table with (select ... into). Multiple people dropping into that code all at once will most certainly cause all kinds of errors: some creating tables that others have just destroyed, others dropping tables that don't exist anymore, and others dropping tables that some are busy inserting into. UGH!
Consider the temp table suggestions of others, or using an application lock to block others from entering this code at all if the critical resource is busy. Transactions on drop/create are not what you want.
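As a sketch of the application-lock idea (the lock name and timeout are arbitrary, and the object names mirror the sanitized ones from the question):
BEGIN TRANSACTION;
-- Serialize access to the drop/create section across all sessions
DECLARE @lockResult int;
EXEC @lockResult = sp_getapplock
    @Resource    = 'RebuildDboTable',   -- arbitrary lock name
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',       -- released automatically at commit/rollback
    @LockTimeout = 10000;               -- wait up to 10 seconds
IF @lockResult >= 0                     -- 0 = granted, 1 = granted after waiting
BEGIN
    IF OBJECT_ID('database.dbo.Table', 'U') IS NOT NULL
        DROP TABLE database.dbo.[Table];
    SELECT TOP 3000 * INTO database.dbo.[Table] FROM OtherTable;
END
COMMIT;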
If you are just using this table during this process, I would suggest using a temp table or, depending on how much data is involved, a RAM table. I use RAM tables frequently to avoid any transaction costs and save on disk activity.
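For example, the snippet from the question could target a session-local temp table instead; every session gets its own copy, so simultaneous runs cannot collide (a sketch, with #Results as an arbitrary name):
IF OBJECT_ID('tempdb..#Results', 'U') IS NOT NULL
    DROP TABLE #Results;
SELECT TOP 3000 *
INTO #Results
FROM OtherTable;
SELECT * FROM #Results;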
Related
I have created a database transaction and I am inserting records into Table1 of an H2 DB, but no commit has been done yet.
Partway through this process, after inserting half of the records, I execute one CREATE statement (creating Table2).
Table2 is created, and along with it the previous INSERT statements also get committed to the DB.
After this, I insert more records into Table1; if an insertion fails, I still see the records in Table1 which were inserted before the CREATE statement for Table2.
Due to this, I see some records in DB even after transaction failure. I was expecting ZERO records in DB.
Why is this happening?
Because CREATE TABLE is a DDL statement, not a DML statement, and DDL statements usually commit any open transaction.
If you want to avoid this you should create all objects you need during the import before you import the first record.
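A rough sketch of that ordering (the column lists are made up; SET AUTOCOMMIT FALSE is one way to open a transaction in H2, so adjust to however your application manages its transactions):
-- Create every object the import needs before the data transaction starts
CREATE TABLE Table2 (id INT PRIMARY KEY, note VARCHAR(100));
-- Now run the import; with no DDL inside, a failure can be rolled back cleanly
SET AUTOCOMMIT FALSE;
INSERT INTO Table1 (id, name) VALUES (1, 'first');
INSERT INTO Table1 (id, name) VALUES (2, 'second');
COMMIT;   -- or ROLLBACK if any insert fails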
EDIT 2019-03-22
Although this topic is a bit old, I'd like to mention one thing which could help. You could create a procedure which uses PRAGMA AUTONOMOUS_TRANSACTION and executes an SQL statement via EXECUTE IMMEDIATE:
PROCEDURE exec_sql_autonomous(p_sql VARCHAR2)
AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own transaction, independent of the caller's
BEGIN
  EXECUTE IMMEDIATE p_sql;
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    RAISE;
END;
This way you may be able to create a table while the data inserting transaction is in progress without committing it due to the table creation.
In MS SQL Server, I'm using a global temp table to store session related information passed by the client and then I use that information inside triggers.
Since the same global temp table can be used in different sessions, and it may or may not exist when I want to write into it (depending on whether all the previous sessions which used it are already closed), I check for the global temp table's existence and, based on that, create it before I write into it.
IF OBJECT_ID('tempdb..##VTT_CONTEXT_INFO_USER_TASK') IS NULL
CREATE TABLE ##VTT_CONTEXT_INFO_USER_TASK (
session_id smallint,
login_time datetime,
HstryUserName VDT_USERNAME,
HstryTaskName VDT_TASKNAME
)
MERGE ##VTT_CONTEXT_INFO_USER_TASK AS target
USING (SELECT @@SPID, @HstryUserName, @HstryTaskName) AS source (session_id, HstryUserName, HstryTaskName)
ON (target.session_id = source.session_id)
WHEN MATCHED THEN
UPDATE SET HstryUserName = source.HstryUserName, HstryTaskName = source.HstryTaskName
WHEN NOT MATCHED THEN
INSERT VALUES (@@SPID, @LoginTime, source.HstryUserName, source.HstryTaskName);
The problem is that between my check for the table existence and the MERGE statement, SQL Server may drop the temp table if all the sessions which were using it before happen to close in that exact instance (this actually happened in my tests).
Is there a best practice on how to avoid this kind of concurrency issues, that a table is not dropped between the check for its existence and its subsequent use?
The notion of "global temporary table" and "trigger" just do not click. Tables are permanent data stores, as are their attributes -- including triggers. Temporary tables are dropped when the server is re-started. Why would anyone design a system where a permanent block of code (trigger) depends on a temporary shared storage mechanism? It seems like a recipe for failure.
Instead of a global temporary table, use a real table. If you like, put a helpful prefix such as temp_ in front of the name. If the table is being shared by databases, then put it in a database where all code has access.
Create the table once and leave it there (deleting the rows is fine) so the trigger code can access it.
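A sketch of that approach (the column types here are only illustrative; the original uses the VDT_USERNAME/VDT_TASKNAME user-defined types):
-- Permanent, session-keyed context table that trigger code can always rely on
CREATE TABLE dbo.temp_context_info_user_task (
    session_id    smallint      NOT NULL PRIMARY KEY,
    login_time    datetime      NOT NULL,
    HstryUserName nvarchar(128) NOT NULL,
    HstryTaskName nvarchar(128) NOT NULL
);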
I'll start by saying that, in the long term, I will follow Gordon's advice, i.e. I will take the necessary steps to introduce a normal table in the database to store the client application information which needs to be accessible in the triggers.
But since this was not really possible right now because of time constraints (it takes weeks to get the necessary formal approvals for a new normal table), I came up with a solution for preventing SQL Server from dropping the global temp table between the check for its existence and the MERGE statement.
There is some information out there about when a global temp table is dropped by SQL Server; my personal tests showed that SQL Server drops a global temp table the moment the session which created it is closed and any other transactions started in other sessions which changed data in that table are finished.
My solution was to fake data changes on the global temp table even before I check for its existence. If the table exists at that moment, SQL Server will then know that it needs to keep it until the current transaction finishes, and it cannot be dropped anymore after the check for its existence. The code looks now like this (properly commented, since it is kind of a hack):
-- Faking a delete on the table ensures that SQL Server will keep the table until the end of the transaction
-- Since ##VTT_CONTEXT_INFO_USER_TASK may actually not exist, we need to fake the delete inside TRY .. CATCH
-- FUTURE 2016, Feb 03: A cleaner solution would use a real table instead of a global temp table.
BEGIN TRY
-- Because schema errors are checked at compile time, they cannot be caught using TRY alone; wrapping the query in sp_executesql works around this
DECLARE @QueryText NVARCHAR(100) = 'DELETE ##VTT_CONTEXT_INFO_USER_TASK WHERE 0 = 1'
EXEC sp_executesql @QueryText
END TRY
BEGIN CATCH
-- nothing to do here (see comment above)
END CATCH
IF OBJECT_ID('tempdb..##VTT_CONTEXT_INFO_USER_TASK') IS NULL
CREATE TABLE ##VTT_CONTEXT_INFO_USER_TASK (
session_id smallint,
login_time datetime,
HstryUserName VDT_USERNAME,
HstryTaskName VDT_TASKNAME
)
MERGE ##VTT_CONTEXT_INFO_USER_TASK AS target
USING (SELECT @@SPID, @HstryUserName, @HstryTaskName) AS source (session_id, HstryUserName, HstryTaskName)
ON (target.session_id = source.session_id)
WHEN MATCHED THEN
UPDATE SET HstryUserName = source.HstryUserName, HstryTaskName = source.HstryTaskName
WHEN NOT MATCHED THEN
INSERT VALUES (@@SPID, @LoginTime, source.HstryUserName, source.HstryTaskName);
Although I would call it a "use it at your own risk" solution, it does prevent the use of the global temp table in other sessions from affecting its use in the current one, which was the concern that made me start this thread.
Thanks all for your time, from the text formatting edits to the replies!
In a script used for interactive analysis of subsets of data, it is often useful to store the results of queries into temporary tables for further analysis.
Many of my analysis scripts contain this structure:
CREATE TABLE #Results (
a INT NOT NULL,
b INT NOT NULL,
c INT NOT NULL
);
INSERT INTO #Results (a, b, c)
SELECT a, b, c
FROM ...
SELECT *
FROM #Results;
In SQL Server, temporary tables are connection-scoped, so the query results persist after the initial query execution. When the subset of data I want to analyze is expensive to calculate, I use this method instead of using a table variable because the subset persists across different batches of queries.
The setup part of the script is run once, and following queries (SELECT * FROM #Results is a placeholder here) are run as often as necessary.
Occasionally, I want to refresh the subset of data in the temporary table, so I run the entire script again. One way to do this would be to create a new connection by copying the script to a new query window in Management Studio, but I find this difficult to manage.
Instead, my usual workaround is to precede the create statement with a conditional drop statement like this:
IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NOT NULL
BEGIN
DROP TABLE #Results;
END;
This statement correctly handles two situations:
On the first run when the table does not exist: do nothing.
On subsequent runs when the table does exist: drop the table.
Production scripts written by me always use this method because it raises no errors in either of the two expected situations.
Some equivalent scripts written by my fellow developers sometimes handle these two situations using exception handling:
BEGIN TRY DROP TABLE #Results END TRY BEGIN CATCH END CATCH
I believe in the database world it is better always to ask permission than seek forgiveness, so this method makes me uneasy.
The second method swallows an error while taking no action to handle non-exceptional behavior (table does not exist). Also, it is possible that an error would be raised for a reason other than that the table does not exist.
The Wise Owl warns about the same thing:
Of the two methods, the [OBJECT_ID method] is more difficult to understand but
probably better: with the [BEGIN TRY method], you run the risk of trapping
the wrong error!
But it does not explain what the practical risks are.
In practice, the BEGIN TRY method has never caused problems in systems I maintain, so I'm happy for it to stay there.
What possible dangers are there in managing temporary table existence using BEGIN TRY method? What unexpected errors are likely to be concealed by the empty catch block?
What possible dangers? What unexpected errors are likely to be concealed?
If the TRY/CATCH block is inside a transaction, it will cause a failure.
BEGIN
BEGIN TRANSACTION t1;
SELECT 1
BEGIN TRY DROP TABLE #Results END TRY BEGIN CATCH END CATCH
COMMIT TRANSACTION t1;
END
This batch will fail with an error like this:
Msg 3930, Level 16, State 1, Line 7
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
Books Online documents this behavior:
Uncommittable Transactions and XACT_STATE
If an error generated in a TRY block causes the state of the current transaction to be invalidated, the transaction is classified as an uncommittable transaction. An error that ordinarily ends a transaction outside a TRY block causes a transaction to enter an uncommittable state when the error occurs inside a TRY block. An uncommittable transaction can only perform read operations or a ROLLBACK TRANSACTION. The transaction cannot execute any Transact-SQL statements that would generate a write operation or a COMMIT TRANSACTION.
Now replace the TRY/CATCH with the test method:
BEGIN
BEGIN TRANSACTION t1;
SELECT 1
IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NOT NULL
BEGIN
DROP TABLE #Results;
END;
COMMIT TRANSACTION t1;
END
and run it again. The transaction will commit without any error.
A better solution may be to use a table variable rather than a temporary table, i.e.:
declare @results table(
a INT NOT NULL,
b INT NOT NULL,
c INT NOT NULL
);
I also think that a TRY block is dangerous because it can hide an unexpected problem. Some programming languages can catch only selected errors and let unexpected ones propagate; if your programming language has this functionality, then use it (T-SQL can't catch a specific error).
For your scenario, I can say that I code it exactly like you do, with this try/catch block.
The desirable behavior would be:
begin try
drop table #my_temp_table
end try
begin catch __table_dont_exists_error__
end catch
But this doesn't exist! So you can write something like:
begin try
drop table #my_temp_table
end try
begin catch
declare @err_n int, @err_d varchar(MAX)
SELECT
@err_n = ERROR_NUMBER() ,
@err_d = ERROR_MESSAGE() ;
IF @err_n <> 3701
raiserror( @err_d, 16, 1 )
end catch
This will re-raise the error whenever the failure while dropping the table is something other than 'table does not exist'.
Note that for your issue all this code isn't really worth it, but it can be useful in other situations. For your problem, the elegant solution is to drop the table only if it exists, or to use a table variable.
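On SQL Server 2016 and later there is also a one-statement form of the conditional drop, which avoids both the OBJECT_ID check and the TRY/CATCH:
-- Does nothing if the table is absent, drops it otherwise (SQL Server 2016+)
DROP TABLE IF EXISTS #my_temp_table;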
Not in your question, but possibly overlooked, are the resources used by the temp table. I always drop the table at the end of the script so it does not tie up resources. What if you put a million rows in the table? I also test for the table at the start of the script to handle the case where there was an error in the last run and the table was not dropped. If you want to reuse the temp table, then at least clear out the rows.
A table variable is another option. It is lighter weight but has limitations. Avoid a table variable if you are going to use it in a query join, as the query optimizer does not handle a table variable as well as it does a temp table.
SQL documentation:
If more than one temporary table is created inside a single stored procedure or batch, they must have different names.
If a local temporary table is created in a stored procedure or application that can be executed at the same time by several users, the Database Engine must be able to distinguish the tables created by the different users. The Database Engine does this by internally appending a numeric suffix to each local temporary table name. The full name of a temporary table as stored in the sysobjects table in tempdb is made up of the table name specified in the CREATE TABLE statement and the system-generated numeric suffix. To allow for the suffix, table_name specified for a local temporary name cannot exceed 116 characters.
Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped by using DROP TABLE:
A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process that called the stored procedure that created the table.
All other local temporary tables are dropped automatically at the end of the current session.
Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
Me and another developer are discussing which type of table would be more appropriate for our task. It's basically going to be a cache that we're going to truncate at the end of the day. Personally, I don't see any reason to use anything other than a normal table for this, but he wants to use a global temp table.
Are there any advantages to one or the other?
Use a normal table in tempdb if this is just transient data that you can afford to lose on service restart, or a user database if the data is not that transient.
tempdb is slightly more efficient in terms of logging requirements.
Global temp tables get dropped once the connection that created the table is closed.
Edit: Following @cyberkiwi's edit. BOL does definitely explicitly say:
Global temporary tables are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server.
In my test I wasn't able to get this behaviour though either.
Connection 1
CREATE TABLE ##T (i int)
INSERT INTO ##T values (1)
SET CONTEXT_INFO 0x01
Connection 2
INSERT INTO ##T VALUES(4)
WAITFOR DELAY '00:01'
INSERT INTO ##T VALUES(5)
Connection 3
SELECT OBJECT_ID('tempdb..##T')
declare @killspid varchar(10) = (select 'kill ' + cast(spid as varchar(5)) from sysprocesses where context_info=0x01)
exec (@killspid)
SELECT OBJECT_ID('tempdb..##T') /* NULL - but connection 2 is still running, let alone disconnected! */
Global temp table
-ve: As soon as the connection that created the table goes out of scope, it takes the table with it. This is damaging if you use connection pooling, which can swap connections constantly and possibly reset it.
-ve: You need to keep checking to see if the table already exists (after restart) and create it if not
+ve: Simple logging in tempdb reduces I/O and CPU activity
Normal table
+ve: Normal logging keeps your cache with your main db. If your "cache" is maintained but is still mission critical, this keeps it consistent together with the db
-ve: Follows from the above: more logging
+ve: The table is always around, and for all connections
If the cache is something like a quick lookup summary for business/critical data, even if it is reset/truncated at the end of the day, I would prefer to keep it as a normal table in the db proper.
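A rough outline of the normal-table variant described above (the names and the end-of-day job are assumptions):
-- Permanent cache table: survives restarts and is visible to every connection
CREATE TABLE dbo.DailyLookupCache (
    LookupKey   int           NOT NULL PRIMARY KEY,
    LookupValue nvarchar(200) NOT NULL,
    LoadedAt    datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
);
-- End-of-day reset, e.g. from a SQL Agent job
TRUNCATE TABLE dbo.DailyLookupCache;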
I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback
commit
-- end code
You could say just do the log before the transaction starts but that is not as easy because the transaction starts before this S-Proc is run (i.e. the code is part of a bigger transaction)
So, in short, is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (@temp) to hold the log info. Table variables survive a transaction rollback.
See this article.
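A minimal sketch of the idea; dbo.ProcessLog is a made-up destination table:
DECLARE @log table (LoggedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
                    Msg varchar(1000) NOT NULL);
BEGIN TRANSACTION;
INSERT INTO @log (Msg) VALUES ('starting main work');
-- [[main code here]]
INSERT INTO @log (Msg) VALUES ('something went wrong, rolling back');
ROLLBACK;   -- the table variable keeps its rows
-- Persist the messages after the rollback
INSERT INTO dbo.ProcessLog (LoggedAt, Msg)
SELECT LoggedAt, Msg FROM @log;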
I do this one of two ways, depending on my needs at the time. Both involve using variables, which retain their values following a rollback.
1) Create a DECLARE @Log varchar(max) variable and use this: SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to this as you go through the transaction. I'll insert this into the log after the commit or the rollback as necessary. I'll usually only insert the @Log value into the real log table when there is an error (in the CATCH block) or if I'm trying to debug a problem.
2) Create a DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)) variable and insert into it as you progress through your transaction. I like using the OUTPUT clause to insert the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I will insert this table into the actual log table after the commit or the rollback as necessary.
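For example, the OUTPUT clause can write directly into such a table variable; dbo.Items and its columns are made up here:
DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000));
DELETE dbo.Items
OUTPUT 'DELETE item ' + CAST(deleted.ItemID AS varchar(20)) INTO @LogTable (RowValue)
WHERE IsExpired = 1;
-- Later, after COMMIT or ROLLBACK, copy @LogTable into the real log table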
If the parent transaction rolls back, the logging data will roll back as well - SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging. This can open its own connection to the database outside the transaction and enter and commit the log data.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure, and is able to alter the stored proc.
The c# code starts a transaction, then calls a s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc
I had a similar situation. So I modified the stored procedure to log my desired output to a table. Then I put a time delay at the end of the stored procedure
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH(NOLOCK) below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose to be able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested transaction behaviour you can use named transactions:
begin transaction a
create table #a (i int)
select * from #a
save transaction b        -- a savepoint, not a true nested transaction
create table #b (i int)
select * from #a
select * from #b
rollback transaction b    -- rolls back to the savepoint: #b is gone, #a survives
select * from #a
rollback transaction a    -- rolls back the outer transaction: #a is gone too
In SQL Server, if you want a 'sub-transaction' you should use SAVE TRANSACTION xxxx, which works like an Oracle savepoint.