DROP TABLE fails for temp table - sql-server

I have a client application that creates a temp table, then performs a bulk insert into the temp table, then executes some SQL using the table before dropping it.
Pseudo-code:
open connection
begin transaction
CREATE TABLE #Temp ([Id] int NOT NULL)
bulk insert 500 rows into #Temp
UPDATE [OtherTable] SET [Status]=0 WHERE [Id] IN (SELECT [Id] FROM #Temp) AND [Group]=1
DELETE FROM #Temp WHERE [Id] IN (SELECT [Id] FROM [OtherTable] WHERE [Group]=1)
INSERT INTO [OtherTable] ([Group], [Id]) SELECT 1 as [Group], [DocIden] FROM #Temp
DROP TABLE #Temp
COMMIT TRANSACTION
CLOSE CONNECTION
This is failing with an error on the DROP statement:
Cannot drop the table '#Temp', because it does not exist or you do not have permission.
I can't imagine how this failure could occur without something else going on first, but I don't see any other failures occurring before this.
Is there anything that I'm missing that could be causing this to happen?

Possibly something is happening in the session in between?
Try checking for the existence of the table before it's dropped:
IF object_id('tempdb..#Temp') is not null
BEGIN
DROP TABLE #Temp
END
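On SQL Server 2016 and later, the same check-and-drop can be done in a single statement:
DROP TABLE IF EXISTS #Temp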

I've tested this on SQL Server 2005, and you can drop a temporary table in the transaction that created it:
begin transaction
create table #temp (id int)
drop table #temp
commit transaction
Which version of SQL Server are you using?
You might reconsider why you are dropping the temp table at all. A local temporary table is automatically deleted when the connection ends. There's usually no need to drop it explicitly.
A global temporary table starts with a double hash (e.g. ##MyTable), but even a global temp table is automatically deleted once no connection refers to it.
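A minimal sketch of the difference, assuming two separate connections:
-- Connection 1:
CREATE TABLE #Local (Id int)   -- visible only to the session that created it
CREATE TABLE ##Global (Id int) -- visible to every session
-- Connection 2:
SELECT * FROM ##Global -- works while connection 1 is still open
SELECT * FROM #Local   -- fails: Invalid object name '#Local'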

I think you aren't creating the table at all. If your actual code reads
CREATE TABLE #Temp ([Id] AS int)
(note the AS), that statement is incorrect: "[Id] AS ..." is computed-column syntax and expects an expression, not a type. Please write it as
CREATE TABLE #Temp ([Id] int)
and see if it works.

BEGIN TRAN
IF object_id('tempdb..#TABLE_NAME') is not null
BEGIN
DROP TABLE #TABLE_NAME
END
COMMIT TRAN
Note: replace TABLE_NAME with your table name. Local temporary tables always live in tempdb, so the object_id lookup must go through tempdb rather than your own database.

Related

How to migrate Azure SQL Database to another Azure SQL Database

I have one SQL Server in the Azure portal, and that server has two SQL databases: TestDB1 and TestDB2, where TestDB2 is a copy of TestDB1. We used TestDB2 for testing, and it now has more data than TestDB1. I want to migrate only the missing data from TestDB2 to TestDB1, as both have the same schema. How do I do it?
Something like this might work. I've tested it locally and it does merge in test data from a separate database; the full example is shown below.
USE [Playground2] -- Swap for your database name
CREATE TABLE MyTable (
Id BIGINT NOT NULL IDENTITY(1,1) CONSTRAINT PK_MyTable PRIMARY KEY,
[Name] NVARCHAR(50),
[Age] INT
)
INSERT INTO MyTable([Name], [Age])
VALUES('Andrew', '28'),
('Robert', '38'),
('James', '40'),
('Robin', '40'),
('Peter', '56'), -- second database has this extra data
('Steve', '22') -- second database has this extra data
GO
USE [Playground] -- Swap for your database name
CREATE TABLE MyTable (
Id BIGINT NOT NULL IDENTITY(1,1) CONSTRAINT PK_MyTable PRIMARY KEY,
[Name] NVARCHAR(50),
[Age] INT
)
INSERT INTO MyTable([Name], [Age])
VALUES('Andrew', '28'),
('Robert', '38'),
('James', '40'),
('Robin', '40')
GO
-- Check that the tables have slightly different data
SELECT * FROM Playground.dbo.MyTable
SELECT * FROM Playground2.dbo.MyTable
BEGIN TRANSACTION
BEGIN TRY
SET IDENTITY_INSERT dbo.MyTable ON
MERGE INTO dbo.MyTable AS TGT
USING [Playground2].dbo.MyTable AS SRC -- Note that we point to the other database here seeing as it is on the same SQL instance
ON TGT.Id = SRC.Id
WHEN MATCHED THEN
UPDATE SET
TGT.[Name] = SRC.[Name],
TGT.[Age] = SRC.[Age]
WHEN NOT MATCHED THEN
INSERT(Id, [Name], [Age])
VALUES(SRC.Id, SRC.[Name], SRC.[Age])
OUTPUT $action AS [Action],
deleted.[Name] AS OldName,
inserted.[Name] AS [NewName],
deleted.[Age] AS OldAge,
inserted.[Age] AS NewAge;
SET IDENTITY_INSERT dbo.MyTable OFF
SELECT * FROM dbo.MyTable
ROLLBACK TRANSACTION -- Change to COMMIT TRANSACTION when you are happy with the results
END TRY
BEGIN CATCH
PRINT 'Rolling back changes, there was an error!!'
ROLLBACK TRANSACTION
DECLARE @Msg NVARCHAR(MAX)
SELECT @Msg = ERROR_MESSAGE()
RAISERROR('Error Occurred: %s', 20, 101, @Msg) WITH LOG
END CATCH
There will also be tools to do this, but this could be one answer. Cheers!

How to create a rollback of a table that you updated in SQL

I have an SQL query that updates a table based on a condition. I am creating a migration file in Visual Studio; how do I go about adding a rollback to ensure that the changes I applied are reverted to how they were?
INSERT INTO Table(ID,Name,SiteID,Surname)
SELECT
(SELECT MAX(ID) FROM Table) + ROW_NUMBER() OVER (ORDER BY ID),
Name,
10100,
Surname
FROM Table
WHERE SiteID = 10000 --copies the SiteID 10000 rows and creates new entries with SiteID 10100
Can you advise on how to create a rollback so that it will delete all the 10100 entries and go back to being how it was?
Can I just say:
delete
from table
where siteID=10100
Is this efficient for a rollback?
A DELETE statement is just that: a DELETE statement. Rolling back means undoing whatever has been done so far in the uncommitted transaction. That may not be deleting; it might be undoing an UPDATE, restoring a row that was previously deleted, or even reverting DDL changes.
In your case, if you want to remove the rows you inserted earlier, then a DELETE statement is what you're after. That's not rolling back, though. Here's an example of a ROLLBACK (and COMMIT):
--BEGIN a Transaction
BEGIN TRANSACTION Creation;
--Create a table
CREATE TABLE #Sample (ID int IDENTITY(1,1), String varchar(10));
-- insert a row
INSERT INTO #Sample (String)
VALUES ('Hello');
--Rollback the transactions
ROLLBACK TRANSACTION Creation;
--Now, not only has the row never been inserted, the table was not created!
--This will error
SELECT *
FROM #Sample;
GO
--Now, let's create and COMMIT that table this time:
BEGIN TRANSACTION Creation2;
--Create a table
CREATE TABLE #Sample (ID int IDENTITY(1,1), String varchar(10));
-- insert a row
INSERT INTO #Sample (String)
VALUES ('Hello');
--And commit
COMMIT TRANSACTION Creation2;
GO
--Huzzah! Data!
SELECT *
FROM #Sample;
GO
--And finally, a little play around with some data
BEGIN TRANSACTION Data1;
INSERT INTO #Sample (String)
VALUES ('These'),('are'),('more'),('values');
--Let's Delete the Hello as well
DELETE
FROM #Sample
WHERE ID = 1;
--Inspect mid-transaction
SELECT *
FROM #Sample;
--Rollback!
ROLLBACK TRANSACTION Data1;
--Oh, the values have gone!
SELECT *
FROM #Sample;
--Notice, however, the ID still increments:
INSERT INTO #Sample (String)
VALUES ('Goodbye');
--Goodbye is ID 6
SELECT *
FROM #Sample;
GO
DROP TABLE #Sample;
Hope that helps explain what a ROLLBACK is for you in SQL Server terms.

Truncate existing table within a stored procedure then insert fresh data

I have a stored procedure that returns a result set. After that, I insert this result set into a real table, and then I use that real table to create SSRS reports.
So, something like this:
CREATE PROCEDURE Test
AS
DECLARE @TempTable TABLE(..)
INSERT INTO @TempTable
SELECT...
FROM ...
WHERE ...
SELECT * FROM @TempTable
--============================
INSERT INTO RealTable EXEC [dbo].[Test]
How can I modify this stored procedure so that every time it is executed it truncates the table's existing data and then inserts fresh data?
So I need something like that:
create procedure Test
as
TRUNCATE TABLE RealTable
DECLARE @TempTable TABLE(..)
INSERT INTO @TempTable
SELECT...
FROM...
WHERE...
INSERT INTO RealTable SELECT * FROM @TempTable
Or should I just create agent job that would run command something like:
Truncate Table RealTable
INSERT INTO RealTable EXEC [dbo].[Test]
Am I on the right track in terms of logic?
Don't TRUNCATE. Use a MERGE statement.
CREATE PROCEDURE Test
AS
MERGE RealTable TRGT
USING SourceTable SRCE
ON SRCE.[Column] = TRGT.Column --use columns that can be joined together
WHEN MATCHED THEN UPDATE
SET TRGT.Column1 = SRCE.Column1,
TRGT.Column2 = SRCE.Column2
....................
WHEN NOT MATCHED BY TARGET THEN INSERT
VALUES
(
SRCE.Column1,
SRCE.Column2,
.....................
)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;
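For illustration, here is the same skeleton filled in with hypothetical columns (Id as the join key, Name and Age as data columns):
MERGE RealTable AS TRGT
USING SourceTable AS SRCE
ON SRCE.[Id] = TRGT.[Id]
WHEN MATCHED THEN UPDATE
SET TRGT.[Name] = SRCE.[Name],
TRGT.[Age] = SRCE.[Age]
WHEN NOT MATCHED BY TARGET THEN INSERT ([Id], [Name], [Age])
VALUES (SRCE.[Id], SRCE.[Name], SRCE.[Age])
WHEN NOT MATCHED BY SOURCE THEN
DELETE;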
What's the purpose of the truncate if you are inserting the same data?
What should happen if you have more than 1 concurrent user?
Another thing you can do:
1.
insert into TargetTable
select * from SourceTable
2.
rebuild indexes on TargetTable
3.
exec sp_rename SourceTable, SourceTable_Old
exec sp_rename TargetTable, SourceTable
drop table SourceTable_Old
This is an old way of refreshing an entire table's data without much impact, from the days when table variables were not an option; a consolidated sketch follows below.
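Put together, a sketch of that three-step swap (wrapped in a transaction for safety; THROW requires SQL Server 2012 or later, and the table names are illustrative):
BEGIN TRY
BEGIN TRANSACTION
-- 1. load the fresh data
INSERT INTO TargetTable
SELECT * FROM SourceTable
-- 2. rebuild indexes on the freshly loaded table
ALTER INDEX ALL ON TargetTable REBUILD
-- 3. swap the tables and drop the old copy
EXEC sp_rename 'SourceTable', 'SourceTable_Old'
EXEC sp_rename 'TargetTable', 'SourceTable'
DROP TABLE SourceTable_Old
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
THROW
END CATCH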
This is what you probably need, as you are inserting directly from @TempTable into RealTable.
create procedure Test
as
BEGIN
TRUNCATE TABLE RealTable
INSERT INTO RealTable
SELECT...
FROM someothertable
WHERE...
END

Continue inserting data into tables, skipping duplicate data

set xact_abort off;
begin tran
DECLARE @error int
declare @SQL nvarchar(max)
set @SQL=N'';
select @SQL=some select query to fetch insert scripts
begin try
exec sp_executesql @SQL
commit
end try
begin catch
select @error=@@ERROR
if @error=2627
begin
continue inserting data
end
if @error<>2627
begin
rollback
end
end catch
I am unable to continue inserting data when any duplicate data comes. Is there an alternative way to continue running SQL queries irrespective of duplicate data? I do not want to alter the index or the table.
What you can do is change the insert scripts as you generate them, in this pseudo-statement:
select @SQL=some select query to fetch insert scripts
Change the generation script: instead of generating INSERT INTO ... VALUES(...) statements, generate IF NOT EXISTS(...) INSERT INTO ... VALUES(...) statements
These insert statements should first check if a key already exists in the table. If your insert statements are of the form
INSERT INTO some_table(keycol1,...,keycolN,datacol1,...,datacolM)VALUES(keyval1,...,keyvalN,dataval1,...,datavalM);
You can rewrite those as:
IF NOT EXISTS(SELECT 1 FROM some_table WHERE keycol1=keyval1 AND ... AND keycolN=keyvalN)
INSERT INTO some_table(keycol1,...,keycolN,datacol1,...,datacolM)VALUES(keyval1,...,keyvalN,dataval1,...,datavalM);
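As a concrete sketch, using a hypothetical dbo.Users table keyed on UserId (2627 being the duplicate-key error the catch block checks for):
-- generated statement:
INSERT INTO dbo.Users(UserId, Email) VALUES(42, 'a@example.com');
-- rewritten so a duplicate key is skipped instead of raising error 2627:
IF NOT EXISTS(SELECT 1 FROM dbo.Users WHERE UserId=42)
INSERT INTO dbo.Users(UserId, Email) VALUES(42, 'a@example.com');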
Change the generation script: instead of generating INSERT INTO ... SELECT ..., generate INSERT INTO ... SELECT ... WHERE NOT EXISTS(...) statements
You can change these statements to only insert if the key does not exist in the table yet. Suppose your insert statements are of the form:
INSERT INTO some_table(keycol1,...,keycolN,datacol1,...,datacolM)
SELECT _keycol1,...,_keycolN,datacol1,...,datacolM
FROM <from_clause>;
You can rewrite those as:
INSERT INTO some_table(keycol1,...,keycolN,datacol1,...,datacolM)
SELECT _keycol1,...,_keycolN,datacol1,...,datacolM
FROM <from_clause>
WHERE NOT EXISTS(SELECT 1 FROM some_table WHERE keycol1=_keycol1 AND ... AND keycolN=_keycolN);
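Using the same hypothetical dbo.Users table, the SELECT form would look like:
INSERT INTO dbo.Users(UserId, Email)
SELECT s.UserId, s.Email
FROM dbo.StagedUsers AS s -- hypothetical source table
WHERE NOT EXISTS(SELECT 1 FROM dbo.Users AS u WHERE u.UserId=s.UserId);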
Replace the target table name in @SQL with a temporary table (a so-called staging table), then insert from the temporary table to the target table using WHERE NOT EXISTS(...)
This way you would not have to change the insert generation script. First create a temporary table that has the exact same structure as the target table (not including the primary key). Then replace all instances of the target table name in @SQL with the name of the temporary table. Run the @SQL and afterwards insert from the temporary table to the target table using a WHERE NOT EXISTS(...).
Suppose the target table is named some_table, with key columns keycol1,...,keycolN and data columns datacol1,...,datacolM.
SELECT * INTO #staging_table FROM some_table WHERE 1=0; -- create staging table with same columns as some_table
SET @SQL=REPLACE(@SQL,'some_table','#staging_table');
EXEC sp_executesql @SQL;
INSERT INTO some_table(keycol1,...,keycolN,datacol1,...,datacolM)
SELECT st.keycol1,...,st.keycolN,st.datacol1,...,st.datacolM
FROM #staging_table AS st
WHERE NOT EXISTS(SELECT 1 FROM some_table WHERE keycol1=st.keycol1 AND ... AND keycolN=st.keycolN);
DROP TABLE #staging_table;

How to refactor this deadlock issue?

I ran into a deadlock issue synchronizing a table multiple times in a short period of time. By synchronize I mean doing the following:
Insert data to be synchronized into a temp table
Update existing records in destination table
Insert new records into the destination table
Delete records that are not in the sync table under certain circumstances
Drop temp table
For the INSERT and DELETE statements, I'm using a LEFT JOIN similar to:
INSERT INTO destination_table (fk1, fk2, val1)
SELECT #tmp.fk1, #tmp.fk2, #tmp.val1
FROM #tmp
LEFT JOIN destination_table dt ON dt.fk1 = #tmp.fk1
AND dt.fk2 = #tmp.fk2
WHERE dt.pk IS NULL;
The deadlock graph is reporting the destination_table's primary key is under an exclusive lock. I assume the above query is causing a table or page lock instead of a row lock. How would I confirm that?
I could rewrite the above query with an IN, EXISTS, or EXCEPT construct. Are there any additional ways of refactoring the code? Will refactoring with any of these avoid the deadlock issue? Which one would be best? I'm assuming EXCEPT.
Well, under normal circumstances I can execute this scenario just fine. Below is the test script I created. Are you trying something else?
drop table #destination_table
drop table #tmp
Declare @x int=0
create table #tmp(fk1 int, fk2 int, val int)
set @x=2
while (@x<1000)
begin
insert into #tmp
select @x,@x,100
set @x=@x+3
end
create table #destination_table(fk1 int, fk2 int, val int)
set @x=0 -- reset the counter; without this the loop below never runs and #destination_table stays empty
while (@x<1000)
begin
insert into #destination_table
select @x,@x,100
set @x=@x+1
end
INSERT INTO #destination_table (fk1, fk2, val)
select t.*
FROM #tmp t
LEFT JOIN #destination_table dt ON dt.fk1 = t.fk1
AND dt.fk2 = t.fk2
WHERE dt.fk1 IS NULL
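As for confirming the lock granularity: one way is to query the sys.dm_tran_locks DMV from a second session while the statement holds its locks. A minimal sketch:
SELECT resource_type,  -- OBJECT, PAGE, KEY, RID, ...
request_mode,          -- S, U, X, IX, ...
request_status,
request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
If you see OBJECT or PAGE resources with an X request_mode for the session running the INSERT, the lock has escalated beyond row (KEY/RID) granularity.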
