SQL Server - Create temp table if it doesn't exist

In my SQL Server 2012 environment, I've created a series of stored procedures that pass pre-existing temporary tables among themselves (I have tried different architectures here, but wasn't able to bypass this due to the nature of the requirements / procedures).
What I'm trying to do is, within a stored procedure, check whether a temporary table has already been created and, if not, create it.
My current SQL looks as follows:
IF OBJECT_ID('tempdb..#MyTable') IS NULL
CREATE TABLE #MyTable
(
Col1 INT,
Col2 VARCHAR(10)
...
);
But when I try to run it while the table already exists, I get the error message
There is already an object named '#MyTable' in the database
So it seems SQL Server doesn't simply skip those lines inside the IF statement.
Is there a way to accomplish this - create a temp table if it doesn't already exist, otherwise, use the one already in memory?
Thanks!
UPDATE:
For whatever reason, following @RaduGheorghiu's suggestion from the comments, I found out that the system creates a temporary table with a name along the lines of dbo.#MyTable________________________________________________0000000001B1
Is that why I can't find it? Is there any way to change that? This is new to me....
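For illustration, here is a minimal sketch (run in the same session) of one way to see that decorated name; the padded, suffixed name is how SQL Server keeps per-session temp table names unique in tempdb, and OBJECT_ID('tempdb..#MyTable') still resolves it:
-- List temp tables whose names start with #MyTable; the stored name is padded
-- with underscores and a per-session suffix, but OBJECT_ID still finds it.
SELECT name, object_id, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#MyTable%';

SELECT OBJECT_ID('tempdb..#MyTable') AS ResolvedObjectId;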

Following the link here: http://weblogs.sqlteam.com/mladenp/archive/2008/08/21/SQL-Server-2005-temporary-tables-bug-feature-or-expected-behavior.aspx
It seems you need to use a GO statement to split the drop and the create into separate batches.
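A minimal sketch of what that looks like, assuming the drop and the create are split into separate batches (note that GO is a client-side batch separator, so this only applies to scripts, not to the body of a stored procedure):
-- Batch 1: drop the temp table if it already exists
IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL
DROP TABLE #MyTable;
GO
-- Batch 2: create the table; it no longer exists when this batch is compiled
CREATE TABLE #MyTable
(
Col1 INT,
Col2 VARCHAR(10)
);
GO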

You meant to use IS NOT NULL, I think... this is commonly used to clear temp tables so you don't get the error you mentioned in your OP.
IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL DROP TABLE #MyTable
CREATE TABLE #MyTable
(
Col1 INT,
Col2 VARCHAR(10)
);
The big difference is the DROP TABLE statement after the existence check. Also, creating your table without filling it with data doesn't make OBJECT_ID return NULL:
DROP TABLE #MyTable
CREATE TABLE #MyTable
(
Col1 INT,
Col2 VARCHAR(10)
);
IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL
SELECT 1

Try wrapping your actions in a begin...end block:
if object_id('tempdb..#MyTable') is null
begin
create table #MyTable (
Col1 int
, Col2 varchar(10)
);
end

This seems odd, but it works when I try it
IF(OBJECT_ID('tempdb..#Test') IS NULL) --check if it exists
BEGIN
IF(1 = 0)--this will never actually run, but it tricks the parser into allowing the CREATE to run
DROP TABLE #Test;
PRINT 'Create table';
CREATE TABLE #Test
(
ID INT NOT NULL PRIMARY KEY
);
END
IF(NOT EXISTS(SELECT 1 FROM #Test))
INSERT INTO #Test(ID)
VALUES(1);
SELECT *
FROM #Test;
--Try dropping the table and test again
--DROP TABLE #Test;

Related

INSERTED table gives an error when using a view with an INSTEAD OF INSERT trigger

I am trying to run the following merge statement to insert a row:
MERGE sales.Widget
USING (
VALUES ('19668651', 4.75))
AS widg (WidgetId, WidgetCost)
ON 1=0
WHEN NOT MATCHED THEN
INSERT (WidgetId, WidgetCost)
VALUES (widg.WidgetId, widg.WidgetCost)
OUTPUT INSERTED.WidgetId
INTO @inserted;
GO
I am confused by the error I am getting:
The column reference "inserted.WidgetId" is not allowed because it refers to a base table that is not being modified in this statement.
I thought that the inserted table was just an in-memory table of the values being passed in to the merge statement.
Why then would it care if I am modifying a "base" table as long as the value was passed in?
I can clearly tell that this is related to the fact that I have a view with an INSTEAD OF INSERT trigger on it (because it works fine against a normal table).
But why does SQL Server not just return the value that was passed in? (WidgetId in this case.)
Here is the script to reproduce the error:
CREATE SCHEMA sales
GO
-- Create the base table
CREATE TABLE sales.Widget_OLD(
WIDGET_ID int NOT NULL,
WIDGET_COST money NOT NULL,
CONSTRAINT PK_Widget PRIMARY KEY CLUSTERED (WIDGET_ID ASC)
)
GO
-- Create the overlay view
CREATE VIEW sales.Widget AS
SELECT widg.WIDGET_ID AS WidgetId, widg.WIDGET_COST AS WidgetCost
FROM sales.Widget_OLD widg
GO
-- create the instead of insert trigger
CREATE TRIGGER sales.InsertWidget ON sales.Widget
INSTEAD OF INSERT AS
BEGIN
INSERT INTO sales.Widget_OLD (WIDGET_ID, WIDGET_COST)
SELECT Inserted.WidgetId, inserted.WidgetCost
FROM Inserted
END
GO
DECLARE @inserted TABLE (WidgetId varchar(11) NOT NULL);
MERGE sales.Widget
USING (
VALUES ('19668651', 4.75))
AS widg (WidgetId, WidgetCost)
ON 1=0
WHEN NOT MATCHED THEN
INSERT (WidgetId, WidgetCost)
VALUES (widg.WidgetId, widg.WidgetCost)
OUTPUT INSERTED.WidgetId
INTO @inserted;
GO
-- Clean up
DROP TRIGGER sales.InsertWidget
DROP VIEW sales.Widget
DROP TABLE sales.Widget_OLD
DROP SCHEMA sales
go
NOTE: This is from my Entity Framework Core application when I try to do 3+ inserts (see this question for more details). That question is about how to stop EF Core from using MERGE; this one is about understanding what is happening.

Creating temp tables in Sybase

I am running into an issue with creating temp tables in a Sybase DB. We have a SQL script in which we create a temp table, insert/update it, and do a select * from it at the end to get some results. We invoke this SQL from the service layer using Spring JdbcTemplate. The first run works fine, but subsequent runs fail with the error
cannot create temporary table <name>. Prefix name is already in use by another temporary table
This is how I am checking if table exists:
if object_id('#temp_table') is not null
drop table #temp_table
create table #temp_table(
...
)
Anything I am missing here?
Might not be a great response, but I ran into that problem too and I have two ways around it:
1. Do the IF OBJECT_ID() ... DROP TABLE as a separate execute prior to the query.
2. Do the DROP TABLE without the IF OBJECT_ID() check right after your query (see the sketch below).
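A minimal sketch of the second approach (the column is just a placeholder):
create table #temp_table (
col1 int
)
-- ... insert/update #temp_table and select * from #temp_table here ...
-- drop the table unconditionally at the end, so it no longer exists
-- when the next run's batch is compiled
drop table #temp_table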
You are really close, but for temp tables the check needs the database name (tempdb) prefix too.
IF OBJECT_ID('tempdb..#Results') IS NOT NULL
DROP TABLE #Results
GO
It would be the same if you were checking if a user table in another database existed.
IF OBJECT_ID('myDatabase..myTable') IS NOT NULL
DROP TABLE myDatabase..myTable
GO
NOTE: A bit more info on BigDaddyO's first suggestion ...
The code snippet you've provided, when submitted as a SQL batch, is parsed as a single unit of work prior to execution. The net result is that if #temp_table already exists when the batch is submitted, the compilation of the create table command will generate the error. This behavior can be seen in the following example:
create table #mytab (a int, b varchar(30), c datetime)
go
-- your code snippet; during compilation the 'create table' generates the error
-- because ... at the time of compilation #mytab already exists:
if object_id('#mytab') is not NULL
drop table #mytab
create table #mytab (a int, b varchar(30), c datetime)
go
Msg 12822, Level 16, State 1:
Server 'ASE200', Line 3:
Cannot create temporary table '#mytab'. Prefix name '#mytab' is already in use by another temporary table '#mytab'.
-- same issue occurs if we pull the 'create table' into its own batch:
create table #mytab (a int, b varchar(30), c datetime)
go
Msg 12822, Level 16, State 1:
Server 'ASE200', Line 1:
Cannot create temporary table '#mytab'. Prefix name '#mytab' is already in use by another temporary table '#mytab'.
As BigDaddyO has suggested, one way to get around this is to break your code snippet into two separate batches, eg:
-- test/drop the table in one batch:
if object_id('#mytab') is not NULL
drop table #mytab
go
-- create the table in a new batch; during compilation we don't get an error
-- because #mytab does not exist at this point:
create table #mytab (a int, b varchar(30), c datetime)
go

Temp tables: CREATE vs. SELECT INTO

I've searched and found this article about temporary tables in SQL Server because I came across a line in one of our stored procedures saying:
SELECT Value AS SomeId INTO #SomeTable FROM [dbo].[SplitIds](@SomeIds, ';')
I know that #SomeTable is stored in tempdb as a temporary table. However, I don't understand why we don't have to use CREATE TABLE #SomeTable first, as it is written in the mentioned article. Our code works fine; I just don't get why it is enough to use SELECT ... INTO #SomeTable. What would be the consequence if I added CREATE TABLE #SomeTable at the beginning? Would we get any difference in performance? Would the table be stored in another location?
Select ... into [table] uses the properties of the dataset generated from the Select statement to create a temporary table and subsequently fill the table.
The alternative to using Select ... into [table] is to use a Create Table statement followed by an Insert Into statement. Explicitly creating the table offers more control and precision.
Using a Select ... into [Table] may seem like a no-brainer, but there are situations where Select ... into [Table] can be problematic.
For instance, when you are going to create a temporary table and insert additional rows at a later time, using the Select ... into [Table] syntax can cause problems, especially with string-based and nullable fields.
As an example of the limitations of Select ... into [table], the script below creates a temporary table with two fields, First_Name and Last_Name. Next, an Insert statement attempts to add another record to the temporary table, but it fails because the values would be truncated.
Select 'Bob' as First_Name
, 'Smith' as Last_Name
Into #tempTable;
Insert into #tempTable (First_Name, Last_Name)
Select 'Christopher' as First_Name
, 'Brown' as Last_Name;
The script fails because the Select ... into [table] statement creates a table equivalent to the following script:
Create Table #tempTable (
First_Name varchar(3) Not Null,
Last_Name varchar(5) Not Null
);
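By contrast, a minimal sketch of the Create Table / Insert Into alternative, with column sizes chosen up front (the varchar(50) lengths are just an illustration):
Create Table #tempTable (
First_Name varchar(50) Not Null,
Last_Name varchar(50) Not Null
);
Insert into #tempTable (First_Name, Last_Name)
Select 'Bob' as First_Name
, 'Smith' as Last_Name;
Insert into #tempTable (First_Name, Last_Name)
Select 'Christopher' as First_Name
, 'Brown' as Last_Name;
-- Both rows fit because the column widths were declared explicitly.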

What is the preferred method of creating, using and dropping temp tables in SQL Server?

When using temp tables in SQL Server stored procs, which is the preferred practice:
1) Create the temp table, populate it, use it then drop it
CREATE TABLE #MyTable ( ... )
-- Do stuff
DROP TABLE #MyTable
2) Check if it exists, drop it if it does, then create and use it
IF object_id('tempdb..#MyTable') IS NOT NULL
DROP TABLE #MyTable
CREATE TABLE #MyTable ( ... )
3) Create it and let SQL Server clean it up when it goes out of scope
CREATE TABLE #MyTable ( ... )
-- Do Stuff
I read in this answer and its associated comments that, in situations where the temp table is reused, SQL Server will truncate the table but keep the structure to save time.
My stored proc is likely to be called pretty frequently, but it only contains a few columns, so I don't know how advantageous this really is in my situation.
You could test and see if one method outperforms another in your scenario. I've heard about this reuse benefit but I haven't performed any extensive tests myself. (My gut instinct is to explicitly drop any #temp objects I've created.)
In a single stored procedure you should never have to check if the table exists - unless it is also possible that the procedure is being called from another procedure that might have created a table with the same name. This is why it is good practice to name #temp tables meaningfully instead of using #t, #x, #y etc.
I follow this approach:
IF object_id('tempdb..#MyTable') IS NOT NULL
DROP TABLE #MyTable
CREATE TABLE #MyTable ( ... )
-- Do stuff
IF object_id('tempdb..#MyTable') IS NOT NULL
DROP TABLE #MyTable
Reason: if an error occurs in the sproc and the temp table it created is not dropped, then when the same sproc is called again without checking for (and dropping) the existing table, it will raise an error that the table cannot be created and will never execute successfully until the table is dropped. So always check for the existence of an object before creating it.
When using temp tables my preferred practice is actually a combination of 1 and 2.
IF object_id('tempdb..#MyTable') IS NOT NULL
DROP TABLE #MyTable
CREATE TABLE #MyTable ( ... )
-- Do stuff
IF object_id('tempdb..#MyTable') IS NOT NULL
DROP TABLE #MyTable

Function columns cached

It looks like select * in a UDF is dangerous. Consider this script:
create table TestTable (col1 int, col2 varchar(1))
insert into TestTable values (123, 'a')
go
create function TestFunction
(
@param1 bit
)
returns table
as
return
(
select * from TestTable
)
go
select * from TestFunction(0)
alter table TestTable
add col3 varchar(1)
select * from TestFunction(0)
drop function TestFunction
drop table TestTable
go
You will get two result sets, both with the same number of columns, even though I added col3. If the table is recreated and an extra column is inserted in the middle, everything will shift one column over, showing the data under the wrong column name. In other words, the function's output columns stay the same, but the underlying data has an extra column.
I wasn't able to find any information about this, but it seems to me the only way to avoid it is to always specify your columns in a function.
So my question is, what exactly does a UDF cache? It seems the output columns are cached; is anything else? Also, is there any way to still use select * but prevent this problem? Thanks.
Add exec sp_refreshsqlmodule 'TestFunction' before the second call.
The function's metadata does not update automatically. Run an ALTER statement on the function to refresh it.
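For example, a sketch of where that call could go in the script above (sp_refreshsqlmodule is the built-in procedure that refreshes the cached metadata of a non-schema-bound module):
alter table TestTable
add col3 varchar(1)
go
-- refresh the function's cached metadata so it picks up col3
exec sp_refreshsqlmodule 'TestFunction'
go
select * from TestFunction(0) -- now returns col3 as well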
