Temporary table not created from dynamic query execution - sql-server

If I run this dynamic query:
declare @test nvarchar(1000) = 'select * into #tmp7 from bauser'
execute(@test)
and then try to query #tmp7 with:
select * from #tmp7
an error is thrown:
Invalid object name '#tmp7'.
However if I run the same query manually:
select * into #tmp7 from bauser
Everything is OK. Temporary table is created and filled with results.
Why is it not working with dynamic query execution?

SCOPE!
The temporary table exists only in the scope of the dynamically executed query.
If you do want to run the SELECT, put it inside the dynamic query:
declare @test nvarchar(1000) = 'select * into #tmp7 from bauser
select * from #tmp7'
execute(@test)
You can also check whether such an object exists by using this:
select * from sys.sysobjects so where so.name like '%tmp7%'
See this similar question
SQL Server 2005 and temporary table scope
Edit
A temp table IS A TABLE, so yes, you can add columns, indexes, etc. Those tables in fact reside in the tempdb database, and you can even "find" them there (they show up with strange long names), but they are destroyed after the execution of your EXEC.
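For example, a minimal sketch of how to spot them while they still exist (assuming the #tmp7 name from the question):
-- Sketch only: temp tables live in tempdb; the real name carries a long padded suffix
SELECT name, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#tmp7%'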
Maybe your problem lies in trying the dynamic approach, or it is not related to this question at all. Try posting a new question describing what you have and what you need to do, to get further assistance.

If you create a temp table using dynamic SQL, it will not be available outside of the dynamic SQL's scope.
You need to create it outside of the dynamic SQL and then use INSERT INTO to populate the table.
-- use this trick to create the temp table easily.
SELECT * INTO #tmp7
FROM bauser
WHERE 1=2
declare @test nvarchar(1000) = 'insert into #tmp7 select * from bauser'
execute(@test)
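As an alternative sketch (same assumption that bauser is the source table), INSERT ... EXEC can also populate the pre-created table from a dynamic batch:
-- create the empty schema first, then fill it from dynamic SQL
SELECT * INTO #tmp7 FROM bauser WHERE 1 = 2
INSERT INTO #tmp7
EXEC ('select * from bauser')
SELECT * FROM #tmp7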

Related

Passing Table Name as parameter in SQL Server 2012

I created a SQL Server procedure where I am passing table names as parameters.
@Input_Table_Name nvarchar(200),
@Output_Table_Name nvarchar(200)
This is working fine when I use select * from @Input_Table_Name; however, it's not working when I use select * into @Output_Table_Name, Update @Output_Table_Name, or Alter table @Output_Table_Name.
I understand that creating a dynamic query instead of passing the table name as a parameter is good practice; however, in this case I just want to make passing the table name as a parameter work.
Please let me know how we can make this work with Select * into, Alter Table, or Update SQL statements.
As mentioned in the comments, the only way to do this in pure T-SQL is to use dynamic SQL. But if you enable SQLCMD mode in SSMS, you could write something like:
:setvar TableName "myTableName"
GO
SELECT * FROM $(TableName);
--it will be executed as: SELECT * FROM myTableName
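For the pure T-SQL route, a minimal sketch (the table name below is only a placeholder) using QUOTENAME to reduce the injection risk:
DECLARE @Input_Table_Name sysname = N'SomeTable' -- placeholder, not a real table
DECLARE @sql nvarchar(max) = N'SELECT * FROM ' + QUOTENAME(@Input_Table_Name) + N';'
EXEC sp_executesql @sql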

MSSQL Stored Procedure Creating A Temp Table Dynamically

We're trying to write some automated reports that execute SQL statements we have stored in a table. That table's data is normally used by a stored procedure called from our triggers, working on data passed in via temp tables (created in the trigger statements). Each row holds a table name, an SQL statement that works on #TempInserted and #TempDeleted (which correspond to the Inserted and Deleted objects from the trigger), and some e-mail columns that determine where to send the output.
This all works fine from the trigger statements, as each trigger creates the temp tables once, during execution:-
SELECT * INTO #TempInserted FROM INSERTED
SELECT * INTO #TempDeleted FROM DELETED
Then the trigger calls the TriggerHandler stored procedure, passing the table name through as a parameter.
..
However, when I try to create these dynamically from a general stored procedure in order to fire off these statements as reports (so we don't duplicate the statements), in a batch, I'm hitting a problem:-
SELECT * INTO #TempInserted FROM ...
works fine from a defined table, or object (e.g. "FROM INSERTED"), but I've found that it can't get its schema from a dynamic query.
For example, I can do
SELECT TOP 1 * INTO #Test FROM TableA
SELECT * FROM #Test
DROP TABLE #Test
But I can't then do
EXECUTE sp_executesql N'SELECT TOP 1 * INTO #Test FROM TableA'
SELECT * FROM #Test
DROP TABLE #Test
because then #Test is local to the EXECUTE context, and not its parent.
I can, however, do the insert in the EXECUTE (or a stored procedure) because the temp table is in scope, if I've already created the table schema:-
SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
EXECUTE sp_executesql N'INSERT INTO #Test SELECT TOP 10 * FROM TableA'
SELECT * FROM #Test
DROP TABLE #Test
So, that's OK, but my problem comes when I want to dynamically create that schema, depending on the table name we're running the reports for. The INSERT works:-
SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
DECLARE @Table NVARCHAR(20) = 'TableA'
DECLARE @SQL NVARCHAR(200) = N'INSERT INTO #Test SELECT TOP 10 * FROM ' + @Table
EXECUTE sp_executesql @SQL
SELECT * FROM #Test
DROP TABLE #Test
But only if the temp table already has a schema. If I try to conditionally create the schema, depending on the table selected, I get a parsing error:-
DECLARE @Table NVARCHAR(20) = 'TableA'
IF @Table = 'TableA'
SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
IF @Table = 'TableB'
SELECT * INTO #Test FROM TableB WHERE 1 = 2 -- create an empty schema
DECLARE @SQL NVARCHAR(200) = N'INSERT INTO #Test SELECT TOP 10 * FROM ' + @Table
EXECUTE sp_executesql @SQL
SELECT * FROM #Test
DROP TABLE #Test
gives "There is already an object named '#Test' in the database." - so the query parser isn't following the structure of the query, which only actually creates the temp table once. This also holds true if you do
SELECT * INTO #Test FROM ....
DROP TABLE #Test
SELECT * INTO #Test FROM ....
So, is there a way in SQL Server 2012, of either being able to do
SELECT * INTO #Test FROM (dynamic SQL statement)
or to bypass the parser thinking you're creating the object twice
DECLARE @Table NVARCHAR(20) = 'TableA'
IF @Table = 'TableA'
SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
IF @Table = 'TableB'
SELECT * INTO #Test FROM TableB WHERE 1 = 2 -- create an empty schema
or to dynamically create the locally scoped temp table, from an existing database table's schema, where the table name is stored in a variable (all the examples I've found of this use the "SELECT * INTO #Test" code, which as I mentioned requires a statically defined object to create from)?
-------edit--------
For a bit of context, here's an example of why we're doing this:-
A trigger may fire producing a warning e-mail if a certain item type is transacted into a certain location. This works with our current triggers. The reason we're doing this is so that we can, in future, write a UI so the users can add other item types to this list themselves, rather than us having to update the trigger - this also means that we can control/validate the SQL being generated, behind the scenes of a point-and-click interface so that our users don't need to know any SQL and that we can be sure that nothing malicious or that will cause errors will be used.
We also can't do this in the BLL because it's from our ERP system and this would then mean we'd have to make changes to base objects, which is obviously undesirable if it can be avoided.
There is the potential for some of these e-mails to be missed/ignored/forgotten/not actioned, so the users requested the same information on a periodic basis, as well as at the time the transaction occurs:-
So, next, we want to produce, for some of these trigger statements, daily/weekly/monthly reports. Now, obviously, it would be ideal if we could use the existing SQL trigger statements we have set up as then if one were changed it would then automatically affect the periodical reports - stay DRY. It would also mean that if we set up a new trigger, we could automatically include it in the reports by merely inserting a reference to the trigger code, along with the table name, frequency, etc, into the table that drives the periodical reports stored procedure. Again, in future, we could then write a UI, so that users can then request and schedule these reports themselves, with no intervention required from us.
I suspect I'm stuck in a catch-22 situation here. However, I've found a way around it that isn't too messy. I extract the item processing code into another stored procedure, and then compound execution of that onto the dynamic "SELECT INTO" statement - that way it runs in the same execution instance and thus has access to the temp table created in, and local to, that instance:-
SET @SQL = 'SELECT * INTO #TestTable FROM ' + @Table + ' WHERE ' + @WhereClause
SET @SQL = @SQL + '; EXEC ReportProcess'
EXECUTE sp_executesql @SQL
The ReportProcess stored procedure then has access to the temporary table and can process it accordingly.
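For illustration, a minimal sketch of what a ReportProcess-style procedure could look like (the body is assumed, not the author's actual code); it relies on deferred name resolution, so #TestTable only needs to exist when the procedure is called from the dynamic batch:
CREATE PROCEDURE ReportProcess
AS
BEGIN
    -- #TestTable is created by the calling dynamic batch and is in scope here;
    -- running this procedure on its own fails with "Invalid object name '#TestTable'"
    SELECT COUNT(*) AS RowsToReport FROM #TestTable
END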

Dynamically created temporary table does not persist

I want to create a temporary table in a dynamic query and use it afterwards. It will be created from a permanent table:
create table t (a integer);
insert into t values (1);
And the temp table creation is like this:
declare @command varchar(max) = '
select *
into #t
from t
;
select * from #t;
';
execute (@command);
When the @command is executed, the select from the temporary table works.
Now if I select from the temporary table an error message is shown:
select * from #t;
Invalid object name '#t'
If the temporary table is created outside of the dynamic query it works:
select top 0 *
into #t
from t
declare @command varchar(max) = '
insert into #t
select *
from t
';
execute (@command);
select * from #t;
Is it possible to persist a dynamically created temporary table?
You are close in your assumption, although EXECUTE runs the string in a separate scope rather than a different session.
According to MSDN, EXECUTE:
"Executes a command string or character string within a Transact-SQL batch."
So your temporary table only exists inside the scope of the SQL executed by the EXECUTE command.
You can also create global temporary tables. For example, ##MyTemp.
But, global temporary tables are visible to all SQL Server connections.
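A minimal sketch of that approach, reusing the t table from the question; note that ##t persists until every session referencing it closes, so it is dropped explicitly:
declare @command varchar(max) = '
select *
into ##t
from t;
';
execute (@command);
select * from ##t; -- works: the global temp table outlives the dynamic batch
drop table ##t;    -- clean up, since ##t is visible to all connections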

T-SQL Dynamic SQL and Temp Tables

It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQLs in the same stored procedure.
However, I can reference a temp table created by a dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.
A simple 2 table scenario:
I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key that identifies the parent Order. An Order can have 1 to n Items.
I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned in the query as well as the parent Order.
Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.
The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within it to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I could reuse the client code to populate my custom Order and Client objects from the 2 datatables returned.
What I was hoping to do was this:
Construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table, as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:
TempSQL = "
INSERT INTO #ItemsToQuery (OrderId, ItemId)
SELECT
    Orders.OrderId, Items.ItemId
FROM
    Orders, Items
WHERE
    Orders.OrderId = Items.OrderId AND
    /* Some unpredictable Order filters go here */
    AND
    /* Some unpredictable Items filters go here */
"
Then, I would call a stored procedure,
CREATE PROCEDURE GetItemsAndOrders(@tempSQL AS NVARCHAR(MAX))
AS
Execute (@tempSQL) -- to create the #ItemsToQuery table
SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.
Three workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL that, because the table was not created by dynamic SQL, could then be queried without issue. (I could also investigate passing the new table type parameter instead of XML.) However, I would like to avoid passing potentially large lists up to a stored procedure.
2) I could perform all the queries from the client.
The first would be something like this:
SELECT Items.* FROM Orders, Items WHERE Order.OrderId = Items.OrderId AND (dynamic filter)
SELECT Orders.* FROM Orders, Items WHERE Order.OrderId = Items.OrderId AND (dynamic filter)
This still provides me with the ability to reuse my client sided object-population code because the Orders and Items continue to be returned in two different tables.
I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one; a rough sketch follows below.
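(For reference, a minimal sketch of that table-type idea, with purely illustrative names:)
CREATE TYPE ItemIdList AS TABLE (ItemId int PRIMARY KEY);
GO
CREATE PROCEDURE GetItemsAndOrdersByIds
    @ItemIds ItemIdList READONLY
AS
BEGIN
    -- return qualifying Items, then their distinct parent Orders, as two result sets
    SELECT * FROM Items
    WHERE ItemId IN (SELECT ItemId FROM @ItemIds);

    SELECT * FROM Orders
    WHERE OrderId IN (SELECT i.OrderId FROM Items AS i
                      WHERE i.ItemId IN (SELECT ItemId FROM @ItemIds));
END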
If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how best to accomplish this.
You need to create your table first; then it will be available in the dynamic SQL.
This works:
CREATE TABLE #temp3 (id INT)
EXEC ('insert #temp3 values(1)')
SELECT *
FROM #temp3
This will not work:
EXEC (
'create table #temp2 (id int)
insert #temp2 values(1)'
)
SELECT *
FROM #temp2
In other words:
Create temp table
Execute proc
Select from temp table
Here is complete example:
CREATE PROC prTest2 @var VARCHAR(100)
AS
EXEC (@var)
GO
CREATE TABLE #temp (id INT)
EXEC prTest2 'insert #temp values(1)'
SELECT *
FROM #temp
1st Method - Enclose multiple statements in the same Dynamic SQL Call:
DECLARE @DynamicQuery NVARCHAR(MAX)
SET @DynamicQuery = 'Select * into #temp from (select * from tablename) alias
select * from #temp
drop table #temp'
EXEC sp_executesql @DynamicQuery
2nd Method - Use Global Temp Table:
(Careful: you need to take extra care with global temp tables.)
IF OBJECT_ID('tempdb..##temp2') IS NULL
BEGIN
EXEC (
'create table ##temp2 (id int)
insert ##temp2 values(1)'
)
SELECT *
FROM ##temp2
END
Don't forget to delete the ##temp2 object manually once you're done with it:
IF (OBJECT_ID('tempdb..##temp2') IS NOT NULL)
BEGIN
DROP Table ##temp2
END
Note: Don't use this second method if you don't know the full structure of the database.
I had the same issue that @Muflix mentioned. When you don't know the columns being returned, or they are being generated dynamically, what I've done is create a global table with a unique id and then delete it when I'm done with it. That looks something like what's shown below:
DECLARE @DynamicSQL NVARCHAR(MAX)
DECLARE @DynamicTable VARCHAR(255) = 'DynamicTempTable_' + CONVERT(VARCHAR(36), NEWID())
DECLARE @DynamicColumns NVARCHAR(MAX)
--Get "@DynamicColumns", example: SET @DynamicColumns = '[Column1], [Column2]'
SET @DynamicSQL = 'SELECT ' + @DynamicColumns + ' INTO [##' + @DynamicTable + ']' +
' FROM [dbo].[TableXYZ]'
EXEC sp_executesql @DynamicSQL
SET @DynamicSQL = 'IF OBJECT_ID(''tempdb..##' + @DynamicTable + ''' , ''U'') IS NOT NULL ' +
' BEGIN DROP TABLE [##' + @DynamicTable + '] END'
EXEC sp_executesql @DynamicSQL
Certainly not the best solution, but this seems to work for me.
I would strongly suggest you have a read through http://www.sommarskog.se/arrays-in-sql-2005.html
Personally I like the approach of passing a comma-delimited text list, then parsing it with a text-to-table function and joining to it. The temp table approach can work if you create it first in the connection, but it feels a bit messier.
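A minimal sketch of that comma-delimited idea, assuming SQL Server 2016+ where STRING_SPLIT exists (older versions need a user-defined split function); names follow the Orders/Items example above:
DECLARE @ItemIds nvarchar(max) = N'1,2,3' -- list built on the client

SELECT i.*
FROM Items AS i
JOIN STRING_SPLIT(@ItemIds, ',') AS s
    ON i.ItemId = CAST(s.value AS int)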
Result sets from dynamic SQL are returned to the client. I have done this quite a lot.
You're right about issues with sharing data through temp tables and variables and things like that between the SQL and the dynamic SQL it generates.
I think in trying to get your temp table working, you have probably got some things confused, because you can definitely get data from a SP which executes dynamic SQL:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
DECLARE @sql AS VARCHAR(MAX) = 'SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + ''''
EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
Also:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
DECLARE @sql AS VARCHAR(MAX) = 'SELECT * INTO #temp FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + '''; SELECT * FROM #temp;'
EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO

SQL Server 2008: Creating dynamic Synonyms?

In my SQL Server 2008 database I have a number of different tables with the same structure. I query them in different stored procedures. My first try was to pass the table name to the stored procedure, like:
CREATE PROCEDURE MyTest
@tableName nvarchar(255)
AS
BEGIN
SELECT * FROM @tableName
END
But we can't use parameters for table names in SQL Server. So I asked here and tried the suggested solution of using synonyms instead of a parameter for the table name:
CREATE PROCEDURE MyTest
@tableName nvarchar(255)
AS
BEGIN
EXEC SetSimilarityTableNameSynonym @tbl = @tableName;
SELECT * FROM dbo.CurrentSimilarityTable
END
SetSimilarityTableNameSynonym is an SP that points the synonym dbo.CurrentSimilarityTable at the passed value (the specific table name). It looks like:
CREATE PROCEDURE [dbo].[SetSimilarityTableNameSynonym]
@tbl nvarchar(255)
AS
BEGIN
IF object_id('dbo.CurrentSimilarityTable', 'SN') IS NOT NULL
DROP SYNONYM CurrentSimilarityTable;
-- Set the synonym for each existing table
IF @tbl = 'byArticle'
CREATE SYNONYM dbo.CurrentSimilarityTable FOR dbo.similarity_byArticle;
...
END
Now, as you probably see, the problem is concurrent access to the SPs, which will "destroy" each other's assigned synonym. So I tried to create a dynamic synonym for each single SP call, with a GUID via NEWID():
DECLARE @theGUID uniqueidentifier;
SET @theGUID = NEWID()
SET @theSynonym = 'dbo.SimTabSyn_' + CONVERT(nvarchar(255), @theGUID);
BUT ... I can't use the dynamically created name to create a synonym:
CREATE SYNONYM @theSynonym FOR dbo.similarity_byArticle;
doesn't work.
Does anybody have an idea how to get dynamic synonyms working? Is this even possible?
Thanks in advance,
Frank
All I can suggest is to run the CREATE SYNONYM in dynamic SQL. This also means your code is running with quite high rights (db_owner or ddl_admin), so you may need EXECUTE AS OWNER to allow it when you secure the code.
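A minimal sketch of that, reusing the GUID-based naming from the question:
DECLARE @theSynonym sysname = 'SimTabSyn_' + CONVERT(nvarchar(36), NEWID());
DECLARE @sql nvarchar(max) =
    N'CREATE SYNONYM dbo.' + QUOTENAME(@theSynonym) + N' FOR dbo.similarity_byArticle;';
EXEC sp_executesql @sql;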
And how many synonyms will you end up with for the same table? If you have to do it this way, I'd use OBJECT_ID, not NEWID, and test first so you have one synonym per table.
But if you have one synonym per table then why not use the table name...?
What is the point of creating one or more synonyms for the same table, given the table names are already unique?
I'd fix the database design.
Why would you want multiple concurrent users to overwrite the single resource (synonym)?
If your MyTest procedure is taking the table name as a parameter, why not simply use dynamic SQL? You can validate the @tableName against a hardcoded list of tables that this procedure is allowed to select from, or against sys.tables.
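A minimal sketch of that suggestion (the procedure body is assumed, not from the original post):
CREATE PROCEDURE MyTest
    @tableName sysname
AS
BEGIN
    -- allow only table names that actually exist in this database
    IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = @tableName)
    BEGIN
        RAISERROR('Unknown table name.', 16, 1);
        RETURN;
    END;

    DECLARE @sql nvarchar(max) = N'SELECT * FROM dbo.' + QUOTENAME(@tableName) + N';';
    EXEC sp_executesql @sql;
END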
