Snowflake conditional code: adding a new column (idempotent script)

Let's assume we have a table that contains data as below:
CREATE TABLE tab(i INT PRIMARY KEY);
INSERT INTO tab(i) VALUES(1),(2),(3);
SELECT * FROM tab;
Now my goal is to create a SQL script that will add a new column to the existing table:
ALTER TABLE IF EXISTS tab ADD COLUMN col VARCHAR(10);
Everything works as intended, except that I would like to be able to run the script multiple times while the effect takes place only once (idempotence).
If I try to run it again I will get:
SQL compilation error: column COL already exists
Normally I would use one of these approaches:
a) Using a control structure (IF) to check metadata tables before executing the query:
-- (T-SQL)
IF NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS
              WHERE TABLE_NAME = 'TAB' AND COLUMN_NAME = 'COL')
BEGIN
    ALTER TABLE tab ADD col VARCHAR(10);
END;
db<>fiddle demo
I have not found an IF statement in Snowflake's documentation.
b) A SQL dialect that supports IF NOT EXISTS syntax:
-- PostgreSQL
ALTER TABLE IF EXISTS tab ADD COLUMN IF NOT EXISTS col VARCHAR(10);
db<>fiddle demo
Most Snowflake SQL commands support IF EXISTS/OR REPLACE clauses, which means the dialect was written in a way that allows running scripts multiple times.
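For instance, all of these can be rerun safely (tab2 and v are just illustrative names):
CREATE TABLE IF NOT EXISTS tab2(i INT);    -- no-op if the table already exists
CREATE OR REPLACE VIEW v AS SELECT 1 AS x; -- recreated on every run
DROP TABLE IF EXISTS tab2;                 -- no-op if the table is already gone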
I was considering using code like:
CREATE OR REPLACE TABLE tab
AS
SELECT i, CAST(NULL AS VARCHAR(10)) AS col
FROM tab;
This approach, on the other hand, causes an unnecessary table rewrite and does not preserve metadata (like the primary key).
Is there a way to achieve a similar effect in Snowflake? Preferably by using conditional code (adding a column is just an example).

You can use something like this. It will report the failure to add the column if it already exists, but it will handle the error so it won't interfere with the execution of a SQL script:
create or replace procedure SafeAddColumn(tableName string, columnName string, columnType string)
returns string
language JavaScript
as
$$
    var sql_command = "ALTER TABLE IF EXISTS " + TABLENAME + " ADD COLUMN " + COLUMNNAME + " " + COLUMNTYPE + ";";
    var strOut;
    try {
        var stmt = snowflake.createStatement( {sqlText: sql_command} );
        var resultSet = stmt.execute();
        while (resultSet.next()) {
            strOut = resultSet.getColumnValue(1);
        }
    }
    catch (err) {
        strOut = "Failed: " + err; // Return a success/error indicator.
    }
    return strOut;
$$;
CREATE OR REPLACE TABLE tab(i INT PRIMARY KEY);
INSERT INTO tab(i) VALUES(1),(2),(3);
SELECT * FROM tab;
call SafeAddColumn('tab', 'col', 'varchar(10)');
select * from tab;
call SafeAddColumn('tab', 'col', 'varchar(10)');

It is possible to write conditional code using Snowflake Scripting.
Working with Branching Constructs
Snowflake Scripting supports the following branching constructs:
IF-THEN-ELSEIF-ELSE
CASE
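The examples below use IF; for completeness, a minimal CASE sketch (the variable and values here are made up, following the documented syntax):
EXECUTE IMMEDIATE $$
DECLARE
    flavor VARCHAR DEFAULT 'vanilla';
BEGIN
    CASE (flavor)
        WHEN 'vanilla' THEN
            RETURN 'plain';
        WHEN 'chocolate' THEN
            RETURN 'fancy';
        ELSE
            RETURN 'unknown';
    END CASE;
END;
$$;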
Setup:
CREATE OR REPLACE TABLE PUBLIC.tab(i INT PRIMARY KEY);
INSERT INTO tab(i) VALUES(1),(2);
SELECT * FROM tab;
-- i
-- 1
-- 2
Code that can be rerun multiple times (subsequent runs have no effect):
-- Snowsight
BEGIN
    IF (NOT EXISTS(SELECT *
                   FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = 'TAB'
                     AND TABLE_SCHEMA = 'PUBLIC'
                     AND COLUMN_NAME = 'COL')) THEN
        ALTER TABLE IF EXISTS tab ADD COLUMN col VARCHAR(10);
    END IF;
END;
EXECUTE IMMEDIATE is required if the code is run using the "classic web interface":
EXECUTE IMMEDIATE $$
BEGIN
    IF (NOT EXISTS(SELECT *
                   FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = 'TAB'
                     AND TABLE_SCHEMA = 'PUBLIC'
                     AND COLUMN_NAME = 'COL')) THEN
        ALTER TABLE IF EXISTS tab ADD COLUMN col VARCHAR(10);
    END IF;
END;
$$
After:
SELECT * FROM tab;
-- i col
-- 1 NULL
-- 2 NULL

Although Snowflake has implemented a pretty rich mix of DDL and DML for their SQL implementation, when it comes to procedural code they seem to be relying on JavaScript, at least at this point. But you should be able to accomplish your idempotent ALTER script through a JavaScript stored procedure.
I'm afraid I lack the JavaScript skills to provide you with a working sample myself at this point. The organization I'm with recently adopted Snowflake, though, so I'll share some of my research.
Here's a recent blog post on just this question:
Snowflake Control Structures – IF, DO, WHILE, FOR
Snowflake's overview documentation regarding stored procedures:
Stored Procedures
On the page above, what is currently the third link down contains extensive sample code.
Working With Stored Procedures
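For what it's worth, a minimal sketch of the metadata-check variant in JavaScript (untested; the table and column names are hard-coded for brevity, and the SafeAddColumn answer above shows a parameterized take on the same idea):
CREATE OR REPLACE PROCEDURE AddColIfMissing()
RETURNS STRING
LANGUAGE JAVASCRIPT
AS
$$
    // Check the metadata first, then alter only when the column is absent.
    var check = snowflake.createStatement({sqlText:
        "SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS " +
        "WHERE TABLE_NAME = 'TAB' AND COLUMN_NAME = 'COL'"});
    var rs = check.execute();
    rs.next();
    if (rs.getColumnValue(1) == 0) {
        snowflake.createStatement({sqlText:
            "ALTER TABLE tab ADD COLUMN col VARCHAR(10)"}).execute();
        return 'column added';
    }
    return 'column already present';
$$;
CALL AddColIfMissing();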

Building on Lukasz's answer, to include the database in the condition you can use:
execute immediate $$
BEGIN
    IF (
        NOT EXISTS(
            SELECT *
            FROM "INFORMATION_SCHEMA"."COLUMNS"
            WHERE
                "TABLE_CATALOG" = 'DB_NAME'
                AND "TABLE_SCHEMA" = 'SCHEMA_NAME'
                AND "TABLE_NAME" = 'TABLE_NAME'
                AND "COLUMN_NAME" = 'col_name'
        )
    ) THEN
        ALTER TABLE IF EXISTS "DB_NAME"."SCHEMA_NAME"."TABLE_NAME"
            ADD COLUMN "col_name" VARCHAR NULL;
    END IF;
END;
$$;

Related

Is there any equivalent in Snowflake for OBJECT_ID() in SQL Server?

The OBJECT_ID function returns the database object identification number of a schema-scoped object in SQL Server.
Could anyone suggest an equivalent function in Snowflake that can be used inside a stored procedure?
I need to migrate the below code to Snowflake:
CREATE PROCEDURE test_procedure
    (@Var1 INT)
AS
BEGIN
    IF @Var1 = 1
    BEGIN
        IF OBJECT_ID('db1.Table1') IS NOT NULL
            DROP TABLE Table1;
    END;
END;
The pattern:
IF OBJECT_ID('db1.Table1') IS NOT NULL
DROP TABLE Table1;
is the old way of checking whether a table exists before trying to drop it.
Currently both SQL Server and Snowflake support the IF EXISTS clause:
DROP TABLE IF EXISTS <table_name>;
db<>fiddle demo
The closest I can think of would be a function like:
CREATE OR REPLACE FUNCTION OBJECT_ID(NAME VARCHAR) RETURNS STRING
LANGUAGE SQL
AS
$$
    SELECT OBJECT_SCHEMA || '.' || OBJECT_NAME
    FROM INFORMATION_SCHEMA.OBJECT_PRIVILEGES
    WHERE (OBJECT_SCHEMA || '.' || OBJECT_NAME) = NAME
$$;
The Snowflake OBJECT_PRIVILEGES view is the closest INFORMATION_SCHEMA view listing all database elements.
However, as noted in the previous answers, the IF EXISTS clause on CREATE and DROP statements makes this function unnecessary.
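A hypothetical usage sketch of the function above (a scalar SQL UDF returns NULL when its query matches no rows):
SELECT IFF(OBJECT_ID('PUBLIC.TAB') IS NOT NULL, 'exists', 'missing');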

How should I test for the existence of a table and fail a script if the table does not exist?

This is what I came up with. Seems very hacky but it is what someone in the Snowflake forums suggested. Surely there's a better way.
use database baz;
-- Divide-by-zero hack to check the existence of a table and bail if it isn't present.
-- Improvements to this are welcome; SnowSQL doesn't have a clean way to bail like RAISERROR.
select 1/(select count(*) from information_schema.tables
          where table_schema = 'foo'
          and table_name = 'bar')
This script is intended to run after a setup script, to make sure the required tables exist once that script has been run.
The main point is that, as of today, Snowflake SQL does not support control structures (IF/WHILE/FOR/SWITCH/TRY/CATCH). This will probably change in the future, but for now you can use JavaScript stored procedures.
Overview of Stored Procedures
Snowflake stored procedures use JavaScript and, in most cases, SQL:
JavaScript provides the control structures (branching and looping).
SQL is executed by calling functions in a JavaScript API.
The pseudocode:
CREATE OR REPLACE PROCEDURE my_proc(...)
RETURNS STRING
LANGUAGE JAVASCRIPT
AS
$$
    var tab_exists = `select count(*) from information_schema.tables
                      where table_schema ILIKE 'foo'
                      and table_name ILIKE 'bar'`;
    var stmt = snowflake.createStatement( {sqlText: tab_exists} );
    var resultSet = stmt.execute();
    resultSet.next();
    var result = resultSet.getColumnValue(1);
    if (result > 0) {
        ...
    }
    else {
        return 'table not found';
    };
$$;
-- invocation
use database baz;
call my_proc();
EDIT:
With Snowflake Scripting it is much easier to write such scripts:
DECLARE
    my_exception EXCEPTION(-20001, 'table_not_found');
BEGIN
    IF (EXISTS (SELECT *
                FROM information_schema.tables
                WHERE table_schema ILIKE 'PUBLIC'
                  AND table_name ILIKE 'NON_EXISTING_TAB'))
    THEN
        RETURN 'TABLE_EXISTS';
    ELSE
        --RETURN 'TABLE_NOT_FOUND';
        RAISE my_exception;
    END IF;
END;
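For completeness, a sketch of catching the raised exception with an EXCEPTION block (same setup as above):
EXECUTE IMMEDIATE $$
DECLARE
    my_exception EXCEPTION(-20001, 'table_not_found');
BEGIN
    IF (NOT EXISTS (SELECT *
                    FROM information_schema.tables
                    WHERE table_schema ILIKE 'PUBLIC'
                      AND table_name ILIKE 'NON_EXISTING_TAB')) THEN
        RAISE my_exception;
    END IF;
    RETURN 'TABLE_EXISTS';
EXCEPTION
    WHEN my_exception THEN
        RETURN 'handled: table not found';
END;
$$;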
If you are testing for table existence by expecting an exception if the table doesn't exist, you could just run a select on the table.
This will generate an exception if the table doesn't exist:
select count(*)
from schema.table
The proposal in the question to look at the INFORMATION_SCHEMA is cool if you need a query that doesn't generate an exception, but once you divide by 0, it's clear that you were asking for one.

MSSQL Stored Procedure Creating A Temp Table Dynamically

We're trying to write some automated reports that execute SQL statements we have stored in a table. That table data is normally used by a stored procedure called from triggers, working on data passed in via temp tables (created in the trigger statements): it holds a table name, an SQL statement that works on #TempInserted and #TempDeleted (which correspond to the Inserted and Deleted objects from the trigger), and then some e-mail columns that determine where to send the output.
This all works fine from the trigger statements, as each creates each temp table once, during execution:-
SELECT * INTO #TempInserted FROM INSERTED
SELECT * INTO #TempDeleted FROM DELETED
Then the trigger calls the TriggerHandler stored procedure, passing the table name through as a parameter.
..
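A hedged sketch of that trigger pattern (the table and trigger names are hypothetical; TriggerHandler is the procedure mentioned above):
CREATE TRIGGER tr_SomeTable_Handler ON SomeTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SELECT * INTO #TempInserted FROM INSERTED
    SELECT * INTO #TempDeleted FROM DELETED
    EXEC TriggerHandler 'SomeTable' -- works on #TempInserted/#TempDeleted
END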
However, when I try to create these dynamically from a general stored procedure, in order to fire off these statements as reports in a batch (so we don't duplicate the statements), I'm hitting a problem:-
SELECT * INTO #TempInserted FROM ...
works fine from a defined table or object (e.g. "FROM INSERTED"), but I've found that it can't get its schema from a dynamic query.
For example, I can do
SELECT TOP 1 * INTO #Test FROM TableA
SELECT * FROM #Test
DROP TABLE #Test
But I can't then do
EXECUTE sp_executesql N'SELECT TOP 1 * INTO #Test FROM TableA'
SELECT * FROM #Test
DROP TABLE #Test
because then #Test is local to the EXECUTE context, and not its parent.
I can, however, do the insert in the EXECUTE (or a stored procedure) because the temp table is in scope, if I've already created the table schema:-
SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
EXECUTE sp_executesql N'INSERT INTO #Test SELECT TOP 10 * FROM TableA'
SELECT * FROM #Test
DROP TABLE #Test
So, that's OK, but my problem comes when I want to dynamically create that schema, depending on the table name we're running the reports for. The INSERT works:-
SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
DECLARE @Table NVARCHAR(20) = 'TableA'
DECLARE @SQL NVARCHAR(200) = N'INSERT INTO #Test SELECT TOP 10 * FROM ' + @Table
EXECUTE sp_executesql @SQL
SELECT * FROM #Test
DROP TABLE #Test
But only if the temp table already has a schema. If I try to conditionally create the schema, depending on the table selected, I get a parsing error:-
DECLARE @Table NVARCHAR(20) = 'TableA'
IF @Table = 'TableA'
    SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
IF @Table = 'TableB'
    SELECT * INTO #Test FROM TableB WHERE 1 = 2 -- create an empty schema
DECLARE @SQL NVARCHAR(200) = N'INSERT INTO #Test SELECT TOP 10 * FROM ' + @Table
EXECUTE sp_executesql @SQL
SELECT * FROM #Test
DROP TABLE #Test
gives "There is already an object named '#Test' in the database." - so the query parser isn't following the structure of the query, which only actually creates the temp table once. This also holds true if you do
SELECT * INTO #Test FROM ....
DROP TABLE #Test
SELECT * INTO #Test FROM ....
So, is there a way in SQL Server 2012, of either being able to do
SELECT * INTO #Test FROM (dynamic SQL statement)
or to bypass the parser thinking you're creating the object twice
DECLARE @Table NVARCHAR(20) = 'TableA'
IF @Table = 'TableA'
    SELECT * INTO #Test FROM TableA WHERE 1 = 2 -- create an empty schema
IF @Table = 'TableB'
    SELECT * INTO #Test FROM TableB WHERE 1 = 2 -- create an empty schema
or to dynamically create the locally scoped temp table, from an existing database table's schema, where the table name is stored in a variable (all the examples I've found of this use the "SELECT * INTO #Test" code, which as I mentioned requires a statically defined object to create from)?
-------edit--------
For a bit of context, here's an example of why we're doing this:-
A trigger may fire producing a warning e-mail if a certain item type is transacted into a certain location. This works with our current triggers. The reason we're doing this is so that we can, in future, write a UI so the users can add other item types to this list themselves, rather than us having to update the trigger - this also means that we can control/validate the SQL being generated, behind the scenes of a point-and-click interface so that our users don't need to know any SQL and that we can be sure that nothing malicious or that will cause errors will be used.
We also can't do this in the BLL because it's from our ERP system and this would then mean we'd have to make changes to base objects, which is obviously undesirable if it can be avoided.
There is the potential for some of these e-mails to be missed/ignored/forgotten/not-actioned, so the users requested the same information on a periodic basis, as well as as-at the transaction occurring:-
So, next, we want to produce, for some of these trigger statements, daily/weekly/monthly reports. Now, obviously, it would be ideal if we could use the existing SQL trigger statements we have set up as then if one were changed it would then automatically affect the periodical reports - stay DRY. It would also mean that if we set up a new trigger, we could automatically include it in the reports by merely inserting a reference to the trigger code, along with the table name, frequency, etc, into the table that drives the periodical reports stored procedure. Again, in future, we could then write a UI, so that users can then request and schedule these reports themselves, with no intervention required from us.
I suspect I'm stuck in a catch-22 situation here. However, I've found a way around it that isn't too messy: I extract the item-processing code into another stored procedure, and then append execution of that to the dynamic "SELECT INTO" statement - that way it runs in the same execution instance and thus has access to the temp table created in, and local to, that instance:-
SET @SQL = 'SELECT * INTO #TestTable FROM ' + @Table + ' WHERE ' + @WhereClause
SET @SQL = @SQL + '; EXEC ReportProcess'
EXECUTE sp_executesql @SQL
The ReportProcess stored procedure then has access to the temporary table and can process it accordingly.
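To make that concrete, a minimal sketch of what ReportProcess could look like (the real body isn't shown in the answer):
CREATE PROCEDURE ReportProcess
AS
BEGIN
    -- #TestTable is visible here because this procedure executes inside
    -- the same sp_executesql batch that created the temp table.
    SELECT * FROM #TestTable
END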

How to create a trigger for all tables in PostgreSQL?

I have a trigger, but I need to associate it with all tables of my Postgres database.
Is there a command like the one below?
CREATE TRIGGER delete_data_alldb
BEFORE DELETE
ON ALL DATABASE
FOR EACH ROW
EXECUTE PROCEDURE delete_data();
Well, there is no database-wide trigger creation, but for bulk admin operations like this you can use the PostgreSQL system tables to generate the queries for you instead of writing them by hand.
In this case you could run:
SELECT
    'CREATE TRIGGER delete_data_alldb BEFORE DELETE ON '
    || tab_name
    || ' FOR EACH ROW EXECUTE PROCEDURE delete_data();' AS trigger_creation_query
FROM (
    SELECT
        quote_ident(table_schema) || '.' || quote_ident(table_name) as tab_name
    FROM
        information_schema.tables
    WHERE
        table_schema NOT IN ('pg_catalog', 'information_schema')
        AND table_schema NOT LIKE 'pg_toast%'
) tablist;
This will get you a set of strings which are SQL commands like (trigger names are per-table in PostgreSQL, so the same name can be reused):
CREATE TRIGGER delete_data_alldb BEFORE DELETE ON schema1.table1 FOR EACH ROW EXECUTE PROCEDURE delete_data();
CREATE TRIGGER delete_data_alldb BEFORE DELETE ON schema1.table2 FOR EACH ROW EXECUTE PROCEDURE delete_data();
CREATE TRIGGER delete_data_alldb BEFORE DELETE ON schema1.table3 FOR EACH ROW EXECUTE PROCEDURE delete_data();
CREATE TRIGGER delete_data_alldb BEFORE DELETE ON schema2.table1 FOR EACH ROW EXECUTE PROCEDURE delete_data();
CREATE TRIGGER delete_data_alldb BEFORE DELETE ON schema2."TABLE2" FOR EACH ROW EXECUTE PROCEDURE delete_data();
...
etc
You just need to run them all at once (either via psql or pgAdmin).
Now some explanation:
I select the names of the tables in my database using the information_schema.tables system view. Because it holds data about literally all tables, remember to exclude the pg_catalog and information_schema schemas, as well as toast tables, from your select.
I use the quote_ident(text) function, which will put a string inside double quotes ("") if necessary (i.e. names with spaces or capital letters require that); see the small illustration after this list.
When I have the list of table names, I just concatenate them with some static strings to get my SQL commands.
I wrote the command using a sub-query because I want you to get a better idea of what's going on here. You may write a single query by putting quote_ident(table_schema) || '.' || quote_ident(table_name) in place of tab_name.
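A quick illustration of quote_ident behaviour (hypothetical names):
SELECT quote_ident('my_table');  -- my_table    (no quoting needed)
SELECT quote_ident('My Table');  -- "My Table"  (space and capitals force quoting)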
A conveniently encapsulated version of Gabriel's answer. This time I am using the trigger to update a column named update_dt (a datetime), assumed to be part of every table in the public schema of the current database.
--
-- function: tg_any_update_datetime_fn
-- when: before insert or update
--
create or replace function tg_any_update_datetime_fn ()
returns trigger
language plpgsql as $$
begin
    new.update_dt = now();
    return new;
end;
$$;
--
-- function: ddl_create_before_update_trigger_on_all_tables
-- returns: Create a before update trigger on all tables.
--
create or replace procedure ddl_create_before_update_trigger_on_all_tables ()
language plpgsql as $$
declare
    _sql varchar;
begin
    for _sql in
        select concat (
            'create trigger tg_',
            quote_ident(table_name),
            '_before_update before update on ',
            quote_ident(table_name),
            ' for each row execute procedure tg_any_update_datetime_fn ();'
        )
        from
            information_schema.tables
        where
            table_schema not in ('pg_catalog', 'information_schema') and
            table_schema not like 'pg_toast%'
    loop
        execute _sql;
    end loop;
end;
$$;
-- create before update trigger on all tables
call ddl_create_before_update_trigger_on_all_tables();
In my DDL scripts I use a large number of such ddl_ functions that only have meaning at DDL time. To remove them from the database, use:
--
-- function: ddl_drop_ddl_functions
-- returns: Drop all DDL functions.
-- since: 1.1.20
--
create or replace procedure ddl_drop_ddl_functions ()
language plpgsql as $$
declare
    r record;
    _sql varchar;
begin
    for r in
        select oid, prokind, proname
        from pg_proc
        where pronamespace = 'public'::regnamespace
        and proname ilike 'ddl_%'
    loop
        case r.prokind
            when 'a' then _sql = 'aggregate';
            when 'p' then _sql = 'procedure';
            else _sql = 'function';
        end case;
        _sql = format('drop %s %s', _sql, r.oid::regprocedure);
        execute _sql;
    end loop;
end
$$;

T-SQL Dynamic SQL and Temp Tables

It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure.
However, I can reference a temp table created by one dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.
A simple 2 table scenario:
I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key identifying the parent Order. An Order can have 1 to n Items.
I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select what Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned by the query as well as its parent Order.
Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.
The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand.) The second reason I would like to return two tables instead of one is because I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I can reuse the client code to populate my custom Order and Client objects from the 2 datatables returned.
What I was hoping to do was this:
Construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:
TempSQL = "
    INSERT INTO #ItemsToQuery (OrderId, ItemId)
    SELECT
        Orders.OrderId, Items.ItemId
    FROM
        Orders, Items
    WHERE
        Orders.OrderId = Items.OrderId AND
        /* Some unpredictable Order filters go here */
        AND
        /* Some unpredictable Items filters go here */
    "
Then, I would call a stored procedure:
CREATE PROCEDURE GetItemsAndOrders(@tempSql AS NVARCHAR(MAX))
AS
BEGIN
    EXECUTE (@tempSql) -- to create the #ItemsToQuery table
    SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
    SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
END
The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.
Three workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL which could then be queried without issue. (I could also investigate passing the new table-type parameter instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.
2) I could perform all the queries from the client.
The first would be something like this:
SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
This still provides me with the ability to reuse my client sided object-population code because the Orders and Items continue to be returned in two different tables.
I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one.
If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
You need to create your table first; then it will be available in the dynamic SQL.
This works:
CREATE TABLE #temp3 (id INT)
EXEC ('insert #temp3 values(1)')
SELECT *
FROM #temp3
This will not work:
EXEC (
'create table #temp2 (id int)
insert #temp2 values(1)'
)
SELECT *
FROM #temp2
In other words:
Create temp table
Execute proc
Select from temp table
Here is complete example:
CREATE PROC prTest2 @var VARCHAR(100)
AS
EXEC (@var)
GO
CREATE TABLE #temp (id INT)
EXEC prTest2 'insert #temp values(1)'
SELECT *
FROM #temp
1st Method - Enclose multiple statements in the same Dynamic SQL Call:
DECLARE @DynamicQuery NVARCHAR(MAX)
SET @DynamicQuery = 'Select * into #temp from (select * from tablename) alias
select * from #temp
drop table #temp'
EXEC sp_executesql @DynamicQuery
2nd Method - Use Global Temp Table:
(Careful: you need to take extra care with global temp tables.)
IF OBJECT_ID('tempdb..##temp2') IS NULL
BEGIN
EXEC (
'create table ##temp2 (id int)
insert ##temp2 values(1)'
)
SELECT *
FROM ##temp2
END
Don't forget to delete the ##temp2 object manually once you're done with it:
IF (OBJECT_ID('tempdb..##temp2') IS NOT NULL)
BEGIN
DROP Table ##temp2
END
Note: don't use this second method if you don't know the full structure of the database.
I had the same issue that @Muflix mentioned. When you don't know the columns being returned, or they are being generated dynamically, what I've done is create a global table with a unique id and then delete it when I'm done with it. That looks something like what's shown below:
DECLARE @DynamicSQL NVARCHAR(MAX)
DECLARE @DynamicTable VARCHAR(255) = 'DynamicTempTable_' + CONVERT(VARCHAR(36), NEWID())
DECLARE @DynamicColumns NVARCHAR(MAX)
--Get "@DynamicColumns", example: SET @DynamicColumns = '[Column1], [Column2]'
SET @DynamicSQL = 'SELECT ' + @DynamicColumns + ' INTO [##' + @DynamicTable + ']' +
    ' FROM [dbo].[TableXYZ]'
EXEC sp_executesql @DynamicSQL
SET @DynamicSQL = 'IF OBJECT_ID(''tempdb..##' + @DynamicTable + ''' , ''U'') IS NOT NULL ' +
    ' BEGIN DROP TABLE [##' + @DynamicTable + '] END'
EXEC sp_executesql @DynamicSQL
Certainly not the best solution, but this seems to work for me.
I would strongly suggest you have a read through http://www.sommarskog.se/arrays-in-sql-2005.html
Personally I like the approach of passing a comma-delimited text list, then parsing it with a text-to-table function and joining to it. The temp table approach can work if you create it first in the connection, but it feels a bit messier.
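For example, a sketch of the list-passing approach using STRING_SPLIT (available from SQL Server 2016; on 2012 you'd use a user-defined splitter such as those in the linked article), reusing the Orders table from the question:
CREATE PROCEDURE GetOrdersByIdList @idList NVARCHAR(MAX)
AS
BEGIN
    -- Parse the comma-delimited list into rows and join to it.
    SELECT o.*
    FROM Orders AS o
    JOIN STRING_SPLIT(@idList, ',') AS s
        ON o.OrderId = CAST(s.value AS INT)
END
-- EXEC GetOrdersByIdList '1,2,3'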
Result sets from dynamic SQL are returned to the client. I have done this quite a lot.
You're right about issues with sharing data through temp tables and variables and things like that between the SQL and the dynamic SQL it generates.
I think in trying to get your temp table working, you have probably got some things confused, because you can definitely get data from a SP which executes dynamic SQL:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
    DECLARE @sql AS VARCHAR(MAX) = 'SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + ''''
    EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
Also:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
    DECLARE @sql AS VARCHAR(MAX) = 'SELECT * INTO #temp FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + '''; SELECT * FROM #temp;'
    EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
