Assume I have a table on my local server called Local_Table, and on another server another database and table, called Remote_Table (the table structures are the same).
Local_Table has data, Remote_Table doesn't. I want to transfer data from Local_Table to Remote_Table with this query:
Insert into RemoteServer.RemoteDb..Remote_Table
select * from Local_Table (nolock)
But the performance is quite slow.
However, when I use the SQL Server Import/Export wizard, the transfer is really fast.
What am I doing wrong? Why is it fast with Import-Export wizard and slow with insert-select statement? Any ideas?
The fastest way is to pull the data rather than push it. When the tables are pushed, every row requires a connection, an insert, and a disconnect.
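For reference, pulling would look something like the sketch below, run on the remote server (this assumes a linked server there, here called LocalServer, pointing back at the local machine):
--run on RemoteServer; LocalServer is an assumed linked server name
INSERT INTO RemoteDb..Remote_Table
SELECT * FROM LocalServer.LocalDb..Local_Table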
If you can't pull the data, because you have a one-way trust relationship between the servers, the workaround is to construct the entire table as a giant T-SQL statement and run it all at once.
DECLARE @xml XML
SET @xml = (
    SELECT 'insert Remote_Table values (' + '''' + isnull(first_col, 'NULL') + ''',' +
    -- repeat for each col
    '''' + isnull(last_col, 'NULL') + '''' + ');'
    FROM Local_Table
    FOR XML path('')
) --This concatenates all the rows into a single XML object; the empty path keeps it from wrapping <colname></colname> around each value
DECLARE @sql AS VARCHAR(max)
SET @sql = 'set nocount on;' + cast(@xml AS VARCHAR(max)) + 'set nocount off;' --Converts the XML back to one long string
EXEC ('use RemoteDb;' + @sql) AT RemoteServer
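Note that EXEC ... AT only works if the linked server is configured to allow remote procedure calls; if it isn't, a one-time setup step like this is needed:
EXEC sp_serveroption @server = 'RemoteServer', @optname = 'rpc out', @optvalue = 'true'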
It seems like it's much faster to pull data from a linked server than to push data to a linked server: Which one is more efficient: select from linked server or insert into linked server?
Update: My own, recent experience confirms this. Pull if possible -- it will be much, much faster.
Try this on the other server:
INSERT INTO Local_Table
SELECT * FROM RemoteServer.RemoteDb..Remote_Table
The Import/Export wizard does this essentially as a bulk insert, whereas your code does not.
Assuming you have a clustered index on the remote table, make sure you have the same clustered index on the local table, set trace flag 610 globally on your remote server, and make sure the remote database is in SIMPLE or BULK_LOGGED recovery mode.
If your remote table is a heap (which will speed things up anyway), make sure your remote database is in SIMPLE or BULK_LOGGED recovery mode, and change your code to read as follows:
INSERT INTO RemoteServer.RemoteDb..Remote_Table WITH(TABLOCK)
SELECT * FROM Local_Table WITH (nolock)
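For reference, a sketch of enabling the trace flag mentioned above globally (on SQL Server 2016 and later this minimally logged behavior is the default, so the flag is unnecessary there):
DBCC TRACEON (610, -1)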
The reason it's so slow to insert into the remote table from the local table is that it works row by row: insert a row, check that it was inserted, insert the next row, check that it was inserted, and so on.
Don't know if you figured this out or not, but here's how I solved this problem using linked servers.
First, I have a LocalDB.dbo.Table with several columns:
IDColumn (int, PK, Auto Increment)
TextColumn (varchar(30))
IntColumn (int)
And I have a RemoteDB.dbo.Table that is almost the same:
IDColumn (int)
TextColumn (varchar(30))
IntColumn (int)
The main difference is that the remote IDColumn isn't set up as an identity column, so that I can insert into it.
Then I set up a trigger on the remote table that fires on delete:
Create Trigger Table_Del
On [Table]
After Delete
AS
Begin
    Set NOCOUNT ON;
    Insert Into [Table] (IDColumn, TextColumn, IntColumn)
    Select IDColumn, TextColumn, IntColumn from MainServer.LocalDB.dbo.[Table] L
    Where not exists (Select * from [Table] R Where L.IDColumn = R.IDColumn)
END
Then when I want to do an insert, I do it like this from the local server:
Insert Into LocalDB.dbo.[Table] (TextColumn, IntColumn) Values ('textvalue', 123);
Delete From RemoteServer.RemoteDB.dbo.[Table] Where IDColumn = 0;
--And if I want to clean the table out and make sure it has all the most up-to-date data:
Delete From RemoteServer.RemoteDB.dbo.[Table]
By triggering the remote server to pull the data from the local server and then do the insert, I turned a job that took 30 minutes to insert 1258 rows into one that takes 8 seconds to do the same insert.
This does require a linked server connection on both sides, but after that's set up it works pretty well.
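For reference, a minimal sketch of that setup (server names are placeholders; the login mapping depends on your environment):
--run on the remote server so it can reach back to the local one
EXEC sp_addlinkedserver @server = N'MAINSERVER', @srvproduct = N'SQL Server'
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'MAINSERVER', @useself = 'true'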
Update:
So in the last few years I've made some changes, and have moved away from the delete trigger as a way to sync the remote table.
Instead I have a stored procedure on the remote server that has all the steps to pull the data from the local server:
CREATE PROCEDURE [dbo].[UpdateTable]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
--Fill Temp table
Insert Into WebFileNamesTemp Select * From MAINSERVER.LocalDB.dbo.WebFileNames
--Fill normal table from temp table
Delete From WebFileNames
Insert Into WebFileNames Select * From WebFileNamesTemp
--empty temp table
Delete From WebFileNamesTemp
END
And on the local server I have a scheduled job that does some processing on the local tables, and then triggers the update through the stored procedure:
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='true'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='true'
EXEC REMOTESERVER.RemoteDB.dbo.UpdateTable
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='false'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='false'
If you must push data from the source to the target (e.g., for firewall or other permissions reasons), you can do the following:
In the source database, convert the recordset to a single XML string (i.e., multiple rows and columns combined into a single XML string).
Then push that XML over as a single row (as a varchar(max), since XML isn't allowed over linked databases in SQL Server).
DECLARE @xml XML
SET @xml = (select * from SourceTable FOR XML path('row'))
Insert into TempTargetTable values (cast(@xml AS VARCHAR(max)))
In the target database, cast the varchar(max) as XML and then use XML parsing to turn that single row and column back into a normal recordset.
DECLARE @X XML = (select '<toplevel>' + ImportString + '</toplevel>' from TempTargetTable)
DECLARE @iX INT
EXEC sp_xml_preparedocument @iX output, @X
insert into TargetTable
SELECT [col1],
       [col2]
FROM OPENXML(@iX, '//row', 2)
WITH ([col1] [int],
      [col2] [varchar](128)
     )
EXEC sp_xml_removedocument @iX
I've found a workaround. Since I'm not a big fan of GUI tools like SSIS, I reused a bcp script to dump the table to CSV and load it back. Yes, it's an odd situation that bulk operations are supported for files but not for tables. Feel free to edit the following script to fit your needs:
exec xp_cmdshell 'bcp "select * from YourLocalTable" queryout C:\CSVFolder\Load.csv -w -T -S .'
exec xp_cmdshell 'bcp YourAzureDBName.dbo.YourAzureTable in C:\CSVFolder\Load.csv -S yourdb.database.windows.net -U youruser@yourdb.database.windows.net -P yourpass -q -w'
Pros:
No need to define table structures every time.
I've tested it, and it worked way faster than inserting directly through the linked server.
It's easier to manage than XML (which is limited to varchar(max) length anyway).
No need for an extra layer of abstraction (tools like SSIS).
Cons:
Using the external tool bcp through the xp_cmdshell interface.
Table properties are lost when exporting/importing CSV (e.g., data types, nullability, lengths, separators within values, etc.).
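Also note that xp_cmdshell is disabled by default; enabling it requires sysadmin rights and has security implications, so check with your DBA first:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE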
I am trying to execute a procedure with a parameter; depending on the value of the parameter, three different IF conditions are evaluated to decide which query to run against a linked server.
But when I execute the procedure, it seems to check that the tables inside all the IF branches exist before starting the query. I know that only one of the tables exists; that is why I am using the parameter, so it shouldn't fail. But I still get the following error:
Msg 7314, Level 16, State 1, Line 25
The OLE DB provider "Microsoft.ACE.OLEDB.16.0" for linked server "LinkedServer" does not contain the table "D100". The table either does not exist or the current user does not have permissions on that table.
So in this code, assume that the parameter is 300; then I get the message above.
Do you know if there is a way to keep the query from checking all the tables, and only check the one whose IF condition is met?
ALTER PROCEDURE [dbo].[Import_data]
    @p1 int = 0
AS
BEGIN
    SET NOCOUNT ON;
    IF (@p1 = 100)
    BEGIN
        DROP TABLE IF EXISTS Table1
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table1
        FROM [LinkedServer]...[D100]
    END
    IF (@p1 = 200)
    BEGIN
        DROP TABLE IF EXISTS Table2
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table2
        FROM [LinkedServer]...[D200]
    END
    IF (@p1 = 300)
    BEGIN
        DROP TABLE IF EXISTS Table3
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table3
        FROM [LinkedServer]...[D300]
    END
END
I have tried googling it, but I mostly found workarounds such as running a sub-procedure, which is not really a clean solution, I think.
Okay, it seems that I found the answer. Even with an IF statement, SQL Server validates the entire query before executing it, so the way to overcome this is to use a dynamic SQL query.
"SQL Server Dynamic SQL is a programming technique that allows you to construct SQL statements dynamically at runtime. It allows you to create more general purpose and flexible SQL statement because the full text of the SQL statements may be unknown at compilation."
This is how the query looks now. so instead of multiple IF statements, the query changes dynamically depending on the parameter.
DECLARE @SQL NVARCHAR(MAX)
SET @SQL = N'DROP TABLE IF EXISTS Table1;
SELECT [Field1]
      ,[Field2]
      ,[Field3]
      ,[Field4]
      ,[Field5]
      ,[Field6]
INTO Table1
FROM [LinkedServer]...[D' + CONVERT(nvarchar(3), @p1) + N']'
EXEC sp_executesql @SQL
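Since @p1 is an int, the concatenation can't inject arbitrary SQL, but validating it up front gives a cleaner failure than an error from the linked server; a possible guard, assuming only these three values are valid:
IF @p1 NOT IN (100, 200, 300)
BEGIN
    RAISERROR('Unsupported parameter value %d', 16, 1, @p1)
    RETURN
END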
Observe the following simple SQL code:
CREATE TABLE #tmp (...) -- Here comes the schema
INSERT INTO #tmp
EXEC(@Sql) -- The @Sql is a dynamic query generating a result with a known schema
All is good, because we know the schema of the result produced by @Sql.
But what if the schema is unknown? In that case I use PowerShell to generate a SQL query like this:
SET @Sql = '
SELECT *
INTO ##MySpecialAndUniquelyNamedGlobalTempTable
FROM ($Query) x
'
EXEC(@Sql)
(I omit some details, but the "spirit" of the code is preserved)
And it works fine, except that there is a severe limitation on what $Query can be: it must be a single SELECT statement.
This is not very good for me; I would like to be able to run any SQL script that way. The problem is that I can no longer concatenate it to FROM (; it must be executed by EXEC or sp_executesql. But then I have no idea how to collect the results into a table, because I don't know the schema of that table.
Is this possible in SQL Server 2012?
Motivation: We have many QA databases across different SQL Servers, and more often than not I find myself running queries on all of them to locate the database most likely to yield the best results for my tests. Alas, I am only able to run single SELECT statements, which is inconvenient.
We use an SP and OPENROWSET for this purpose.
First create an SP based on the query you need, then use OPENROWSET to get the data into a temp table:
USE Test
DECLARE @sql nvarchar(max),
        @query nvarchar(max)
SET @sql = N'Some query'
IF OBJECT_ID(N'SomeSPname') IS NOT NULL DROP PROCEDURE SomeSPname
SET @query = N'
CREATE PROCEDURE SomeSPname
AS
BEGIN
' + @sql + '
END'
EXEC sp_executesql @query
USE tempdb
IF OBJECT_ID(N'#temp') IS NOT NULL DROP TABLE #temp
SELECT *
INTO #temp
FROM OPENROWSET(
    'SQLNCLI',
    'Server=SERVER\INSTANCE;Database=Test;Trusted_Connection=yes;',
    'EXEC dbo.SomeSPname')
SELECT *
FROM #temp
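Note that OPENROWSET with a connection string only works if ad hoc distributed queries are enabled on the instance:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE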
I have one instance of SSMS open and I am connected to one remote server as well as localhost. How can I get the names of all the servers that SSMS is currently connected to? (The Object Explorer icons for the remote and the local server look different; the screenshots are omitted here.)
Also, I would like to know if there are any problems with connecting to multiple servers from one instance of SSMS, and how to switch between servers in a script without clicking on a table name and doing something like "select top 1000 rows".
Okay, there are multiple issues at work here, and this is not always a simple answer. Depending on your environment and rights, you may belong to one or many permission groups that have access to one or many environments, which contain one or many servers, which in turn have one or many databases. However, if you do have permission, and you have linked servers set up with data access, you can do something like the following to get a listing of what you have access to. You could run this similarly on different environments, turning it into a procedure that you call with ADO.NET or the like.
--declare variables for dynamic SQL
DECLARE
    @SQL NVARCHAR(512)
    , @x int
-- Create a table variable to catch linked servers
DECLARE @Servers TABLE
(
    Id int identity
    , ServerName VARCHAR(128)
)
-- insert linked servers
insert into @Servers
select name
FROM sys.servers
-- remove the temp table if it exists, as it should not be prepopulated.
IF object_ID('tempdb..#Databases') IS NOT NULL
    DROP TABLE #Databases
;
-- Create a temp table to catch the database names
CREATE TABLE #Databases --DECLARE @Databases TABLE
(
    ServerName varchar(64)
    , DatabaseName VARCHAR(128)
)
SET @x = 1
-- Count the linked servers and loop over them.
WHILE @x <= (SELECT count(*) FROM @Servers)
BEGIN
    declare @DB varchar(128);
    Select @DB = ServerName from @Servers where Id = @x -- get the server name for the current iteration
    -- Set up dynamic SQL, excluding master and the other system databases as no one cares about them.
    SET @SQL = 'insert into #Databases select ''' + @DB + ''', name from ' + @DB + '.master.sys.databases
    where name not in (''master'',''tempdb'',''model'',''msdb'')'
    -- Execute the dynamic SQL to insert into the collection table
    exec sp_executesql @SQL
    -- increment for the next iteration
    SET @x = @x + 1
END
;
SELECT *
FROM #Databases
I'm not entirely sure what you are asking. If you are asking if you can connect to multiple instances of SQL Server in a single query window the answer is yes. I went into detail on how and some of the implications here: Multiple instances, single query window
If on the other hand you are asking how to tell what instance you are connected to, you can use @@SERVERNAME.
SELECT @@SERVERNAME
It will return the name of the instance you are connected to.
Typically you would connect to one instance per query window and flip between the windows to affect the specific instance you are interested in.
If you want to write a command to send you to a specific instance you can set your query window to SQLCMD mode (Query menu -> SQLCMD Mode) and use the :CONNECT command.
:CONNECT InstanceName
SELECT @@SERVERNAME
I am writing a stored procedure for SQL Server 2008 in which I need to extract information from a set of tables. I do not know ahead of time the structure of those tables. There is another table in the same database that tells me the names and types of the fields in this table.
I am doing this:
declare @sql nvarchar(max)
set @sql = 'select ... into #new_temporary_table ...'
exec sp_executesql @sql
Then I iterate doing:
set @sql = 'insert into #another_temporary_table ... select ... from #new_temporary_table'
exec sp_executesql @sql
After that I drop the temporary table. This happens in a loop, so the table will be created, populated, and dropped many times, each time with different columns.
This fails with the error:
Invalid object name: #new_temporary_table.
After some googling I have found that:
The table #new_temporary_table is created in the scope of the call to exec sp_executesql, which is different from the scope of my stored proc. That is why the next exec sp_executesql cannot find the table. This post explains it:
http://social.msdn.microsoft.com/forums/en-US/transactsql/thread/1dd6a408-4ac5-4193-9284-4fee8880d18a
I could use global temporary tables, which are prefixed with ##. I can't do this because multiple stored procs could run at the same time and they would be affecting each other's state.
In this article it says that if I find myself in this situation I should change the structure of the database. This is not an option for me:
http://www.sommarskog.se/dynamic_sql.html
One workaround I have found was combining all the select into #new_temporary_table ... and insert into ... scripts into one gigantic statement. This works fine, but it has some downsides.
If I do print @sql to troubleshoot, the text gets truncated, for example.
Do I have any other option? All ideas are welcome.
You could use global temp tables, but use a context id (such as newid()) as part of the global temp table name.
declare @sql varchar(2000)
declare @contextid varchar(50) = convert(varchar(20), convert(bigint, substring(convert(binary(16), newid()), 1, 4)))
set @sql = 'select getdate() as stuff into ##new_temporary_table_' + @contextid
exec (@sql)
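A sketch of the rest of the pattern: every later statement builds the same name from @contextid, and the table is dropped explicitly at the end, since global temp tables outlive the batch that created them:
set @sql = 'select * from ##new_temporary_table_' + @contextid
exec (@sql)
set @sql = 'drop table ##new_temporary_table_' + @contextid
exec (@sql)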
I think it's best to use one single script.
You can change how many characters are printed in Tools > Options > Query Results > SQL Server > Results to Text: change "Maximum number of characters..." from 256 to the maximum (8192).
If it's bigger than 8192, then yes, printing is difficult. But you could try a different option in this case. Instead of PRINT @sql, use the following (with Results to Grid):
SELECT sql FROM (SELECT @sql) AS x(sql) FOR XML PATH;
Now you can click on the result, and it opens in a new query window. Well, it's an XML file window, so you can't execute it or see color-coding, and you have to ignore that it changes e.g. > to &gt; to keep the XML valid, but from here it's easy to eyeball. You can copy and paste it into a real query editor window and do a search-and-replace for the entitized characters if you like. FWIW I asked for such XML windows to be made real query windows, but this was denied:
http://connect.microsoft.com/SQLServer/feedback/details/425990/ssms-allow-same-semantics-for-xml-docs-as-query-windows
#temp tables (not global) are available in the scope they were created and below.
So you could do something like...
while (your_condition = 1) begin
    set @sql = 'select ... into #temp1 ... from blah
    exec sp_do_the_inserts'
    exec(@sql)
end
The sp_do_the_inserts might look like...
select * into #temp2 from #temp1
....your special logic here....
This assumes you create sp_do_the_inserts beforehand, of course.
Don't know if that serves your need.
It looks like #temp tables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure.
However, I can reference a temp table created by one dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to the calling client unless the SQL is fixed.
A simple 2 table scenario:
I have 2 tables; let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key identifying the parent Order. An Order can have 1 to n Items.
I want to provide a very flexible "query builder" type interface that lets the user select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned by the query, as well as its parent Order.
Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.
The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining Orders to Items, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use them to populate custom Order and child Item client objects. (I don't know enough about LINQ or Entity Framework yet; I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I can reuse the client code that populates my custom Order and Item objects from the 2 datatables returned.
What I was hoping to do was this:
Construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters each table as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:
TempSQL = "
INSERT INTO #ItemsToQuery
OrderId, ItemsId
FROM
Orders, Items
WHERE
Orders.OrderID = Items.OrderId AND
/* Some unpredictable Order filters go here */
AND
/* Some unpredictable Items filters go here */
"
Then, I would call a stored procedure,
CREATE PROCEDURE GetItemsAndOrders(@tempSql as text)
AS
Execute (@tempSql) --to create the #ItemsToQuery table
SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the two static SQL statements that follow, and if I change those static statements to dynamic, no results are passed back to the fat client.
Three workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could insert the data into a temporary table using static SQL, and the table could then be queried without issue. (I could also look into passing the new table type parameter instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure.
2) I could perform all the queries from the client.
The first would be something like this:
SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
This still provides me with the ability to reuse my client sided object-population code because the Orders and Items continue to be returned in two different tables.
I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon-feeding on that one.
If you even scanned this far into what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how best to accomplish this.
You need to create your table first; then it will be available in the dynamic SQL.
This works:
CREATE TABLE #temp3 (id INT)
EXEC ('insert #temp3 values(1)')
SELECT *
FROM #temp3
This will not work:
EXEC (
'create table #temp2 (id int)
insert #temp2 values(1)'
)
SELECT *
FROM #temp2
In other words:
Create temp table
Execute proc
Select from temp table
Here is a complete example:
CREATE PROC prTest2 @var VARCHAR(100)
AS
EXEC (@var)
GO
CREATE TABLE #temp (id INT)
EXEC prTest2 'insert #temp values(1)'
SELECT *
FROM #temp
1st Method - Enclose multiple statements in the same Dynamic SQL Call:
DECLARE @DynamicQuery NVARCHAR(MAX)
SET @DynamicQuery = 'Select * into #temp from (select * from tablename) alias
select * from #temp
drop table #temp'
EXEC sp_executesql @DynamicQuery
2nd Method - Use Global Temp Table:
(Careful: you need to take extra care with global temp tables.)
IF OBJECT_ID('tempdb..##temp2') IS NULL
BEGIN
EXEC (
'create table ##temp2 (id int)
insert ##temp2 values(1)'
)
SELECT *
FROM ##temp2
END
Don't forget to drop the ##temp2 object manually once you're done with it:
IF (OBJECT_ID('tempdb..##temp2') IS NOT NULL)
BEGIN
DROP Table ##temp2
END
Note: Don't use method 2 if you don't know the full structure of the database.
I had the same issue that @Muflix mentioned. When you don't know the columns being returned, or they are generated dynamically, what I've done is create a global table with a unique id and then delete it when I'm done with it. That looks something like what's shown below:
DECLARE @DynamicSQL NVARCHAR(MAX)
DECLARE @DynamicTable VARCHAR(255) = 'DynamicTempTable_' + CONVERT(VARCHAR(36), NEWID())
DECLARE @DynamicColumns NVARCHAR(MAX)
--Get "@DynamicColumns", example: SET @DynamicColumns = '[Column1], [Column2]'
SET @DynamicSQL = 'SELECT ' + @DynamicColumns + ' INTO [##' + @DynamicTable + ']' +
                  ' FROM [dbo].[TableXYZ]'
EXEC sp_executesql @DynamicSQL
SET @DynamicSQL = 'IF OBJECT_ID(''tempdb..##' + @DynamicTable + ''' , ''U'') IS NOT NULL ' +
                  ' BEGIN DROP TABLE [##' + @DynamicTable + '] END'
EXEC sp_executesql @DynamicSQL
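Between those two batches the table is read the same way, since the name is only known at runtime (a sketch):
SET @DynamicSQL = 'SELECT * FROM [##' + @DynamicTable + ']'
EXEC sp_executesql @DynamicSQL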
Certainly not the best solution, but this seems to work for me.
I would strongly suggest you have a read through http://www.sommarskog.se/arrays-in-sql-2005.html
Personally I like the approach of passing a comma-delimited text list, then parsing it with a text-to-table function and joining to it. The temp table approach can work if you create it first in the connection, but it feels a bit messier.
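As a minimal sketch of the comma-delimited approach: on SQL Server 2016+ the built-in STRING_SPLIT does the text-to-table part, while on the older versions discussed here you would substitute a hand-rolled split function (Orders/OrderId are taken from the question; the input list is hypothetical):
DECLARE @ids varchar(200) = '3,7,19' --hypothetical input list
SELECT o.*
FROM Orders AS o
JOIN STRING_SPLIT(@ids, ',') AS s
    ON o.OrderId = CAST(s.value AS int)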
Result sets from dynamic SQL are returned to the client. I have done this quite a lot.
You're right about issues with sharing data through temp tables and variables and things like that between the SQL and the dynamic SQL it generates.
I think in trying to get your temp table working you have probably got some things confused, because you can definitely get data back from an SP which executes dynamic SQL:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
    DECLARE @sql AS VARCHAR(MAX) = 'SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + ''''
    EXEC (@sql)
END
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
Also:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
    DECLARE @sql AS VARCHAR(MAX) = 'SELECT * INTO #temp FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + '''; SELECT * FROM #temp;'
    EXEC (@sql)
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO