I have a scenario and I want to convert it to a query.
My scenario:
I have a lot of databases with the same structure, named "Clientxxxx". I want to loop over all of these databases to get data from one table, "EventLog", that exists in each of them. The event logs in this table are recorded for clients that exist in another database called "Portal".
I want to get every client in the "Portal" db together with his event logs from the "EventLog" table in the corresponding "Clientxxxx" db.
db:Client1          db:Client2          db:Client3
table:"EventLog"    table:"EventLog"    table:"EventLog"

Each client has his own db, and his client data is in the Portal db:

db:Portal
table:Clients

query:

Client1 data
  his event logs
Client2 data
  his event logs
and so on
...
I need some help, please.
Thanks
I would do the following:
Create a view in your Portal db that has this in it:
CREATE VIEW vw_AggregateClients AS
SELECT 'Client1' as clientName, * from Client1.dbo.EventLog
UNION ALL -- UNION ALL keeps every log row; plain UNION would silently drop duplicates
SELECT 'Client2', * from Client2.dbo.EventLog
UNION ALL
SELECT 'Client3', * from Client3.dbo.EventLog
And then query it like this:
SELECT * from vw_AggregateClients as ac
INNER JOIN Clients as c
ON ac.clientName = c.ClientName
If the number of client dbs will be large, or you don't know how many there will be, then you will probably have to use dynamic SQL. If you go that route, give the article I linked to a good read.
Typically, I use the dynamic SQL approach, with a cursor to loop through all the databases, insert into a consolidated table variable, and then select out of the variable:
declare @return table (dbname varchar(100),
                       <<dataset spec for EventLog>>)
declare @db varchar(100)
declare @sql varchar(max)

declare recscan cursor for
select name from sys.databases
where database_id not in (1,2,3,4) -- excludes the system databases

open recscan
fetch next from recscan into @db
while @@fetch_status = 0
begin
    set @sql = 'select '''+@db+''',* from '+@db+'..eventlog'
    insert into @return
    exec(@sql)
    fetch next from recscan into @db
end
close recscan
deallocate recscan

select * from @return
Note that I create an extra field and put the database name as an explicit string value in the dynamic query so that I can tell where each row came from. I also use three-part naming for the table object, but you could instead insert a dynamically constructed USE statement into your SQL variable, as sketched below.
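The USE variant would only change the loop body; a minimal sketch, assuming the same @db, @sql, and @return from the snippet above:

set @sql = 'use ' + quotename(@db) + '; select ''' + @db + ''', * from dbo.eventlog'
insert into @return
exec(@sql) -- the USE only applies inside the EXEC scope; QUOTENAME guards against odd database names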
Is it possible in SSMS to run the same query on several databases?
I was thinking of something like an array with database names and cycling through it, maybe with SQLCMD mode.
some pseudocode:
:setvar arr ["db1", "db2", "db3"]
foreach $db in $arr
:setvar database $db
use $(database)
go
select * from table
Thanks
-- To achieve your desired output you have to use a dynamic query.
-- You can achieve this in T-SQL.
-- To find the database IDs, run the query below:
/*
SELECT * FROM sys.databases WHERE database_id > 4
*/
USE MASTER
GO
BEGIN TRAN
DECLARE @strt INT, @End INT, @Database NVARCHAR(255), @string NVARCHAR(MAX)

SELECT * INTO #T FROM sys.databases WHERE database_id IN (4,5,6) -- here you have to supply the database IDs
ORDER BY 1

SELECT ROW_NUMBER() OVER (ORDER BY database_id) Db_Id, * INTO #TT FROM #T

SET @strt = 1
SELECT @End = MAX(Db_Id) FROM #TT

WHILE @strt <= @End
BEGIN
    SELECT @Database = name FROM #TT WHERE Db_Id = @strt
    SET @string = 'Select * from ' + @Database + '..Table_Name'
    SET @strt = @strt + 1
    PRINT @string
    EXEC(@string)
END
ROLLBACK TRAN
Here's an example using two of my databases (Staging and Warehouse) to hit the sys.columns table in each. Just change out the IN filter with the names of whichever databases you want, and the ".sys.columns" with the schema/table name you need.
DECLARE @query NVARCHAR(MAX) = ''
;
SELECT @query += CONCAT('SELECT * FROM ',[name],'.sys.columns;')
FROM sys.databases
WHERE [name] IN ('Staging','Warehouse')
;
EXEC sp_executesql @query;
You can do this with a Local Server Group as well.
View --> Registered Servers
Right-click Local Server Group and create new
Add new server registrations to the group (any databases you want to query). Be sure to specify the database in the Connection Properties tab
Now you can right-click the Local Server Group folder and execute a new query, which will run on all the databases in that folder.
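Another option worth knowing about is the undocumented sp_MSforeachdb procedure, which runs a command once per database, substituting the database name for the ? placeholder. A sketch only, tied to the Client% databases from the question; since the procedure is undocumented, its behavior may change between SQL Server versions:

EXEC sp_MSforeachdb N'IF ''?'' LIKE ''Client%'' SELECT ''?'' AS dbname, * FROM [?]..EventLog'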
I am very new to SSIS and I need to write a package that will move data from transactional databases to a master database. We have a transactional database per plant and the schema for all of these is identical. I need to go through each table in each database and copy all the data that hasn't been marked as exported to its corresponding table in the master database. After the records are successfully copied to the master database they should be marked as exported in the transactional database.
So far I've gotten my SSIS package to where I can iterate through the plant databases and read from one of the tables. I'm currently storing the results from that table in a variable. I accomplished the iteration part by using an expression in the For Each Loop Container's Connection Manager that sets the Initial Catalog to the current database name in the loop.
However, I'm not sure how to proceed after that. Here's a picture of my package's current state:
I've tried creating another Execute SQL Task that takes the results from Get New Apples and copies them to the master database. However, from what I've googled so far there doesn't seem to be an easy way to accomplish this.
A different approach I've tried is to create an OLE DB Source using the same connection manager as the For Each Loop Container. When I do that, I get an error saying that the Apple table is not a valid object (my query being select * from Apple where exported = 0;).
Any suggestions as to how I can read a result set from a variable or get the OLE DB Source to work with the aforementioned Connection Manager would be very helpful.
I'm also open to alternate methods to accomplishing this. Like I said, I'm new to SSIS and am still feeling my way around it.
Originally I tried to write this as a stored procedure, but it started to grow unmanageable and ugly very quickly:
SELECT *
INTO #tempapple
FROM (SELECT *
FROM [Plant1].[dbo].[Apple]
WHERE exported = 0
UNION
SELECT *
FROM [Plant2].[dbo].[Apple]
WHERE exported = 0) AS x;
INSERT INTO [Master].[dbo].[Apple]
SELECT id,
NAME,
description,
active,
plant
FROM #tempapple
WHERE id NOT IN (SELECT id
FROM [Master].[dbo].[Apple]);
UPDATE [Plant1].[dbo].[Apple]
SET exported = 1
WHERE id IN (SELECT id
FROM #tempapple);
UPDATE [Plant2].[dbo].[Apple]
SET exported = 1
WHERE id IN (SELECT id
FROM #tempapple);
DROP TABLE #tempapple;
I've got to make a few assumptions here:
1. The variable is type 'Object'.
2. The foreach loop is on an ADO.Object enumerator setting the db name to a variable.
3. Insert an expression before the dataflow.
4. In the expression, set a new string variable to "Select * from " + [dbname] + ".[schema].[tablename] where exported = 0" (a sketch in SSIS expression syntax follows this list).
4a. Note that dbname comes from the enumerator set up in #2.
5. In your dataflow, set your source to "SQL command from variable" and use the variable from #4.
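A sketch of the step-4 expression in SSIS expression syntax; it assumes a string loop variable named User::dbname, the Apple table comes from the question, and the schema name is illustrative:

"SELECT * FROM [" + @[User::dbname] + "].[dbo].[Apple] WHERE exported = 0"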
This should get your data at least loaded.
You have options for updating the exported column in the source.
I'm writing this directly so you may need to modify it slightly.
declare @dbname varchar(100)
declare @SQL varchar(max)

declare db_cursor cursor for
[ this is where you insert your code for getting DB names ]

OPEN db_cursor
fetch next from db_cursor into @dbname
while @@fetch_status = 0
BEGIN
    set @SQL = 'Select * into #temptable from ' + @dbname + '.[dbo].[Apple] where exported = 0

    INSERT INTO [Master].[dbo].[Apple]
    SELECT id,
           NAME,
           description,
           active,
           plant
    FROM #temptable
    -- no where clause needed

    UPDATE a
    SET exported = 1
    FROM ' + @dbname + '.[dbo].[Apple] a
    join #temptable tt on a.id = tt.id

    DROP TABLE #temptable;'

    exec(@SQL);
    fetch next from db_cursor into @dbname
END
close db_cursor
deallocate db_cursor
I've decided to settle for a mix of my two approaches. The SSIS package remains mostly the same with the logic to iterate through each plant database. Within the loop I now have several Execute SQL Tasks to import data from the various tables. The logic for the import apples task looks something like this:
SELECT *
INTO #tempapple
FROM (SELECT *
      FROM apple
      WHERE exported = 0) AS x;
INSERT INTO [Master].[dbo].[apple]
SELECT id,
NAME,
description,
active,
plant
FROM #tempapple
WHERE id NOT IN (SELECT id
FROM [Master].[dbo].[apple]);
UPDATE apple
SET exported = 1
WHERE id IN (SELECT id
FROM #tempapple);
DROP TABLE #tempapple;
This allows me to avoid redundant SQL, since each task will be executed once per plant database.
Assume that I have a table on my local server, Local_Table, and I have another server with another db and table, Remote_Table (the table structures are the same).
Local_Table has data, Remote_Table doesn't. I want to transfer data from Local_Table to Remote_Table with this query:
Insert into RemoteServer.RemoteDb..Remote_Table
select * from Local_Table (nolock)
But the performance is quite slow.
However, when I use SQL Server import-export wizard, transfer is really fast.
What am I doing wrong? Why is it fast with Import-Export wizard and slow with insert-select statement? Any ideas?
The fastest way is to pull the data rather than push it. When the tables are pushed, every row requires a connection, an insert, and a disconnect.
If you can't pull the data, because you have a one way trust relationship between the servers, the work around is to construct the entire table as a giant T-SQL statement and run it all at once.
DECLARE @xml XML
SET @xml = (
    SELECT 'insert Remote_Table values (' + '''' + isnull(first_col, 'NULL') + ''',' +
           -- repeat for each col
           '''' + isnull(last_col, 'NULL') + '''' + ');'
    FROM Local_Table
    FOR XML path('')
) -- This concatenates all the rows into a single XML object; the empty path keeps each value from being wrapped in <colname> </colname> tags

DECLARE @sql AS VARCHAR(max)
SET @sql = 'set nocount on;' + cast(@xml AS VARCHAR(max)) + 'set nocount off;' -- converts the XML back to one long string

EXEC ('use RemoteDb;' + @sql) AT RemoteServer
It seems like it's much faster to pull data from a linked server than to push data to a linked server: Which one is more efficient: select from linked server or insert into linked server?
Update: My own, recent experience confirms this. Pull if possible -- it will be much, much faster.
Try this on the other server:
INSERT INTO Local_Table
SELECT * FROM RemoteServer.RemoteDb..Remote_Table
The Import/Export wizard will essentially be doing this as a bulk insert, whereas your code is not.
Assuming that you have a Clustered Index on the remote table, make sure that you have the same Clustered Index on the local table, set Trace Flag 610 globally on your remote server, and make sure the remote database is in Simple or Bulk Logged recovery mode.
If your remote table is a Heap (which will speed things up anyway), make sure your remote database is in Simple or Bulk Logged mode, and change your code to read as follows:
INSERT INTO RemoteServer.RemoteDb..Remote_Table WITH(TABLOCK)
SELECT * FROM Local_Table WITH (nolock)
The reason why it's so slow to insert into the remote table from the local table is because it inserts a row, checks that it inserted, and then inserts the next row, checks that it inserted, etc.
Don't know if you figured this out or not, but here's how I solved this problem using linked servers.
First, I have a LocalDB.dbo.Table with several columns:
IDColumn (int, PK, Auto Increment)
TextColumn (varchar(30))
IntColumn (int)
And I have a RemoteDB.dbo.Table that is almost the same:
IDColumn (int)
TextColumn (varchar(30))
IntColumn (int)
The main difference is that the remote IDColumn isn't set up as an identity column, so that I can do inserts into it.
Then I set up a trigger on the remote table that fires on Delete:
Create Trigger Table_Del
On [Table]
After Delete
AS
Begin
    Set NOCOUNT ON;
    Insert Into [Table] (IDColumn, TextColumn, IntColumn)
    Select IDColumn, TextColumn, IntColumn from MainServer.LocalDB.dbo.[Table] L
    Where not exists (Select * from [Table] R Where L.IDColumn = R.IDColumn)
END
Then when I want to do an insert, I do it like this from the local server:
Insert Into LocalDB.dbo.[Table] (TextColumn, IntColumn) Values ('textvalue', 123);
Delete From RemoteServer.RemoteDB.dbo.[Table] Where IDColumn = 0;
--And if I want to clean the table out and make sure it has all the most up to date data:
Delete From RemoteServer.RemoteDB.dbo.[Table]
By triggering the remote server to pull the data from the local server and then do the insert, I was able to turn a job that took 30 minutes to insert 1258 lines into a job that took 8 seconds to do the same insert.
This does require a linked server connection on both sides, but after that's set up it works pretty good.
Update:
So in the last few years I've made some changes, and have moved away from the delete trigger as a way to sync the remote table.
Instead I have a stored procedure on the remote server that has all the steps to pull the data from the local server:
CREATE PROCEDURE [dbo].[UpdateTable]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
--Fill Temp table
Insert Into WebFileNamesTemp Select * From MAINSERVER.LocalDB.dbo.WebFileNames
--Fill normal table from temp table
Delete From WebFileNames
Insert Into WebFileNames Select * From WebFileNamesTemp
--empty temp table
Delete From WebFileNamesTemp
END
And on the local server I have a scheduled job that does some processing on the local tables, and then triggers the update through the stored procedure:
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='true'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='true'
EXEC REMOTESERVER.RemoteDB.dbo.UpdateTable
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='false'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='false'
If you must push data from the source to the target (e.g., for firewall or other permissions reasons), you can do the following:
In the source database, convert the recordset to a single XML string (i.e., multiple rows and columns combined into a single XML string).
Then push that XML over as a single row (as a varchar(max), since XML isn't allowed over linked databases in SQL Server).
DECLARE @xml XML
SET @xml = (select * from SourceTable FOR XML path('row'))

Insert into TempTargetTable values (cast(@xml AS VARCHAR(max)))
In the target database, cast the varchar(max) as XML and then use XML parsing to turn that single row and column back into a normal recordset.
DECLARE @X XML = (select '<toplevel>' + ImportString + '</toplevel>' from TempTargetTable)
DECLARE @iX INT
EXEC sp_xml_preparedocument @iX OUTPUT, @X

insert into TargetTable
SELECT [col1],
       [col2]
FROM OPENXML(@iX, '//row', 2)
WITH ([col1] [int],
      [col2] [varchar](128)
     )

EXEC sp_xml_removedocument @iX
I've found a workaround. Since I'm not a big fan of GUI tools like SSIS, I've reused a bcp script to unload the table into a CSV and load it back in. Yeah, it's odd that the bulk operations support files but not tables directly. Feel free to edit the following script to fit your needs:
exec xp_cmdshell 'bcp "select * from YourLocalTable" queryout C:\CSVFolder\Load.csv -w -T -S .'
exec xp_cmdshell 'bcp YourAzureDBName.dbo.YourAzureTable in C:\CSVFolder\Load.csv -S yourdb.database.windows.net -U youruser@yourdb.database.windows.net -P yourpass -q -w'
Pros:
No need to define table structures every time.
I've tested it, and it worked way faster than inserting directly through the linked server.
It's easier to manage than XML (which is limited to varchar(max) length anyway).
No need for an extra layer of abstraction (tools like SSIS).
Cons:
Uses the external tool bcp through the xp_cmdshell interface.
Table properties will be lost after exporting/importing the CSV (i.e. datatype, nulls, length, separator within a value, etc).
I have many security accounts on the SQL database and I want to remove/add roles for them based on a simple string comparison.
Basically I want to:
1. List all accounts.
2. Filter out accounts that DON'T start with "MyDomain\".
3. Remove role A.
4. Add role B.
What I've found out by now is that I can use sp_helprolemember to list all the accounts, and sp_addrolemember and sp_droprolemember to change them. My problem is that I don't know how to "get" the output from sp_helprolemember and work with it.
My first attempt at a solution, based on feedback:
DROP TABLE [dbo].[XTemp]
create table XTemp(DbRole sysname,MemberName sysname,MemberSID varbinary(85) )
insert XTemp exec sp_helprolemember
select * from XTemp
I made a permanent table to make it simpler to test and debug.
SELECT [DbRole]
,[MemberName]
,[MemberSID]
FROM [ARTICLE].[dbo].[XTemp]
WHERE MemberName like 'Domain\%'
exec sp_addrolemember 'OldRole', MemberName
Assuming that you're using SQL 2005 or later, and executing sp_helprolemember without parameters, this is the query that sp_helprolemember runs (extracted using sp_helptext):
select DbRole = g.name, MemberName = u.name, MemberSID = u.sid
from sys.database_principals u, sys.database_principals g, sys.database_role_members m
where g.principal_id = m.role_principal_id
and u.principal_id = m.member_principal_id
order by 1, 2
This should enable you to collect the information you need into a temp table.
If you'd rather stick to documented behaviour, you can store the output of the SP into a temp table:
create table #t
(DbRole sysname,
MemberName sysname,
MemberSID varbinary(85)
)
insert #t
exec sp_helprolemember
select * from #t
EDIT
There are two ways to use this data to amend your system. One is using a cursor:
DECLARE @memberName sysname

DECLARE curMember CURSOR fast_forward FOR
SELECT MemberName
FROM #t
WHERE MemberName LIKE 'Domain\%'

OPEN curMember
FETCH NEXT FROM curMember INTO @memberName
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_addrolemember 'OldRole', @memberName
    FETCH NEXT FROM curMember INTO @memberName
END
CLOSE curMember
DEALLOCATE curMember
The other is using dynamic SQL:
DECLARE @sql NVARCHAR(MAX) = N''

SELECT @sql = @sql + 'EXEC sp_addrolemember ''OldRole'', ''' + MemberName + '''; '
FROM #t
WHERE MemberName LIKE 'Domain\%'

EXEC sp_executesql @stmt = @sql
As you can see the dynamic SQL version is more compact but requires more effort to maintain.
Remember that after you execute either statement, the data you extracted from sp_helprolemember into a table is no longer up to date, and should probably be refreshed.
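Refreshing the snapshot is just a matter of re-running the insert (a minimal sketch, reusing the #t table from above):

truncate table #t
insert #t
exec sp_helprolemember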
You can use Excel to generate SQL queries - I know it sounds lame but it is very simple and powerful. It is especially well-suited for tasks that have to be performed once or only from time to time.
Copy results from Management Studio to Excel.
Remove the rows and columns that you don't need.
Use a formula in column B (e.g. ="EXEC sp_dropsrvrolemember '"&A1&"', 'sysadmin'") to generate queries for values stored in column A (the formula can of course reference more than one column with input data and generate really complicated queries).
Copy generated queries from Excel to Management Studio.
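If you'd rather stay inside Management Studio, the same kind of command batch can be generated in T-SQL (a sketch, assuming the accounts were captured into the #t table from the earlier answer):

SELECT 'EXEC sp_droprolemember ''OldRole'', ''' + MemberName + ''';'
FROM #t
WHERE MemberName LIKE 'Domain\%'
-- copy the generated rows into a query window, review them, then execute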
It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure.
However, I can reference a temp table created by one dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.
A simple 2 table scenario:
I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key that identifies the parent Order. An Order can have 1 to n Items.
I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Order table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned in the query as well as the parent Order.
Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.
The first reason is that I need to query all of the columns in the parent Order table, and if I did a single query joining the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet; I build my objects by hand.) The second reason I would like to return two tables instead of one is because I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I could reuse the client code to populate my custom Order and Client objects from the 2 datatables returned.
What I was hoping to do was this:
Construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table as specified by the custom filter created on the WinForm fat-client app. The SQL built on the client would have looked something like this:
TempSQL = "
INSERT INTO #ItemsToQuery (OrderId, ItemId)
SELECT Orders.OrderId, Items.ItemId
FROM
    Orders, Items
WHERE
    Orders.OrderId = Items.OrderId AND
    /* Some unpredictable Order filters go here */
    AND
    /* Some unpredictable Items filters go here */
"
Then, I would call a stored procedure,
CREATE PROCEDURE GetItemsAndOrders(@tempSql AS NVARCHAR(MAX))
AS
EXECUTE (@tempSql) -- to create the #ItemsToQuery table

SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
The problem with this approach is that #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQLs and if I change the static SQLs to dynamic, no results are passed back to the fat client.
3 workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using a static SQL that, because it was not created by dynamic SQL, could then be queried without issue. (I could also investigate passing the new Table type param instead of XML, as sketched below.) However, I would like to avoid passing up potentially large lists to a stored procedure.
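For reference, the table-valued parameter variant mentioned above might look something like this (a sketch only; it assumes SQL Server 2008+, and the type name ItemIdList is hypothetical):

-- one-time setup: a table type to carry the qualifying ItemIds
CREATE TYPE ItemIdList AS TABLE (ItemId INT PRIMARY KEY);
GO
CREATE PROCEDURE GetItemsAndOrdersTvp (@ids ItemIdList READONLY)
AS
BEGIN
    SELECT * FROM Items WHERE ItemId IN (SELECT ItemId FROM @ids);
    SELECT * FROM Orders WHERE OrderId IN (SELECT DISTINCT i.OrderId
                                           FROM Items i
                                           JOIN @ids d ON d.ItemId = i.ItemId);
END
GO

The client would fill the parameter from a DataTable (ADO.NET SqlDbType.Structured), which avoids the XML round trip, though it still means passing the whole list up to the server.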
2) I could perform all the queries from the client.
The first would be something like this:
SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
This still provides me with the ability to reuse my client sided object-population code because the Orders and Items continue to be returned in two different tables.
I have a feeling, too, that I might have some options using a Table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one.
If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
You need to create your table first; then it will be available in the dynamic SQL.
This works:
CREATE TABLE #temp3 (id INT)
EXEC ('insert #temp3 values(1)')
SELECT *
FROM #temp3
This will not work:
EXEC (
'create table #temp2 (id int)
insert #temp2 values(1)'
)
SELECT *
FROM #temp2
In other words:
Create temp table
Execute proc
Select from temp table
Here is complete example:
CREATE PROC prTest2 @var VARCHAR(100)
AS
EXEC (@var)
GO
CREATE TABLE #temp (id INT)
EXEC prTest2 'insert #temp values(1)'
SELECT *
FROM #temp
1st Method - Enclose multiple statements in the same Dynamic SQL Call:
DECLARE @DynamicQuery NVARCHAR(MAX)

SET @DynamicQuery = 'Select * into #temp from (select * from tablename) alias
select * from #temp
drop table #temp'

EXEC sp_executesql @DynamicQuery
2nd Method - Use Global Temp Table:
(Careful: you need to take extra care with global temp tables, since they are visible to every session.)
IF OBJECT_ID('tempdb..##temp2') IS NULL
BEGIN
EXEC (
'create table ##temp2 (id int)
insert ##temp2 values(1)'
)
SELECT *
FROM ##temp2
END
Don't forget to drop the ##temp2 object manually once you're done with it:
IF (OBJECT_ID('tempdb..##temp2') IS NOT NULL)
BEGIN
DROP Table ##temp2
END
Note: Don't use this second method if you don't know the full structure of the database.
I had the same issue that @Muflix mentioned. When you don't know the columns being returned, or they are being generated dynamically, what I've done is create a global table with a unique id and then delete it when I'm done with it. That looks something like what's shown below:
DECLARE @DynamicSQL NVARCHAR(MAX)
DECLARE @DynamicTable VARCHAR(255) = 'DynamicTempTable_' + CONVERT(VARCHAR(36), NEWID())
DECLARE @DynamicColumns NVARCHAR(MAX)

--Get "@DynamicColumns", example: SET @DynamicColumns = '[Column1], [Column2]'

SET @DynamicSQL = 'SELECT ' + @DynamicColumns + ' INTO [##' + @DynamicTable + ']' +
                  ' FROM [dbo].[TableXYZ]'

EXEC sp_executesql @DynamicSQL

SET @DynamicSQL = 'IF OBJECT_ID(''tempdb..##' + @DynamicTable + ''' , ''U'') IS NOT NULL ' +
                  ' BEGIN DROP TABLE [##' + @DynamicTable + '] END'

EXEC sp_executesql @DynamicSQL
Certainly not the best solution, but this seems to work for me.
I would strongly suggest you have a read through http://www.sommarskog.se/arrays-in-sql-2005.html
Personally I like the approach of passing a comma delimited text list, then parsing it with a text-to-table function and joining to it, as sketched below. The temp table approach can work if you create it first in the connection, but it feels a bit messier.
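A minimal sketch of that pattern, assuming SQL Server 2016+ (which ships the built-in STRING_SPLIT; on older versions you'd use a hand-rolled split function like the ones in the linked article), and borrowing the Items table from the question:

DECLARE @ids NVARCHAR(MAX) = N'101,205,317' -- hypothetical comma-delimited ItemId list

SELECT i.*
FROM Items AS i
JOIN STRING_SPLIT(@ids, ',') AS s
    ON i.ItemId = CAST(s.value AS INT)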
Result sets from dynamic SQL are returned to the client. I have done this quite a lot.
You're right about issues with sharing data through temp tables and variables and things like that between the SQL and the dynamic SQL it generates.
I think in trying to get your temp table working, you have probably got some things confused, because you can definitely get data from a SP which executes dynamic SQL:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
    DECLARE @sql AS VARCHAR(MAX) = 'SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + ''''
    EXEC (@sql)
END
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
Also:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
    DECLARE @sql AS VARCHAR(MAX) = 'SELECT * INTO #temp FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + '''; SELECT * FROM #temp;'
    EXEC (@sql)
END
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO