Use Temp Table to merge query results from different DBs - sql-server

I need to extract data from different DBs into a single table. The DBs are all on the same server and instance, and they all have the same structure. One of the columns will be the DB name; the others come from the same table in each database.
I could write a query that extracts the data into a separate table per database, but I would like to merge all the results into a single table.
I tried using a temp table to collect the individual results, but the result is an empty table. It seems that #tmpTable is emptied after each query. Here is my attempt:
CREATE TABLE [dbo].#tmpTable ([DbName] VARCHAR(MAX), [Example] VARCHAR(MAX))
EXECUTE sp_MSForEachDB
'USE ?;
DECLARE @ExampleQuery AS NVARCHAR(MAX) =
''SELECT DB_NAME() AS [DbName], [Example]
INTO #tmpTable
FROM [tConfig]''
EXEC sp_executesql @ExampleQuery;'
SELECT * FROM #tmpTable
DROP TABLE #tmpTable
The actual query is more complex and uses PIVOT and other commands, but I think this example is enough if someone knows how to get the desired result.

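You can make this work by creating the temp table once in the outer scope and using INSERT INTO from inside the dynamic SQL; a table created by SELECT ... INTO within the dynamic batch is dropped as soon as that batch returns: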
CREATE TABLE [dbo].#tmpTable ([DbName] VARCHAR(MAX))
EXECUTE sp_MSForEachDB
'USE ?;
DECLARE @ExampleQuery AS NVARCHAR(MAX) =
''INSERT INTO #tmpTable SELECT DB_NAME() AS [DbName]''
EXEC sp_executesql @ExampleQuery;'
SELECT * FROM #tmpTable
DROP TABLE #tmpTable
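The same pattern extended to the two-column table from the question - a sketch assuming every database really has a [tConfig] table (the OBJECT_ID guard skips databases that don't):
CREATE TABLE #tmpTable ([DbName] SYSNAME, [Example] VARCHAR(MAX))
EXECUTE sp_MSForEachDB
'USE ?;
IF OBJECT_ID(''tConfig'') IS NOT NULL
    INSERT INTO #tmpTable ([DbName], [Example])
    SELECT DB_NAME(), [Example] FROM [tConfig];'
SELECT * FROM #tmpTable
DROP TABLE #tmpTable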

Related

Passing variables into Openquery and SQL Injection

I have two databases (A and B), both SQL Server, on different servers. These databases are connected with a linked server.
I have to be able to insert rows with distinct values into a table in database B using a stored procedure on database A. This stored procedure uses OPENQUERY in order to do the INSERT statements into database B.
I know OPENQUERY does not accept variables for its arguments, and it has a specific syntax for doing an insert into a linked DB:
INSERT OPENQUERY (OracleSvr, 'SELECT name FROM joe.titles')
VALUES ('NewTitle');
Nevertheless, the MS documentation shows a way to pass variables into a linked server query like this:
DECLARE @TSQL varchar(8000), @VAR char(2)
SELECT @VAR = 'CA'
SELECT @TSQL = 'SELECT * FROM OPENQUERY(MyLinkedServer,''SELECT * FROM pubs.dbo.authors WHERE state = ''''' + @VAR + ''''''')'
EXEC (@TSQL)
And here is the issue. Let's say the table in database B has two columns, ID (int) and VALUE (nvarchar(max)).
Thus, for a stored procedure to be able to insert different values into a table in database B, my procedure looks like this:
CREATE PROCEDURE openquery_insert
    @var1 int,
    @var2 nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN
        DECLARE @SQL_string nvarchar(max)
        SET @SQL_string = 'insert openquery(LINKEDSERVER, ''SELECT ID, VALUE from TABLE'') VALUES ('
            + CAST(@var1 AS NVARCHAR(5)) + ', '
            + '''' + CAST(@var2 AS NVARCHAR(max)) + ''''
            + ')'
        EXEC sp_executesql @SQL_string
    END
END
The procedure can be called as
EXEC openquery_insert @var1 = 1, @var2 = 'asdf'
But if @var2 were to be ' DROP TABLE B--, a SQL injection attack would be successful.
Is there a way to prevent SQL injection with OPENQUERY?
I do not control the values of the arguments @var1 and @var2 when the procedure gets called.
I am not able to create functions or stored procedures on database B.
I have to use OPENQUERY; I cannot use four-part naming to do the insert.
I have to use a stored procedure on DB A.
Thanks!
The "hacky" way is to insert your arguments into a local table first and then do the INSERT ... SELECT through OPENQUERY.
This is all straightforward if your SP is only ever called by one process in a synchronous fashion: you can have one table where you insert the values, then execute OPENQUERY to grab them, and then delete those values from the table.
If concurrency is a requirement then you have to write logic that creates uniquely named tables, etc., which quickly becomes somewhat messy.
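A minimal sketch of that approach, assuming a single synchronous caller, a permanent staging table dbo.InsertStaging (ID int, VALUE nvarchar(max)) on database A, and a target table dbo.TargetTable on B (the staging and target names are placeholders):
CREATE PROCEDURE openquery_insert_safe
    @var1 int,
    @var2 nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    -- Parameterized insert into the local staging table: no string
    -- concatenation, hence no injection surface.
    INSERT INTO dbo.InsertStaging (ID, VALUE) VALUES (@var1, @var2);
    -- The OPENQUERY text is a string literal, so no variables are spliced in.
    INSERT OPENQUERY(LINKEDSERVER, 'SELECT ID, VALUE FROM dbo.TargetTable')
    SELECT ID, VALUE FROM dbo.InsertStaging;
    DELETE FROM dbo.InsertStaging;
END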

Why doesn't this alter after insert statement work?

I have a stored procedure with dynamic SQL embedded as below:
delete from #temp_table
begin tran
set @sql = 'select * into #temp_table from sometable'
exec (@sql)
commit tran
begin
set @sql = 'alter table #temp_table add column1 float'
exec(@sql)
end
update #temp_table
set column1 = column1*100
select *
into Primary_Table
from #temp_table
However, I noticed that all the statements work but the ALTER does not. When I run the procedure, I get an error message: "Invalid Column name column1"
What am I doing wrong here?
EDIT: I realized I didn't mention that the first insert is dynamic SQL as well. Updated it.
Alternate approach tried but throws same error:
delete from #temp_table
begin tran
set @sql = 'select * into #temp_table from sometable'
exec (@sql)
commit tran
alter table #temp_table add column1 float
update #temp_table set column1 = column1*100
Local temporary tables exhibit something like dynamic scope: when you create a local temporary table inside a call to EXEC, it goes out of scope (and out of existence) on the return from EXEC.
EXEC (N'create table #x (c int)')
GO
SELECT * FROM #x
Msg 208, Level 16, State 0, Line 4
Invalid object name '#x'.
The SELECT is parsed after the dynamic SQL that creates #x has run, but #x is no longer there because it was dropped on exit from EXEC.
Update
Depending on the situation there are different ways to work around the issue.
Put everything into the same string:
DECLARE @Sql NVARCHAR(MAX) = N'SELECT 1 AS source INTO #table_name;
ALTER TABLE #table_name ADD target float;
UPDATE #table_name SET target = 100 * source;';
EXEC (@Sql);
Create the table ahead of the dynamic SQL that populates it:
CREATE TABLE #table_name (source INT);
EXEC (N'insert into #table_name (source) select 1;');
ALTER TABLE #table_name ADD target FLOAT;
UPDATE #table_name SET target = 100 * source;
In this option, the ALTER TABLE statement can be removed by adding the additional column to the CREATE TABLE statement. Note also that the ALTER TABLE and UPDATE statements could be in separate invocations of dynamic SQL, if that were beneficial to your context.
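A sketch of that variant: the ALTER and the UPDATE each run in their own dynamic batch, which works because #table_name itself was created in the outer scope.
CREATE TABLE #table_name (source INT);
EXEC (N'insert into #table_name (source) select 1;');
EXEC (N'alter table #table_name add target float;');
EXEC (N'update #table_name set target = 100 * source;');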
1) It should be ALTER TABLE #temp..., not ALTER #temp.
2) Even if #1 weren't an issue, you're adding column1 as a NULLable column with no default value and, in the next statement, setting its value to itself * 100...
NULL * 100 = NULL
3) Why are you using dynamic SQL to alter the #temp table? It can just as easily be done with a regular ALTER TABLE statement... or, better yet, the column can be included in the original table definition.
This is because the #temp_table reference in the outer batch is a different temp table than the one created in dynamic SQL. Consider:
use tempdb
drop table if exists sometable
drop table if exists #temp_table
go
create table sometable(id int, a int)
create table #temp_table(id int, b int)
exec( 'select * into #temp_table from sometable; select * from #temp_table;' )
select * from #temp_table
Outputs
id a
----------- -----------
(0 rows affected)
id b
----------- -----------
(0 rows affected)
A temp table created in a nested batch is scoped to the nested batch and automatically dropped afterwards. A "nested batch" is either a dynamic SQL query or a stored procedure. This behavior is explained in the CREATE TABLE documentation, although it only mentions stored procedures; dynamic SQL behaves the same way.
If you create the temp table in a top-level batch, you can access it in dynamic SQL; you just can't create a new temp table in dynamic SQL and see it in the outer batch or in subsequent same-level dynamic SQL. So use INSERT INTO instead of SELECT INTO.
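A minimal sketch of that pattern, reusing sometable from the demo above:
drop table if exists #temp_table
create table #temp_table(id int, a int)
exec( 'insert into #temp_table select id, a from sometable;' )
select * from #temp_table -- now returns the rows inserted by the dynamic SQL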

T-SQL : how to use exec to insert into table not previously created?

I have the following code:
Declare @strSQL varchar(max);
set @strSQL = N'REALLY LONG QUERY';
exec (@strSQL) at <LinkedServerName>;
I did this because the query is longer than 8000 characters (it cannot be changed; it simply has too many columns). It works, but I need to insert the results into a temporary table that does not yet exist, and I do not want to run a CREATE TABLE beforehand. So, where should I write INTO tmp_table for correct syntax?
For example, this does not work:
exec (@strSQL) INTO tmp_table at <LinkedServerName>;
Declare @strSQL varchar(max);
if object_id('tempdb..MyTempTable') is not null drop table tempdb.dbo.MyTempTable
set @strSQL = N'select * into tempdb.dbo.MyTempTable from (REALLY LONG QUERY) k';
exec (@strSQL) at <LinkedServerName>;
-- to check the table's existence from the linked server, add the check into @strSQL
or
replace tempdb.dbo.MyTempTable with a global temp table ##Table (for the local server)
or
decompose and normalize your model: if you have a query with thousands of characters, you don't need a dynamic query; rethink your solution - for example, use views or pivot your output.

How to insert into table the results of a dynamic query when the schema of the result is unknown a priori?

Observe the following simple SQL code:
CREATE TABLE #tmp (...) -- Here comes the schema
INSERT INTO #tmp
EXEC(@Sql) -- @Sql is a dynamic query generating a result with a known schema
All is good, because we know the schema of the result produced by @Sql.
But what if the schema is unknown? In that case I use PowerShell to generate a SQL query like this:
SET @Sql = '
SELECT *
INTO ##MySpecialAndUniquelyNamedGlobalTempTable
FROM ($Query) x
'
EXEC(@Sql)
(I omit some details, but the "spirit" of the code is preserved)
And it works fine, except that there is a severe limitation on what $Query can be - it must be a single SELECT statement.
This is not very good for me; I would like to be able to run any SQL script that way. The problem is that I can no longer concatenate it inside FROM ( ... ); it must be executed by EXEC or sp_executesql. But then I have no idea how to collect the results into a table, because I have no idea of the schema of that table.
Is this possible in SQL Server 2012?
Motivation: we have many QA databases across different SQL Servers, and more often than not I find myself running queries on all of them in order to locate the database most likely to yield the best results for my tests. Alas, I am only able to run single SELECT statements, which is inconvenient.
We use a stored procedure and OPENROWSET for this purpose.
First create a stored procedure based on the query you need, then use OPENROWSET to get the data into a temp table:
USE Test
DECLARE @sql nvarchar(max),
        @query nvarchar(max)
SET @sql = N'Some query'
IF OBJECT_ID(N'SomeSPname') IS NOT NULL DROP PROCEDURE SomeSPname
SET @query = N'
CREATE PROCEDURE SomeSPname
AS
BEGIN
' + @sql + '
END'
EXEC sp_executesql @query
USE tempdb
IF OBJECT_ID(N'#temp') IS NOT NULL DROP TABLE #temp
SELECT *
INTO #temp
FROM OPENROWSET(
'SQLNCLI',
'Server=SERVER\INSTANCE;Database=Test;Trusted_Connection=yes;',
'EXEC dbo.SomeSPname')
SELECT *
FROM #temp
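One caveat: the ad hoc OPENROWSET connection string above only works when the 'Ad Hoc Distributed Queries' server option is enabled, e.g.:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;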

SQL Server : update records in dynamically generated tables using parameters in stored procedure

I have to create a stored procedure to which I will pass tableName, columnName, and id as parameters. The task is to select the records from the passed table where the passed column has the passed id, and if records are found, update them with some fixed data. I also have to implement a transaction so that we can roll back in case of any error.
There are hundreds of tables in the database and each table has a different schema, which is why I have to pass columnName.
I don't know what the best approach for this is. I am trying to select the records into a temp table so that I can manipulate them as per the requirement, but it's not working.
I am using this code:
ALTER PROCEDURE [dbo].[GetRecordsFromTable]
    @tblName nvarchar(128),
    @keyCol varchar(100),
    @key int = 0
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        --DROP TABLE #TempTable;
        DECLARE @sqlQuery nvarchar(4000);
        SET @sqlQuery = 'SELECT * FROM ' + @tblName + ' WHERE ' + @keyCol + ' = 2';
        PRINT @sqlQuery;
        INSERT INTO #TempTable
        EXEC sp_executesql @sqlQuery,
            N'@keyCol varchar(100), @key int', @keyCol, @key;
        SELECT * FROM #TempTable;
    END TRY
    BEGIN CATCH
        EXECUTE [dbo].[uspPrintError];
    END CATCH;
END
I get an error:
Invalid object name '#TempTable'
I am also not sure if this is the best approach to get the data and then update it.
If you absolutely must make that work then I think you'll have to use a global temp table. You'll need to check whether it exists before running your dynamic SQL, and clean it up afterwards; with a fixed table name you'll run into problems with other connections. Inside the dynamic SQL you would add select * into ##temptable from .... Actually, I'm not even sure why you want the temp table in the first place - can't the dynamic SQL just return the results?
On the surface it seems like a solid idea to have one generic procedure for returning data with a couple of parameters to drive it, but, without a lot of explanation, it's just not the way databases are designed to work.
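For what it's worth, a minimal sketch of the "just return the results" variant, keeping the asker's parameter names and adding QUOTENAME() to reduce the injection surface:
DECLARE @sqlQuery nvarchar(4000) =
    N'SELECT * FROM ' + QUOTENAME(@tblName) +
    N' WHERE ' + QUOTENAME(@keyCol) + N' = @key';
EXEC sp_executesql @sqlQuery, N'@key int', @key;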
You should create the temp table first:
IF OBJECT_ID('tempdb..##TempTable') IS NOT NULL
    DROP TABLE ##TempTable
CREATE TABLE ##TempTable (...) -- define the columns here to match the SELECT
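Alternatively, a sketch that lets the dynamic SQL create the global temp table itself, so the schema need not be known up front (same assumptions: the asker's parameter names, one caller at a time):
IF OBJECT_ID(N'tempdb..##TempTable') IS NOT NULL DROP TABLE ##TempTable;
DECLARE @sqlQuery nvarchar(4000) =
    N'SELECT * INTO ##TempTable FROM ' + QUOTENAME(@tblName) +
    N' WHERE ' + QUOTENAME(@keyCol) + N' = @key';
EXEC sp_executesql @sqlQuery, N'@key int', @key;
SELECT * FROM ##TempTable; -- visible here because global temp tables span scopes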
