Dynamic equivalent of INSERT INTO using EXEC (SQL Server)

I am trying to create the equivalent of the following:
DROP TABLE IF EXISTS [jerry].[dbo].[purchases]
SELECT * INTO [jerry].[dbo].[purchases] FROM OPENQUERY(OLAP, '
sql code
');
using EXEC (see this related question).
That said, according to multiple sources, I am unable to use SELECT * INTO with EXEC.
I have found some other resources where I can create a new table using EXEC; however, I do not know the exact structure of the resulting table (number of columns, column types, column names, etc.), so it needs to be dynamic.
The following code gives me exactly the result set I want, but I have not been able to figure out how to create the purchases table from the data it returns:
-- EXEC master.dbo.sp_serveroption @server=N'OLAP', @optname=N'rpc out', @optvalue=N'true'
DECLARE @sqlcode VARCHAR(MAX)
SET @sqlcode = 'code'
EXEC (@sqlcode) AT OLAP
I have tried using SELECT * INTO [jerry].[dbo].[purchases] FROM OPENROWSET('EXEC (@sqlcode) AT OLAP'), but I get an error of Incorrect syntax near ')'.
I have also tried (just to see):
CREATE TABLE [jerry].[dbo].[purchases] ([Transaction_Date] DATE, [Requirement_Date] DATE, [Element] NVARCHAR(256), [Trx_Quantity] NVARCHAR(256), [Part_Number] NVARCHAR(256), [NHA_Part_Number] NVARCHAR(256), [Group] NVARCHAR(256), [Details] NVARCHAR(256));
INSERT INTO [jerry].[dbo].[purchases]
EXEC (@sqlcode) AT OLAP
And I get the following error:
OLE DB provider "OraOLEDB.Oracle" for linked server "OLAP" returned message "Unable to enlist in the transaction.".
Msg 7391, Level 16, State 2, Line 208
The operation could not be performed because OLE DB provider "OraOLEDB.Oracle" for linked server "OLAP" was unable to begin a distributed transaction.
Apologies if this is an easy logic question -- I feel like I have exhausted my research options trying to find a solution, as I am very new to SQL Server. I am working in SSMS 2017 as well, if that helps.

The answer can be found at this post:
Create new table with results from EXEC
DECLARE @sqlcode VARCHAR(MAX)
SET @sqlcode = 'sqlcode'
TRUNCATE TABLE [jerry].[dbo].[purchases]
INSERT INTO [jerry].[dbo].[purchases]
EXEC (@sqlcode) AT OLAP
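If the table might not exist yet, here is a sketch that also handles the first run, reusing the OPENQUERY form from the question to derive the structure (this assumes the remote query text fits in OPENQUERY's string literal):
DECLARE @sqlcode VARCHAR(MAX)
SET @sqlcode = 'sqlcode'
IF OBJECT_ID('jerry.dbo.purchases') IS NULL
    -- first run: SELECT ... INTO derives the column list and types from the remote result
    SELECT * INTO [jerry].[dbo].[purchases] FROM OPENQUERY(OLAP, 'sql code')
ELSE
BEGIN
    -- later runs: keep the structure, replace the data
    TRUNCATE TABLE [jerry].[dbo].[purchases]
    INSERT INTO [jerry].[dbo].[purchases]
    EXEC (@sqlcode) AT OLAP
END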

Related

Why is the table inside a non-met IF branch validated before the condition is met, resulting in an error if the table does not exist?

I am trying to execute a procedure with a parameter; depending on the value of the parameter, one of three IF conditions determines which query will be executed against a linked server.
But when I execute the procedure, it seems to check whether the tables inside all of the IF branches exist before starting the query. I know that only one of the tables exists -- that is why I am using the parameter -- so it shouldn't fail, but I still get the following error:
Msg 7314, Level 16, State 1, Line 25
The OLE DB provider "Microsoft.ACE.OLEDB.16.0" for linked server "LinkedServer" does not contain the table "D100". The table either does not exist or the current user does not have permissions on that table.
So in the code below, assume that the parameter is 300; I then get the message above.
Do you know if there is a way to keep the query from checking all the tables, and only check the one whose IF condition is met?
ALTER PROCEDURE [dbo].[Import_data]
    @p1 int = 0
AS
BEGIN
    SET NOCOUNT ON;
    IF (@p1 = 100)
    BEGIN
        DROP TABLE IF EXISTS Table1
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table1
        FROM [LinkedServer]...[D100]
    END
    IF (@p1 = 200)
    BEGIN
        DROP TABLE IF EXISTS Table2
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table2
        FROM [LinkedServer]...[D200]
    END
    IF (@p1 = 300)
    BEGIN
        DROP TABLE IF EXISTS Table3
        SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
        INTO Table3
        FROM [LinkedServer]...[D300]
    END
END
I have tried googling it, but I mostly found workarounds such as running a sub-procedure, which I don't think is a really clean solution.
Okay, it seems that I found the answer. Even with an IF statement, SQL Server validates the entire query before executing it, so the way to overcome this is to use a dynamic SQL query.
"SQL Server dynamic SQL is a programming technique that allows you to construct SQL statements dynamically at runtime. It allows you to create more general-purpose and flexible SQL statements because the full text of the SQL statement may be unknown at compilation."
This is how the query looks now; instead of multiple IF statements, the query changes dynamically depending on the parameter.
DECLARE @SQL NVARCHAR(MAX)
SET @SQL = N'DROP TABLE IF EXISTS Table1;
SELECT [Field1]
    ,[Field2]
    ,[Field3]
    ,[Field4]
    ,[Field5]
    ,[Field6]
INTO Table1
FROM [LinkedServer]...[D' + CONVERT(nvarchar(3), @p1) + N']'
EXEC sp_executesql @SQL
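Since @p1 is concatenated into the statement, rejecting unexpected values up front is worth adding; it also produces a clearer failure than the provider's "table does not exist" message. A minimal sketch, assuming only the three suffixes from the original procedure are valid:
IF @p1 NOT IN (100, 200, 300)
BEGIN
    -- fail fast with a readable message instead of a linked-server metadata error
    RAISERROR('Unsupported dataset suffix: %d', 16, 1, @p1);
    RETURN;
END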

Insert stored procedure data in a temp table in SQL Server

I am trying to insert the output of a stored procedure into a temp table, like below:
CREATE TABLE #CustomTable3HTML
(
ItemId varchar(30),
ItemId1 varchar(30)
)
INSERT INTO #CustomTable3HTML
EXEC SalesDeals.dbo.prGetDealProposalDetail 17100102, 1
but I am getting this error
Msg 8164, Level 16, State 1, Procedure prGetDealProposalDetail, Line 138 [Batch Start Line 1]
An INSERT EXEC statement cannot be nested.
I figured this is because the stored procedure itself already contains an INSERT ... EXEC, and I found out that this construct can be used only once in the calling chain.
So I started looking for other options and found OPENROWSET, which I am using as below:
SELECT *
INTO #CustomTable3HTML
FROM OPENROWSET('SQLOLEDB','Server=Demo\Demo;Trusted_Connection=Yes;Database=SalesDeals',
'SET NOCOUNT ON;SET FMTONLY OFF;EXEC SalesDeals.dbo.prGetDealProposalDetail 17100102,1')
I am getting an error when I run this SQL command
Access to the remote server is denied because no login-mapping exists.
It works fine when I use a higher-level account like sysadmin, but fails with the other account, which is a normal db owner on the database where I am running this SQL.
There is a workaround for this. It's not beautiful, but it will work.
In your outer query, define the table:
CREATE TABLE #CustomTable3HTML
(
ItemId varchar(30),
ItemId1 varchar(30)
)
Change the procedure by adding the following code at the end:
IF OBJECT_ID('tempdb..#CustomTable3HTML') IS NOT NULL
BEGIN
INSERT INTO #CustomTable3HTML
SELECT ....
END
ELSE
BEGIN
SELECT ....
END
After executing the stored procedure, you will have the data in the table.
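With the two columns from the question's #CustomTable3HTML, the tail of prGetDealProposalDetail would look roughly like this (the elided SELECTs stand for whatever final query the procedure already runs):
IF OBJECT_ID('tempdb..#CustomTable3HTML') IS NOT NULL
BEGIN
    INSERT INTO #CustomTable3HTML (ItemId, ItemId1)
    SELECT .... -- the procedure's existing final query
END
ELSE
BEGIN
    SELECT .... -- unchanged behavior when the procedure is called directly
END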

How to insert into table the results of a dynamic query when the schema of the result is unknown a priori?

Observe the following simple SQL code:
CREATE TABLE #tmp (...) -- Here comes the schema
INSERT INTO #tmp
EXEC(@Sql) -- @Sql is a dynamic query generating a result with a known schema
All is good, because we know the schema of the result produced by @Sql.
But what if the schema is unknown? In this case I use PowerShell to generate a SQL query like this:
SET @Sql = '
SELECT *
INTO ##MySpecialAndUniquelyNamedGlobalTempTable
FROM ($Query) x
'
EXEC(@Sql)
(I omit some details, but the "spirit" of the code is preserved)
And it works fine, except that there is a severe limitation to what $Query can be - it must be a single SELECT statement.
This is not very good for me; I would like to be able to run any SQL script like that. The problem is that I can no longer concatenate it after FROM (; it must be executed by EXEC or sp_executesql. But then I have no idea how to collect the results into a table, because I do not know the schema of that table.
Is this possible in SQL Server 2012?
Motivation: we have many QA databases across different SQL Server instances, and more often than not I find myself running queries on all of them to locate the database most likely to yield the best results for my tests. Alas, I am only able to run single SELECT statements, which is inconvenient.
We use a stored procedure and OPENROWSET for this purpose.
First create the SP based on the query you need, then use OPENROWSET to get the data into a temp table:
USE Test
DECLARE @sql nvarchar(max),
        @query nvarchar(max)
SET @sql = N'Some query'
IF OBJECT_ID(N'SomeSPname') IS NOT NULL DROP PROCEDURE SomeSPname
SET @query = N'
CREATE PROCEDURE SomeSPname
AS
BEGIN
' + @sql + N'
END'
EXEC sp_executesql @query
USE tempdb
IF OBJECT_ID(N'#temp') IS NOT NULL DROP TABLE #temp
SELECT *
INTO #temp
FROM OPENROWSET(
'SQLNCLI',
'Server=SERVER\INSTANCE;Database=Test;Trusted_Connection=yes;',
'EXEC dbo.SomeSPname')
SELECT *
FROM #temp
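Note that ad hoc OPENROWSET connections are disabled by default; if the server complains about ad hoc access rather than login mappings, the feature can be switched on as follows (requires ALTER SETTINGS permission):
-- enable ad hoc distributed queries so OPENROWSET can open its own connection
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;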

Use the result of a system stored procedure as a queryable table

Note: the top linked question does not solve the problem for system stored procedures, but it's close. With the help of the commenters, I came to a working answer.
Trying to use statements such as the following for sp_spaceused throws an error:
SELECT * INTO #tblOutput exec sp_spaceused 'Account'
SELECT * FROM #tblOutput
The errors:
Must specify table to select from.
and:
An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name.
When I fully declare a table variable, it works as expected, so it seems to me that the stored procedure does return an actual table.
CREATE TABLE #tblOutput (
name NVARCHAR(128) NOT NULL,
rows CHAR(11) NOT NULL,
reserved VARCHAR(18) NOT NULL,
data VARCHAR(18) NOT NULL,
index_size VARCHAR(18) NOT NULL,
unused VARCHAR(18) NOT NULL)
INSERT INTO #tblOutput exec sp_spaceused 'Response'
SELECT * FROM #tblOutput
Why is it not possible to use a temp table or table variable with the result set of EXECUTE sp_xxx? Or: does a more compact expression exist than having to predefine the full table each time?
(Incidentally, and off-topic: googling for the exact term SELECT * INTO #tmp exec sp_spaceused at the time of writing returned exactly one result.)
TL;DR: use SET FMTONLY OFF with OPENQUERY, details below.
It appears that the link provided by Daniel E. is only part of the solution. For instance, if you try:
-- no need to use sp_addlinkedserver
-- must fully specify sp_, because default db is master
SELECT * FROM OPENQUERY(
[SERVERNAME\SQL2008],
'exec somedb.dbo.sp_spaceused ''Account''')
you will receive the following error:
The OLE DB provider "SQLNCLI10" for linked server "LOCALSERVER\SQL2008" supplied inconsistent metadata for a column. The name was changed at execution time.
I found the solution through this post, and then a blog post on OPENQUERY, which in turn told me that up to SQL Server 2008 you need to use SET FMTONLY OFF. The final solution, which is surprisingly simple (and easier to accomplish since there is no need to specify a loopback linked server), is this:
SELECT * FROM OPENQUERY(
[SERVERNAME\SQL2008],
'SET FMTONLY OFF
EXEC somedb.dbo.sp_spaceused ''Account''')
In addition, if you haven't enabled DATA ACCESS, you may get the following error:
Server 'SERVERNAME\SQL2008' is not configured for DATA ACCESS.
This can be remedied by running the following command:
EXEC sp_serveroption 'SERVERNAME\SQL2008', 'DATA ACCESS', TRUE
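You can check the current state of the option first through the catalog view (a small sketch):
SELECT name, is_data_access_enabled
FROM sys.servers
WHERE name = 'SERVERNAME\SQL2008';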
We cannot SELECT from a stored procedure, which is why SELECT * INTO ... EXEC sp_ will not work.
To get the result set returned from a stored procedure, we can INSERT INTO a table.
A SELECT INTO statement creates a table on the fly and inserts data from the source table/view/function. The only condition is that the source must exist and you must be able to SELECT from it.
SQL Server doesn't allow you to SELECT from sp_; therefore you can only use the INSERT INTO statement when executing a stored procedure. This means that at run time you can add the returned result set into a table and SELECT from that table at a later stage.
The INSERT INTO statement requires the destination to be an existing table. Therefore, whether you use a temp table, a table variable, or a persistent SQL Server table, you need to create the table first, and only then can you use the syntax:
INSERT INTO #TempTable
EXECUTE sp_Proc
USE [YOUR DATABASE NAME]
CREATE TABLE [YOUR TABLE NAME]
(Database_Name Varchar(128),
DataBase_Size VarChar(128),
unallocated_Space Varchar(128),
reserved Varchar(128),
data Varchar(128),
index_size Varchar(128),
unused Varchar(128)
);
INSERT INTO dbo.[YOUR TABLE NAME]
(
Database_Name,
DataBase_Size,
unallocated_Space,
reserved,
data,
index_size,
unused
)
EXEC sp_spaceused @oneresultset = 1
-- To get it to return everything as one result set, add @oneresultset = 1 at the end and voila, good to go for writing to a table. :)
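The captured numbers can then be read back like any other table (same placeholder name as above); note that @oneresultset is only available on SQL Server 2016 and later:
SELECT * FROM dbo.[YOUR TABLE NAME];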

Linked Server Insert-Select Performance

Assume that I have a table on my local server, Local_Table, and I have another server with another db and table, Remote_Table (the table structures are the same).
Local_Table has data, Remote_Table doesn't. I want to transfer data from Local_Table to Remote_Table with this query:
Insert into RemoteServer.RemoteDb..Remote_Table
select * from Local_Table (nolock)
But the performance is quite slow.
However, when I use SQL Server import-export wizard, transfer is really fast.
What am I doing wrong? Why is it fast with Import-Export wizard and slow with insert-select statement? Any ideas?
The fastest way is to pull the data rather than push it. When the rows are pushed, every row requires a connection, an insert, and a disconnect.
If you can't pull the data because you have a one-way trust relationship between the servers, the workaround is to construct the entire table as a giant T-SQL statement and run it all at once.
DECLARE @xml XML
SET @xml = (
    SELECT 'insert Remote_Table values (' + '''' + isnull(first_col, 'NULL') + ''',' +
           -- repeat for each col
           '''' + isnull(last_col, 'NULL') + '''' + ');'
    FROM Local_Table
    FOR XML path('')
) -- This concatenates all the rows into a single XML object; the empty path keeps <colname></colname> from being wrapped around each value
DECLARE @sql AS VARCHAR(max)
SET @sql = 'set nocount on;' + cast(@xml AS VARCHAR(max)) + 'set nocount off;' -- Converts the XML back to one long string
EXEC ('use RemoteDb;' + @sql) AT RemoteServer
It seems like it's much faster to pull data from a linked server than to push data to a linked server: Which one is more efficient: select from linked server or insert into linked server?
Update: My own, recent experience confirms this. Pull if possible -- it will be much, much faster.
Try this on the other server:
INSERT INTO Local_Table
SELECT * FROM RemoteServer.RemoteDb..Remote_Table
The Import/Export wizard is essentially doing this as a bulk insert, whereas your code is not.
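A variant of the same pull is to hand the whole SELECT to the remote side with OPENQUERY instead of a four-part name, so the linked server executes the query and streams the result back in one go. A sketch, where LocalServer is a hypothetical linked server defined on the destination that points back at the source:
INSERT INTO Remote_Table
SELECT * FROM OPENQUERY(LocalServer, 'SELECT * FROM LocalDb.dbo.Local_Table')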
Assuming that you have a clustered index on the remote table, make sure that you have the same clustered index on the local table, set trace flag 610 globally on your remote server, and make sure the remote database is in SIMPLE or BULK_LOGGED recovery mode.
If your remote table is a heap (which will speed things up anyway), make sure your remote database is in SIMPLE or BULK_LOGGED mode, and change your code to read as follows:
INSERT INTO RemoteServer.RemoteDb..Remote_Table WITH(TABLOCK)
SELECT * FROM Local_Table WITH (nolock)
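For the prerequisites mentioned above, a sketch to run on the remote server (assumes sysadmin rights and that temporarily switching the database to bulk-logged recovery is acceptable):
-- allow minimally logged inserts into indexed tables, instance-wide
DBCC TRACEON (610, -1);
-- take the target database out of FULL recovery for the duration of the load
ALTER DATABASE RemoteDb SET RECOVERY BULK_LOGGED;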
The reason it's so slow to insert into the remote table from the local table is that it inserts a row, checks that it was inserted, then inserts the next row, checks that it was inserted, and so on.
Don't know if you figured this out or not, but here's how I solved this problem using linked servers.
First, I have a LocalDB.dbo.Table with several columns:
IDColumn (int, PK, Auto Increment)
TextColumn (varchar(30))
IntColumn (int)
And I have a RemoteDB.dbo.Table that is almost the same:
IDColumn (int)
TextColumn (varchar(30))
IntColumn (int)
The main difference is that the remote IDColumn isn't set up as an IDENTITY column, so that I can insert into it.
Then I set up a trigger on the remote table that fires on delete:
Create Trigger Table_Del
On Table
After Delete
AS
Begin
Set NOCOUNT ON;
Insert Into Table (IDColumn, TextColumn, IntColumn)
Select IDColumn, TextColumn, IntColumn from MainServer.LocalDB.dbo.table L
Where not exists (Select * from Table R Where L.IDColumn = R.IDColumn)
END
Then when I want to do an insert, I do it like this from the local server:
Insert Into LocalDB.dbo.Table (TextColumn, IntColumn) Values ('textvalue', 123);
Delete From RemoteServer.RemoteDB.dbo.Table Where IDColumn = 0;
--And if I want to clean the table out and make sure it has all the most up to date data:
Delete From RemoteServer.RemoteDB.dbo.Table
By triggering the remote server to pull the data from the local server and then do the insert, I was able to turn a job that took 30 minutes to insert 1258 lines into a job that took 8 seconds to do the same insert.
This does require a linked server connection on both sides, but after that's set up, it works pretty well.
Update:
So in the last few years I've made some changes, and have moved away from the delete trigger as a way to sync the remote table.
Instead I have a stored procedure on the remote server that has all the steps to pull the data from the local server:
CREATE PROCEDURE [dbo].[UpdateTable]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
--Fill Temp table
Insert Into WebFileNamesTemp Select * From MAINSERVER.LocalDB.dbo.WebFileNames
--Fill normal table from temp table
Delete From WebFileNames
Insert Into WebFileNames Select * From WebFileNamesTemp
--empty temp table
Delete From WebFileNamesTemp
END
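One possible refinement, not in the original: wrapping the delete/reload in a transaction so readers never catch WebFileNames empty mid-refresh (a sketch of just that section):
BEGIN TRANSACTION;
    Delete From WebFileNames
    Insert Into WebFileNames Select * From WebFileNamesTemp
COMMIT TRANSACTION;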
And on the local server I have a scheduled job that does some processing on the local tables, and then triggers the update through the stored procedure:
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='true'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='true'
EXEC REMOTESERVER.RemoteDB.dbo.UpdateTable
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='false'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='false'
If you must push data from the source to the target (e.g., for firewall or other permissions reasons), you can do the following:
In the source database, convert the recordset to a single XML string (i.e., multiple rows and columns combined into a single XML string).
Then push that XML over as a single row (as a varchar(max), since the xml type isn't allowed across linked servers in SQL Server).
DECLARE @xml XML
SET @xml = (select * from SourceTable FOR XML path('row'))
Insert into TempTargetTable values (cast(@xml AS VARCHAR(max)))
In the target database, cast the varchar(max) as XML and then use XML parsing to turn that single row and column back into a normal recordset.
DECLARE @X XML = (select '<toplevel>' + ImportString + '</toplevel>' from TempTargetTable)
DECLARE @iX INT
EXEC sp_xml_preparedocument @iX output, @X
insert into TargetTable
SELECT [col1],
       [col2]
FROM OPENXML(@iX, '//row', 2)
WITH ([col1] [int],
      [col2] [varchar](128)
     )
EXEC sp_xml_removedocument @iX
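The same unpacking can also be written with the xml type's nodes()/value() methods, which avoids the sp_xml_preparedocument/sp_xml_removedocument bookkeeping (a sketch using the same hypothetical col1/col2 columns):
DECLARE @X XML = (SELECT '<toplevel>' + ImportString + '</toplevel>' FROM TempTargetTable)
INSERT INTO TargetTable (col1, col2)
SELECT r.value('(col1)[1]', 'int'),
       r.value('(col2)[1]', 'varchar(128)')
FROM @X.nodes('/toplevel/row') AS t(r)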
I've found a workaround. Since I'm not a big fan of GUI tools like SSIS, I've reused a bcp script to unload the table into a CSV and load it back in. Yeah, it's odd that bulk operations are supported for files but not for tables directly. Feel free to edit the following script to fit your needs:
exec xp_cmdshell 'bcp "select * from YourLocalTable" queryout C:\CSVFolder\Load.csv -w -T -S .'
exec xp_cmdshell 'bcp YourAzureDBName.dbo.YourAzureTable in C:\CSVFolder\Load.csv -S yourdb.database.windows.net -U youruser@yourdb.database.windows.net -P yourpass -q -w'
Pros:
No need to define table structures every time.
I've tested it, and it worked way faster than inserting directly through the linked server.
It's easier to manage than XML (which is limited to varchar(max) length anyway).
No need for an extra layer of abstraction (tools like SSIS).
Cons:
Uses the external tool bcp through the xp_cmdshell interface.
Table properties will be lost after ex/im-porting CSV (i.e. datatype, nulls, length, separator within value, etc.).
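If the loss of column metadata matters, bcp can generate a format file once and reuse it on both sides, pinning down types and lengths across the round trip (a sketch; names and paths are placeholders):
exec xp_cmdshell 'bcp YourDb.dbo.YourLocalTable format nul -f C:\CSVFolder\Load.xml -x -w -T -S .'
exec xp_cmdshell 'bcp YourAzureDBName.dbo.YourAzureTable in C:\CSVFolder\Load.csv -f C:\CSVFolder\Load.xml -S yourdb.database.windows.net -U youruser@yourdb.database.windows.net -P yourpass'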
