Problem with SET FMTONLY ON - sql-server

I'm executing stored procedures using SET FMTONLY ON, in order to emulate what our code generator does. However, it seems that the results are cached when executed like this, as I'm still getting a Conversion failed error from a proc that I have just dropped! This happens even when I execute the proc without SET FMTONLY ON.
Can anyone please tell me what's going on here?

Some statements will still be executed, even with SET FMTONLY ON. Your "Conversion failed" error could be from something as simple as a SET variable statement in the stored proc. For example, this returns the metadata for the first query, but throws an exception when it runs the last statement:
SET FMTONLY on
select 1 as a
declare @a int
set @a = 'a'
As for running a dropped procedure, that's a new one to me. SQL Server uses the system tables to determine the object to execute, so it doesn't matter if the execution plan is cached for that object. If you drop it, it is deleted from the system tables, and should never be executable. Could you please query sysobjects (or sys.objects) just before you execute the procedure? I expect you'll find that you haven't dropped it.
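For example, a quick check along these lines (the name dbo.MyProc is only a placeholder for your procedure) shows whether the object still exists immediately before you run it:
--Placeholder name: substitute the procedure you believe you dropped
SELECT name, SCHEMA_NAME(schema_id) AS schema_name, type_desc, modify_date
FROM sys.objects
WHERE name = 'MyProc' AND type = 'P'
--A row here means the procedure was never actually dropped; no rows means it cannot be executed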

This sounds like a client-side error. Do you get the same message when running through SQL Management Studio?
Have you confirmed that there isn't another procedure with the same name that's owned by a different schema/user?
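As a rough sketch ('MyProc' is a placeholder), you can list every procedure with that name across all schemas to rule out a same-named duplicate:
--Placeholder name: replace 'MyProc' with your procedure's name
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc, o.create_date
FROM sys.objects o
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE o.name = 'MyProc' AND o.type = 'P'
--More than one row means the name can resolve differently depending on the caller's default schema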

DDL statements are parsed, but ignored at run time, if SET FMTONLY ON has been executed on the connection. So if you drop a proc, table, etc. while FMTONLY is ON, the statement is parsed but the action is not executed.
Try this to verify:
SET FMTONLY OFF
--Create table to test on
CREATE TABLE TestTable (Column1 INT, Column2 INT)
--insert 1 record
INSERT INTO TestTable (Column1, Column2)
VALUES (1,2)
--validate the record was inserted
SELECT * FROM TestTable
--now set format only to ON
SET FMTONLY ON
--columns are returned, but no data
SELECT * FROM TestTable
--perform DDL statement with FMTONLY ON
DROP TABLE TestTable
--Turn FMTONLY OFF again
SET FMTONLY OFF
--The DROP above was ignored because FMTONLY was ON, so this still returns the row
SELECT * FROM TestTable
--Now the DROP actually executes
DROP TABLE TestTable
--...and this SELECT fails because the table no longer exists
SELECT * FROM TestTable

Related

Why is the table inside a non-met IF branch validated before the condition is evaluated, resulting in an error if the table does not exist?

I am trying to execute a procedure with a parameter, and depending on the value of the parameter, three different IF conditions will be evaluated to decide which query to run against a linked server.
But when I execute the query, it seems to check whether the tables inside all the IF blocks exist before starting. I know that only one of the tables exists, which is why I am using the parameter, so it shouldn't fail. But I still get the following error:
Msg 7314, Level 16, State 1, Line 25
The OLE DB provider "Microsoft.ACE.OLEDB.16.0" for linked server "LinkedServer" does not contain the table "D100". The table either does not exist or the current user does not have permissions on that table.
So in this code, assume that the parameter is 300; then I get the message above.
Do you know if there is a way to stop the query from checking all the tables, and only check the one whose IF condition is met?
ALTER PROCEDURE [dbo].[Import_data]
@p1 int = 0
AS
BEGIN
SET NOCOUNT ON;
IF (@p1 = 100)
BEGIN
DROP TABLE IF EXISTS Table1
SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
INTO Table1
FROM [LinkedServer]...[D100]
END
IF (@p1 = 200)
BEGIN
DROP TABLE IF EXISTS Table2
SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
INTO Table2
FROM [LinkedServer]...[D200]
END
IF (@p1 = 300)
BEGIN
DROP TABLE IF EXISTS Table3
SELECT [Field1], [Field2], [Field3], [Field4], [Field5], [Field6]
INTO Table3
FROM [LinkedServer]...[D300]
END
END
I have tried googling it, but I mostly found workarounds such as running a sub-procedure, which I don't think is a clean solution.
Okay, it seems that I found the answer. Even with an IF statement, SQL Server validates the entire procedure before executing it, so the way to overcome this is to use a dynamic SQL query.
"SQL Server Dynamic SQL is a programming technique that allows you to construct SQL statements dynamically at runtime. It allows you to create more general purpose and flexible SQL statement because the full text of the SQL statements may be unknown at compilation."
This is how the query looks now. So instead of multiple IF statements, the query changes dynamically depending on the parameter.
DECLARE @SQL NVARCHAR(MAX)
SET @SQL = N'DROP TABLE IF EXISTS Table1;
SELECT [Field1]
,[Field2]
,[Field3]
,[Field4]
,[Field5]
,[Field6]
INTO Table1
FROM [LinkedServer]...[D' + CONVERT(nvarchar(3), @p1) + N']'
EXEC sp_executesql @SQL
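Since @p1 is concatenated straight into an object name, it may be worth guarding the value before building the string. This is only a sketch of one possible check, not part of the original answer:
--Reject anything other than the expected source tables before building the dynamic statement
IF @p1 NOT IN (100, 200, 300)
BEGIN
RAISERROR('Unsupported parameter value %d', 16, 1, @p1)
RETURN
END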

SQL Server 2012: get the SQL code that fired a trigger, without DBCC INPUTBUFFER or sys.dm_exec_input_buffer

Is there a way of getting the SQL code that fired a trigger from inside the fired trigger, without using DBCC INPUTBUFFER or sys.dm_exec_input_buffer?
I need this for a trigger that logs the new value, the old value and the statement that made the change in that table.
Even though DBCC INPUTBUFFER resolves the challenge, I cannot use it: capturing its output requires "INSERT INTO ... EXEC", and the trigger is fired by many statements that already use "INSERT INTO ... EXEC", so I would get the error
An INSERT EXEC statement cannot be nested
From my research, sys.dm_exec_input_buffer might do the trick, but I cannot use it since it is only available in SQL Server 2014 SP2 and newer (as mentioned here: Get last command in SQL Server without DBCC INPUTBUFFER), and I am using an older version.
I have tried several ways of solving the problem, but without success: I can only get the currently executing statement, which is the trigger itself, not the SQL statement that fired it.
To see the problem, take a look at the following code:
--Create the table that will have the trigger
CREATE TABLE [dbo].[___testTrigger]
(
[text] [NVARCHAR](50) NOT NULL
) ON [PRIMARY]
GO
CREATE TRIGGER dbo.TestTriggerAuditLog
ON dbo.___testTrigger
AFTER INSERT,DELETE,UPDATE
AS
BEGIN
SET NOCOUNT ON;
--Version 1: without "INSERT INTO ... EXEC" but does not get the text of the statement that fired the trigger. Instead, it gets the current running query, which is the trigger
SELECT sqltext.TEXT,
req.session_id,
req.status,
req.command,
req.cpu_time,
req.total_elapsed_time
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
WHERE req.session_id = @@SPID
--Version 2: gets the statement that fired the trigger, but we need to use "INSERT INTO ... EXEC"
DECLARE @inputbuffer TABLE (EventType NVARCHAR(30),Parameters INT,EventInfo NVARCHAR(4000))
INSERT INTO @inputbuffer EXEC('dbcc inputbuffer(' + CAST(@@SPID AS VARCHAR(10)) + ') WITH NO_INFOMSGS')
SELECT * FROM @inputbuffer AS I
END
I know that it is not OK to have SELECT statements in a trigger! I did it just to make the example simpler.
Now, we can insert some data to see what we get:
--test
INSERT INTO dbo.___testTrigger (text)
VALUES (N'This is a test test')
We will get the two SELECTs returning different results, as can be seen in the image below.
Any ideas of what I could use to get the same result as DBCC INPUTBUFFER, without using "INSERT INTO ... EXEC" and without using sys.dm_exec_input_buffer, since it is not available in my SQL Server version?
create table dbo.abcd(id int);
go
create trigger dbo.triggerabc on dbo.abcd for insert, update, delete
as
begin
declare @t table(query nvarchar(4000));
--loopback via OPENROWSET: the remote session finds the calling session from its OLEDB wait and runs DBCC INPUTBUFFER on it
insert into @t (query)
select EventInfo
from OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;',
'
declare @spid nvarchar(10), @sql nvarchar(1000);
select @spid = cast(session_id as nvarchar(10))
from sys.dm_exec_requests
where session_id > 50
and wait_type = ''OLEDB''
and wait_resource like ''SQLNCLI%(SPID='' + cast(@@spid as varchar(10)) + '')'';
select @sql = ''dbcc inputbuffer('' + @spid + '') WITH NO_INFOMSGS'';
exec(@sql) with result sets( (EventType NVARCHAR(30),Parameters SMALLINT,EventInfo NVARCHAR(4000)) );
'
) ;
select * from @t;
end
go
insert into abcd(id) values(123)
go
insert into abcd(id)
exec('select 456')
go
drop table abcd
go
Here's a very simple solution.
But first, since triggers don't fire on select it probably isn't very accurate to refer to "queries" firing the trigger. It would probably be more accurate to call them "statements."
Anyway, add a column to your table such as StatementName varchar(10) and then in each insert statement that will fire the trigger, add a value such as 'Statement1', 'Statement2', etc.
Then the trigger can just check the inserted row and know what statement fired the trigger.
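A minimal sketch of that idea (table, trigger, and log names are invented for illustration):
--Hypothetical audit setup: the extra StatementName column tells the trigger which statement fired it
CREATE TABLE dbo.AuditedTable (id INT, StatementName VARCHAR(10));
CREATE TABLE dbo.AuditLog (StatementName VARCHAR(10), id INT, LoggedAt DATETIME DEFAULT GETDATE());
GO
CREATE TRIGGER dbo.trAuditedTable ON dbo.AuditedTable
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
--the inserted pseudo-table carries the label supplied by the calling statement
INSERT INTO dbo.AuditLog (StatementName, id)
SELECT StatementName, id FROM inserted;
END
GO
--Each statement that fires the trigger tags itself
INSERT INTO dbo.AuditedTable (id, StatementName) VALUES (1, 'Statement1');
INSERT INTO dbo.AuditedTable (id, StatementName) VALUES (2, 'Statement2');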

Stored procedure that creates #table - unable to find it in list of tables

I'm trying to run a stored procedure that creates a local table - #table1
The stored procedure is supposed to look for values and create the table and insert the values into it...
INSERT INTO #table1
I execute the stored procedure and it shows "1 row(s) affected"; however, I am unable to find this table in the list of my tables. Why am I not able to see it or access it?
EDIT: I'm running the stored procedure inside SQL Server against a database. At the end of the stored procedure, the last line is:
Select * from #table1
Thanks.
The #table is a local temp table. It does not exist as a permanent table that you can look for outside the scope of the stored proc. Once the stored proc has run, the temp table is dropped because it is no longer in scope. Temp tables are stored temporarily in the tempdb database, but under a different name, because two people running the stored procedure at the same time would each have a table that can be referenced in the proc as #table, yet they would be two separate tables in tempdb.
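To make the scoping concrete, here is a small sketch (the procedure name is invented) showing that the temp table is gone as soon as the procedure returns:
--Hypothetical proc: #table1 only exists while the proc is running
CREATE PROC dbo.ScopeDemo
AS
BEGIN
CREATE TABLE #table1 (id INT)
INSERT INTO #table1 VALUES (1)
SELECT * FROM #table1   --visible here, inside the proc's scope
END
GO
EXEC dbo.ScopeDemo      --returns the one row
SELECT * FROM #table1   --fails: the temp table was dropped when the proc ended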
Now, if what you are doing is looking to see what is in #table at a point in the stored proc in order to troubleshoot it, then you need to set things up in the proc so that you can see the results at different stages, or when you hit a certain state such as an error.
This could be something like adding a @debug variable to the proc so that when you are in debug mode, you can select the results to the screen when you are running something like:
CREATE PROC test_proc (@Id INT, @debug BIT = 0)
AS
CREATE TABLE #temp(id INT)
INSERT INTO #temp
VALUES (@Id), (1), (2)
IF @debug = 1
BEGIN
SELECT * FROM #temp
END
UPDATE #temp
SET Id = id-1
IF @debug = 1
BEGIN
SELECT * FROM #temp
END
GO
You would then execute the proc without debugging like so (note that since I am not returning anything or inserting into permanent tables, this proc will insert into #temp but you won't see anything; I just didn't want to get complicated here. The steps of the proc will vary depending on what you want to do; the concept I am trying to show is how to use the debug variable):
EXEC test_proc @Id = 5
and with debugging as
EXEC test_proc @Id = 5, @debug = 1
Or it might involve using a table variable instead (because they don't get rolled back on error) and then inserting the data from that table variable into a logging table after the rollback occurs in the CATCH block, so that you can see the values at the time the error occurred.
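A rough sketch of that pattern (the dbo.ErrorLog table is assumed to exist and is only a placeholder), relying on the fact that table variables are not affected by a ROLLBACK:
--Capture intermediate values in a table variable; it keeps its rows even after ROLLBACK
DECLARE @snapshot TABLE (id INT);
BEGIN TRY
BEGIN TRAN;
INSERT INTO @snapshot (id) VALUES (1), (2);
RAISERROR('Forcing a failure for the example', 16, 1);
COMMIT TRAN;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK TRAN;
--The table variable still holds its rows here, so they can be written to a permanent logging table
INSERT INTO dbo.ErrorLog (id) SELECT id FROM @snapshot;
END CATCH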
Without knowing more about why you are looking for #temp and what the data means and is used for, it is hard to say what you need to do.
Did you try refreshing the tables after executing the stored procedure?

Why is SQLExecute returning SQL_NO_DATA when running a procedure with NOCOUNT ON?

I have the following procedure, for which I don't want SQL Server to show affected row counts:
CREATE PROCEDURE UP_STOREDPROC @field VARCHAR(15)
AS
SET NOCOUNT ON;
INSERT INTO MYTABLE (field) VALUES (@field);
GO
And a C project using unixODBC/FreeTDS to execute this proc after a prepared statement (SQLPrepare).
After the preparation, when it executes the statement by calling SQLExecute, it always returns SQL_NO_DATA (100).
But, from ODBC documentation:
If SQLExecute executes a searched update, insert, or delete statement
that does not affect any rows at the data source, the call to
SQLExecute returns SQL_NO_DATA.
So that case shouldn't apply here: the INSERT does affect a row. If the first line of the procedure is changed to SET NOCOUNT OFF, then everything works fine.
What is wrong?

Use the result of a system stored procedure as a queryable table

Note: the highest linked question does not solve the problem for system stored procedures, but it's close. With the help of the commenters, I came to a working answer.
Trying to use statements such as the following for sp_spaceused throws an error.
SELECT * INTO #tblOutput exec sp_spaceused 'Account'
SELECT * FROM #tblOutput
The errors:
Must specify table to select from.
and:
An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name.
When I fully declare a temp table, it works as expected, so it seems to me that the stored procedure does return an actual table.
CREATE TABLE #tblOutput (
name NVARCHAR(128) NOT NULL,
rows CHAR(11) NOT NULL,
reserved VARCHAR(18) NOT NULL,
data VARCHAR(18) NOT NULL,
index_size VARCHAR(18) NOT NULL,
unused VARCHAR(18) NOT NULL)
INSERT INTO #tblOutput exec sp_spaceused 'Response'
SELECT * FROM #tblOutput
Why is it not possible to use a temp table or table variable with the result set of EXECUTE sp_xxx? Or: does a more compact expression exist than having to predefine the full table each time?
(incidentally, and off-topic, Googling for the exact term SELECT * INTO #tmp exec sp_spaceused at the time of writing, returned exactly one result)
TL;DR: use SET FMTONLY OFF with OPENQUERY, details below.
It appears that the link provided by Daniel E. is only part of the solution. For instance, if you try:
-- no need to use sp_addlinkedserver
-- must fully specify sp_, because default db is master
SELECT * FROM OPENQUERY(
[SERVERNAME\SQL2008],
'exec somedb.dbo.sp_spaceused ''Account''')
you will receive the following error:
The OLE DB provider "SQLNCLI10" for linked server "LOCALSERVER\SQL2008" supplied inconsistent metadata for a column. The name was changed at execution time.
I found the solution through this post, and then a blog-post on OPENQUERY, which in turn told me that until SQL2008, you need to use SET FMTONLY OFF. The final solution, which is essentially surprisingly simple (and easier to accomplish since there is no need to specify a loopback linked server), is this:
SELECT * FROM OPENQUERY(
[SERVERNAME\SQL2008],
'SET FMTONLY OFF
EXEC somedb.dbo.sp_spaceused ''Account''')
In addition, if you haven't set DATA-ACCESS, you may get the following error:
Server 'SERVERNAME\SQL2008' is not configured for DATA ACCESS.
This can be remedied by running the following command:
EXEC sp_serveroption 'SERVERNAME\SQL2008', 'DATA ACCESS', TRUE
We cannot SELECT from a stored procedure, which is why SELECT * INTO ... EXEC sp_ will not work.
To get the result set returned from a stored procedure, we can INSERT INTO a table.
The SELECT INTO statement creates a table on the fly and inserts data from the source table/view/function. The only condition is that the source must exist and you must be able to SELECT from it.
SQL Server doesn't allow you to SELECT from sp_, therefore you can only use the INSERT INTO statement when executing a stored procedure. This means that at run time you can add the returned result set into a table and SELECT from that table at a later stage.
The INSERT INTO statement requires an existing destination table. Therefore, whether you use a temp table, a table variable, or a persistent SQL Server table, you will need to create the table first; only then can you use the syntax
INSERT INTO #TempTable
EXECUTE sp_Proc
USE [YOUR DATABASE NAME]
CREATE TABLE [YOUR TABLE NAME]
(Database_Name Varchar(128),
DataBase_Size Varchar(128),
unallocated_Space Varchar(128),
reserved Varchar(128),
data Varchar(128),
index_size Varchar(128),
unused Varchar(128)
);
INSERT INTO dbo.[YOUR TABLE NAME]
(
Database_Name,
DataBase_Size,
unallocated_Space,
reserved,
data,
index_size,
unused
)
EXEC sp_spaceused @oneresultset = 1
--To get it all returned as one result set, add @oneresultset = 1 at the end and voila, good to go for writing to a table. :)
