Silent exception shown in SQL Server Profiler - sql-server

The following SQL script produces an Invalid object name '#temp' exception in SQL Server Profiler, but neither SQL Server Management Studio nor sqlcmd raises the exception:
create table #temp (id int)
insert into #temp (id) values (1)
I only caught it by running SQL Server Profiler with the event "Exception" turned on, which can be set on the "Events Selection" tab when configuring the trace properties.
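The same kind of server-side error can also be captured without Profiler. A minimal Extended Events sketch (the session name is arbitrary, and this assumes SQL Server 2012 or later) that records every error reported on the server, including ones that never reach the client:
-- illustrative Extended Events session; comparable to Profiler's "Exception" event
create event session capture_silent_errors on server
add event sqlserver.error_reported
(action (sqlserver.sql_text, sqlserver.session_id))
add target package0.ring_buffer;
go
alter event session capture_silent_errors on server state = start;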
Since exceptions tend to slow down the server, I tried similar code using a table variable:
declare @temp table (id int)
insert into @temp (id) values (1)
The code above not only avoids the exception, but is also faster when called repeatedly, which confirms the performance penalty of using a temporary table:
if (db_id('performance_test') is null)
create database performance_test
go
use performance_test
go
/* --------------------------- */
/* stress test with temp table */
/* --------------------------- */
declare
@i int,
@sql varchar(max),
@start_time datetime,
@end_time datetime
set @i = 0
set @sql = 'create table #temp (id int)' + char(13) + char(10) + 'insert into #temp (id) values (1)'
set @start_time = getdate()
while (@i < 10000)
begin
exec (@sql)
set @i = @i + 1
end
set @end_time = getdate()
select [Elapsed milliseconds] = datediff(millisecond, @start_time, @end_time) -- outputs 17090 milliseconds
go
/* ------------------------------- */
/* stress test with table variable */
/* ------------------------------- */
declare
@i int,
@sql varchar(max),
@start_time datetime,
@end_time datetime
set @i = 0
set @sql = 'declare @temp table (id int)' + char(13) + char(10) + 'insert into @temp (id) values (1)'
set @start_time = getdate()
while (@i < 10000)
begin
exec (@sql)
set @i = @i + 1
end
set @end_time = getdate()
select [Elapsed milliseconds] = datediff(millisecond, @start_time, @end_time) -- outputs 10010 milliseconds
I often read that a local temporary table and a table variable can be used interchangeably (within a single batch, of course), but I think the behavior demonstrated above proves otherwise.
Although it's somewhat obvious, it's worth noting that the exception is not raised if we put the create table and the insert into statements in separate batches:
create table #temp (id int)
go
insert into #temp (id) values (1)
Is this silent exception a SQL Server bug, or is it something that could be called "a feature by design"? Maybe it's simply better to always use table variables instead of temporary tables, given the silent exception above.
P.S.: I've tested on both SQL Server 2014 and SQL Server 2016 Developer editions, getting the same results.

As pointed out by @JeroenMostert, the "Invalid object name" exception is probably resolved during batch recompilation (which I didn't know about). It makes perfect sense considering the "Deferred Name Resolution" process, which is a well-known subject in the SQL Server community.
The first link below is a question I posted on MSDN that was answered by Mohsin_A_Khan, covering the "Deferred Name Resolution" process in SQL Server. The other two links help explain how it actually works:
Getting "Invalid object name" by creating a temp table and inserting rows right away
How to find what caused errors reported in a SQL Server profiler trace?
Deferred Name Resolution and Compilation
Since the "Invalid object name" is expected due to the recompilation process and shouldn't be avoided by simply replacing the temporary table with a table variable (again, as pointed out by #JeroenMostert), I consider this question answered.

Related

TempDB caching issues with SQL Server 2019

I have code that works in SQL Server 2017 but not in SQL Server 2019. It is related to tempdb caching, specifically creating temporary tables in stored procedures and changing their structure using dynamic SQL. We need to do that for various dynamic reporting purposes. The first time the procedure is called, the structure is cached, and subsequent calls to the procedure fail or return invalid results. How do I prevent caching of such tables? Below is some sample code; why does it work in 2017 but fail in 2019? Help appreciated.
CREATE PROCEDURE [dbo].[tempDBCachingCheck]
@yearList varchar(max)
AS
BEGIN
SET NOCOUNT ON
DECLARE @yearCount int
DECLARE @yearCounter INT
DECLARE @yearValue INT
DECLARE @sql nvarchar(max)
-- With table variable
DECLARE @tempYearList TABLE (id INT IDENTITY(1,1), rpt_yr int)
INSERT INTO @tempYearList (rpt_yr)
SELECT value FROM STRING_SPLIT(@yearList, ',');
SELECT * FROM @tempYearList
--------------------------------------------------------------------
-- With temporary table, since we will be altering this with dynamic SQL
CREATE TABLE #returnTable (id INT IDENTITY(1,1))
-- Tried adding a named constraint to prevent caching, but it does not work
ALTER TABLE #returnTable
ADD CONSTRAINT UC_ID UNIQUE (id);
SELECT @yearCount = COUNT(*) FROM @tempYearList
-- Add the years as columns to the return table to demonstrate the problem
SET @sql = N'ALTER TABLE #returnTable ADD '
SET @yearCounter = 1
WHILE @yearCounter <= @yearCount
BEGIN
SELECT @yearValue = rpt_yr FROM @tempYearList WHERE id = @yearCounter
IF @yearCounter > 1
SET @sql = @sql + N', '
SET @sql = @sql + N' [' + convert(varchar(20), @yearValue) + N'] float'
SET @yearCounter = @yearCounter + 1
END
EXECUTE sp_executesql @sql
SELECT * FROM #returnTable
-- No need to drop the temporary table, but doing so just in case
DROP TABLE #returnTable
END
GO
-- run these statements and you will see that the second call returns the cached #returnTable
EXEC tempDBCachingCheck '2019,2020'
EXEC tempDBCachingCheck '2017,2018,2019,2020'
GO
-- Clear temp table cache and call in reverse order, then will hit an error
-- 'A severe error occurred on the current command. The results, if any, should be discarded.'
USE tempDB
GO
DBCC FREEPROCCACHE
GO
EXEC tempDBCachingCheck '2017,2018,2019,2020'
EXEC tempDBCachingCheck '2019,2020'
GO
It seems this has been fixed in one of the cumulative updates. The description seems to match:
KB4538853:
When you repeatedly run a stored procedure that uses temporary table with indexes on SQL Server 2019, the client may receive an unexpected error with message "A severe error occurred on the current command" and an access violation exception is recorded on the SQL Server. If the same workload is executed on any previous major version of SQL Server, this issue does not occur.
Dan Guzman's recommendation to install the newest CU is the way to go.
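A quick way to confirm the patch level before and after applying the CU (standard SERVERPROPERTY values):
-- check the build and CU level of the instance
SELECT
SERVERPROPERTY('ProductVersion') AS ProductVersion, -- build number
SERVERPROPERTY('ProductLevel') AS ProductLevel, -- e.g. RTM, SP1
SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel -- e.g. CU4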
Using EXEC tempDBCachingCheck '2017,2018,2019,2020' WITH RECOMPILE could help as well.
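If recompiling every call is acceptable, the procedure-level form of the same option can be attached to the procedure itself instead of to each EXEC; this is only a sketch of that alternative, with the body unchanged from the procedure above:
-- sketch: force recompilation on every execution without touching the callers
CREATE OR ALTER PROCEDURE [dbo].[tempDBCachingCheck]
@yearList varchar(max)
WITH RECOMPILE
AS
BEGIN
SET NOCOUNT ON
-- ... body unchanged from the procedure above ...
END
GO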

Why is #tempTable instead of @tempTable causing a timeout error?

I have a stored procedure called myStoredProcedure in SQL Server 2008 that includes a code block like this:
...
declare @tempTable table
(
Id int,
Name varchar(100),
Category varchar(50),
Volume int
)
while @startTime < @endTime
begin
insert into @tempTable
EXEC R52_Calculations @param1, @param2, @param3
set @startTime = DATEADD(YEAR, 1, @startTime)
end
select * from @tempTable
In this way, the stored procedure works very well. I can connect a table in SSRS 2008 to this stored procedure without any warning or error. However, when I change the @tempTable table variable into a #tempTable temporary table, as shown below, I get a timeout error when I try to connect a table in SSRS 2008 to the updated stored procedure, even though the stored procedure itself still works very well in SQL Server.
...
create table #tempTable
(
Id int,
Name varchar(100),
Category varchar(50),
Volume int
)
while @startTime < @endTime
begin
insert into #tempTable
EXEC R52_Calculations @param1, @param2, @param3
set @startTime = DATEADD(YEAR, 1, @startTime)
end
select * from #tempTable
This is the error:
Timeout expired. The timeout period elapsed prior to completion of
the operation or the server is not responding. Warning: Null value is
eliminated by an aggregate or other SET operation.
Some more points:
when I remove the while loop and do the process only once, no error occurs when I use #tempTable.
if I use @tempTable (the table variable), no error occurs at all.
Note that both queries work fine in SQL Server; the errors only occur when I try to connect a table in SSRS 2008 to the stored procedure.
I could not find the reason why #tempTable causes an error. Any clue or help would be appreciated. Thanks.

Call Stored Procedure Repeatedly for version comparison

SSMS: 2008 R2
We are having our software system updated, which may contain an unknown number of undocumented changes to the way data is entered and stored in our database. We have asked for documentation, but only have schema compares for "physical" changes to the database, not the way the data is treated. They may change in the future, but for now we have to assume not.
In order to check that our stored procedures work as expected after the update, we would like to run a sample of procedures using a sample of parameters before and after the update to compare the actual data results. The stored procedures here all take a single Id as the parameter (they are used to make SSRS reports within the software system)
I have set some things up, but I am having problems with my approach and would welcome any suggestions about either a better way to do things, or how to fix my approach. The problem is that an error is returned whenever a called stored procedure uses a temporary table. Here is what I have done:
Made a script to get a random sample of Ids for parameters (only one table is used at the moment - that's fine).
ALTER PROC [dbo].[UpdateValidation_GET_RandomIdSample](@TestSizePercent DECIMAL(6,3))
AS
-- This table is already created and will persist on both sides of the update
--CREATE TABLE Live_Companion.dbo.UpdateValidationIds
--( Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED
-- ,MyTableId INT NULL)
IF @TestSizePercent > 100 RAISERROR('Do you even percent, bro?',16,1)
DECLARE @SQL VARCHAR(255)
TRUNCATE TABLE UpdateValidationIds
SET @SQL =
'INSERT dbo.UpdateValidationIds(Id)
SELECT TOP ' + CONVERT(VARCHAR(10),@TestSizePercent) + ' PERCENT ID FROM Live.dbo.MyTable ORDER BY NEWID()'
EXEC (@SQL)
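For example, an illustrative call that repopulates UpdateValidationIds with a fresh 5 percent sample:
-- illustrative usage; the percentage is arbitrary
EXEC dbo.UpdateValidation_GET_RandomIdSample @TestSizePercent = 5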
Made a second script to run a stored procedure for each Id in the table:
ALTER PROC [dbo].[UpdateValidation_GET_ProcedureResultsManyTimes](@Procedure_Name VARCHAR(255))
AS
--DECLARE @Procedure_Name VARCHAR(255) = 'Live_Companion.dbo.MyProc'
DECLARE @ID INT
DECLARE @GET_ID CURSOR
DECLARE @SQL VARCHAR(MAX) = ''
DECLARE @MyTableId INT
DECLARE @FirstRun BIT = 1
SET @GET_ID = CURSOR FOR
SELECT Id FROM Live_Companion.dbo.UpdateValidationIds
WHERE MyTableId IS NOT NULL
OPEN @GET_ID
FETCH NEXT FROM @GET_ID INTO @ID
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @MyTableId = MyTableId FROM Live_Companion.dbo.UpdateValidationIds
WHERE Id = @ID
IF @FirstRun = 1
BEGIN
SET @SQL = 'SELECT * INTO #ProcedureOutput FROM OPENROWSET(''SQLNCLI'',''Server=SQL1;Trusted_Connection=yes;'',''EXEC ' + @Procedure_Name + ' ' + CONVERT(VARCHAR(50),@MyTableId) + ''');'
SET @FirstRun = 0
END
ELSE
BEGIN
SET @SQL = @SQL + '
INSERT #ProcedureOutput SELECT * FROM OPENROWSET(''SQLNCLI'',''Server=SQL1;Trusted_Connection=yes;'',''EXEC ' + @Procedure_Name + ' ' + CONVERT(VARCHAR(50),@MyTableId) + ''');'
END
FETCH NEXT FROM @GET_ID INTO @ID
END
SET @SQL = @SQL + '
SELECT * FROM #ProcedureOutput
DROP TABLE #ProcedureOutput'
EXEC (@SQL)
CLOSE @GET_ID
DEALLOCATE @GET_ID
So now I should be able to execute the second procedure for various stored procedures and output the results to a file over a range of Ids, then repeat using the saved (initially random) Ids after the update and compare the results.
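For instance, using the placeholder procedure name from the commented-out line above:
-- illustrative usage; the procedure name is the placeholder from the script
EXEC dbo.UpdateValidation_GET_ProcedureResultsManyTimes @Procedure_Name = 'Live_Companion.dbo.MyProc'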
The trouble is, it fails when any of the called procedures uses a temporary table:
EDIT:
Error Message returned:
Cannot process the object "EXEC Live_Companion.dbo.MyProc 12345". The
OLE DB provider "SQLNCLI10" for linked server "(null)" indicates that
either the object has no columns or the current user does not have
permissions on that object.
Any suggestions or ideas for how to proceed?

SSIS has validation error on SQL Server stored procedure

I'm trying to call a custom stored procedure in SQL Server 2008 R2 from SSIS in Visual Studio 2012. I wrote and tested the stored procedure in SSMS 2012 and it works as expected.
However, when I try to place it in an OLE DB Command component I receive a Divide by 0 error when I refresh the component or when the SSIS package validates.
Here's the code for the stored procedure:
CREATE PROCEDURE [ldg].[2015HRUpdate(TEST)]
@Employee varchar(20), -- maps to EM.Employee, primary key
@Title varchar(50), -- maps to EM.Title
@PayRate varchar(50) = '0', -- maps to EM.JobCostRate, convert to decimal
-- @Percentage Decimal(19,4) = 0, -- workaround
@OldPayRate Decimal(19,4) = 0, -- used to calculate Employees_SalaryHistory.Custnprcent, convert to decimal
@LaborCategory varchar(50) = '0', -- maps to EM.BillingCategory, convert to smallint
@EmployeeDesignation varchar(50), -- maps to EmployeeCustomTabFields.CustEmployeeDesg
@FSLAStatus varchar(50), -- maps to EmployeeCustomTabFields.CustFSLAStatus
@Supervisor varchar(20), -- maps to EM.Supervisor
@SupervisorName varchar(255), -- maps to Employees_SalaryHistory.custSuper
@ModUser nvarchar(20),
@ModDate datetime
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Convert data types to match database data types
declare @JobCostRate decimal(19,4);
declare @OldJobCostRate decimal(19,4);
declare @BillingCategory smallint;
declare @Percent decimal(19,4);
if @PayRate is null or @PayRate = ''
set @PayRate = '0';
set @JobCostRate = CONVERT(decimal(19,4), @PayRate);
set @OldJobCostRate = @OldPayRate;
/* this works in T-SQL but when SSIS tries to validate I get a div/0 error */
if @OldJobCostRate != 0
begin
set @Percent = ((@JobCostRate - @OldJobCostRate)/@OldJobCostRate) * 100; -- errors out right here with a divide by 0 error
--set @Percent = 0;
end
else
begin
set @Percent = 0;
end
set @BillingCategory = CONVERT(smallint, @LaborCategory);
-- SQL statements for procedure here
-- Update EM table
-- Update EmployeeCustomTabFields table
-- Insert into Salary history table
END
GO
I have placed a comment on the line that produces the error. If I comment that line out and uncomment the one below it, SSIS validates the procedure without issue.
I finally worked around the issue by creating a derived field in the ETL, but I would like to know why SSIS/OLE-DB is causing this issue, for the next time it pops up.
Thanks,
Roy
If you alter your procedure to look like
SET NOCOUNT ON;
-- This is a bloody hack to get SSIS to be happy about metadata.
IF 1=2
BEGIN
SELECT 1 AS StupidHackery;
END
I believe you'll get around this issue. The root cause is that SSIS wants to validate the metadata from the proc and doesn't actually evaluate the logic in there. I don't have any definitive resources on the matter, pity, but for me at least, I could recreate your issue and by using this stupid hack, get around it. I've had to use the same thing when dealing with temporary tables.
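A sketch of the same hack applied to a procedure that fills a temporary table (all object names below are made up): the dead IF 1 = 2 branch gives metadata discovery a column shape to read, while the real work still runs against the temp table at execution time.
-- illustrative only: metadata hack combined with a temp-table-producing proc
CREATE PROCEDURE dbo.DemoMetadataHack
AS
BEGIN
SET NOCOUNT ON;
IF 1 = 2
BEGIN
-- never executes; exists only so metadata discovery sees the result shape
SELECT CAST(NULL AS int) AS Id, CAST(NULL AS varchar(50)) AS Name;
END
CREATE TABLE #work (Id int, Name varchar(50));
INSERT INTO #work (Id, Name) VALUES (1, 'example');
SELECT Id, Name FROM #work;
END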

Generic Insert stored proc : Runtime error

The following code generates the primary key for the new record and inserts the record into a table whose name and values are given as parameters to the stored procedure. I am getting a runtime error. I am using Visual Studio 2005 with SQL Server 2005 Express Edition.
ALTER PROCEDURE spGenericInsert
(
@insValueStr nvarchar(300),
@tblName nvarchar(10)
)
AS
DECLARE @sql nvarchar(400)
DECLARE @params nvarchar(200)
DECLARE @insPrimaryKey nvarchar(10)
DECLARE @rowCountVal integer
DECLARE @prefix nvarchar(5)
--following gets the rowcount of the table--
SELECT @rowCountVal = ISNULL(SUM(spart.rows), 0)
FROM sys.partitions spart
WHERE spart.object_id = object_id(@tblName) AND spart.index_id < 2
SET @rowCountVal = @rowCountVal + 1
--Following Creates the Primary Key--
IF @tblName = 'DEFECT_LOG'
SET @prefix='DEF_'
ELSE IF @tblName='INV_Allocation_DB'
SET @prefix='INV_'
ELSE IF @tblName='REQ_Master_DB'
SET @prefix='REQ_'
ELSE IF @tblName='SW_Master_DB'
SET @prefix='SWI_'
ELSE IF @tblName='HW_Master_DB'
SET @prefix='HWI_'
SET @insPrimaryKey = @prefix + RIGHT(replicate('0',5) + convert(varchar(5),@rowCountVal),5) -- returns something like 'DEF_00005'
-- Following is for inserting into the table --
SELECT @sql = N' INSERT INTO @tableName VALUES ' +
N' ( @PrimaryKey , @ValueStr )'
SELECT @params = N'@tableName nvarchar(10), ' +
N'@PrimaryKey nvarchar(10), ' +
N'@ValueStr nvarchar(300)'
EXEC sp_executesql @sql, @params, @tableName=@tblName, @PrimaryKey=@insPrimaryKey, @ValueStr=@insValueStr
Output Message:
Running [dbo].[spGenericInsert] ( @insValueStr = 2,"Hi",1/1/1987, @tblName = DEFECT_LOG ).
Must declare the table variable "@tableName".
No rows affected.
(0 row(s) returned)
@RETURN_VALUE = 0
Finished running [dbo].[spGenericInsert].
You are going to have to concatenate the table name directly into the string, as this cannot be parameterized:
SELECT @sql = N' INSERT INTO [' + @tblName + '] VALUES ' +
N' ( @PrimaryKey , @ValueStr )'
SELECT @params = N'@PrimaryKey nvarchar(10), ' +
N'@ValueStr nvarchar(300)'
To prevent injection attacks, you should white-list this table name. This also isn't robust if the table has other non-nullable columns, etc.
note: Personally, though, I don't think this is a good use of TSQL; it might be more appropriate to construct the command in the client (C# or whatever), and execute it as a parameterized command. There are use-cases for dynamic-SQL, but I'm not sure this is a good example of one.
Better yet, use your preferred ORM tool (LINQ-to-SQL, NHibernate, LLBLGen, Entity Framework, etc) to do all this for you, and concentrate on your actual problem domain.
White-listing essentially means making sure that the table being passed in is a valid table that you want users to be able to insert into. Let's say, for argument's sake, that the table name is user-provided; the user could then start inserting records into system tables.
You can do a white-list check by bouncing the table name off the sysobjects table:
select * from sysobjects where name=@tblName and xType='U'
However, as Marc suggested, this is not a good use of T-SQL, and you're better off handling this in the app tier as a parameterized query.
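A minimal sketch of combining that check with QUOTENAME inside the original procedure (illustrative, not a hardened implementation; it reuses the @tblName and @sql variables declared above):
-- white-list guard plus QUOTENAME before building the dynamic INSERT
IF NOT EXISTS (SELECT 1 FROM sysobjects WHERE name = @tblName AND xType = 'U')
BEGIN
RAISERROR('Table %s is not on the allowed list.', 16, 1, @tblName)
RETURN
END
SELECT @sql = N' INSERT INTO ' + QUOTENAME(@tblName) + N' VALUES ' +
N' ( @PrimaryKey , @ValueStr )'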
Agree with Marc: overall this is an extremely poor idea. Generic inserts, updates, or deletes eventually cause problems for the database.
Another point is that this process will have problems when two users run it simultaneously against the same table, as they will try to insert the same primary key.
