I have code that works in SQL Server 2017 but not in SQL Server 2019, and it appears to be related to tempdb caching. We create temporary tables in stored procedures and alter their structure using dynamic SQL, which we need for various dynamic reporting scenarios. The first time the procedure is called, the table structure is cached, and subsequent calls either fail or return invalid results. How do I prevent caching of such tables? Sample code is below; why does it work in 2017? Help appreciated.
CREATE PROCEDURE [dbo].[tempDBCachingCheck]
@yearList varchar(max)
AS
BEGIN
SET NOCOUNT ON
DECLARE @yearCount int
DECLARE @yearCounter INT
DECLARE @yearValue INT
DECLARE @sql nvarchar(max)
-- With table variable
DECLARE @tempYearList TABLE (id INT IDENTITY(1,1), rpt_yr int)
INSERT INTO @tempYearList (rpt_yr)
SELECT value FROM STRING_SPLIT(@yearList, ',');
SELECT * FROM @tempYearList
--------------------------------------------------------------------
-- With temporary table, since we will be altering this with dynamic SQL
CREATE TABLE #returnTable (id INT IDENTITY(1,1))
-- Tried adding a named constraint to prevent caching, but it does not work
ALTER TABLE #returnTable
ADD CONSTRAINT UC_ID UNIQUE (id);
SELECT @yearCount = COUNT(*) FROM @tempYearList
-- Add the years as columns to the return table to demonstrate the problem
SET @sql = N'ALTER TABLE #returnTable ADD '
SET @yearCounter = 1
WHILE @yearCounter <= @yearCount
BEGIN
SELECT @yearValue = rpt_yr FROM @tempYearList WHERE id = @yearCounter
IF @yearCounter > 1
SET @sql = @sql + N', '
SET @sql = @sql + N' [' + CONVERT(varchar(20), @yearValue) + N'] float'
SET @yearCounter = @yearCounter + 1
END
EXECUTE sp_executesql @sql
SELECT * FROM #returnTable
-- No need to drop the temporary table, but doing so just in case
DROP TABLE #returnTable
END
GO
-- run these statements and you will see that the second call returns the cached #returnTable
EXEC tempDBCachingCheck '2019,2020'
EXEC tempDBCachingCheck '2017,2018,2019,2020'
GO
-- Clear the temp table cache and call in reverse order; we will then hit an error
-- 'A severe error occurred on the current command. The results, if any, should be discarded.'
USE tempDB
GO
DBCC FREEPROCCACHE
GO
EXEC tempDBCachingCheck '2017,2018,2019,2020'
EXEC tempDBCachingCheck '2019,2020'
GO
It seems this has been fixed in one of the cumulative updates. The description seems to match:
KB4538853:
When you repeatedly run a stored procedure that uses a temporary table with indexes on SQL Server 2019, the client may receive an unexpected error with the message "A severe error occurred on the current command" and an access violation exception is recorded on the SQL Server. If the same workload is executed on any previous major version of SQL Server, this issue does not occur.
Dan Guzman's recommendation to install the newest CU is the way to go.
Using EXEC tempDBCachingCheck '2017,2018,2019,2020' WITH RECOMPILE could help as well.
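In case it helps, here is a minimal hedged sketch (hypothetical procedure name, based on the documented behavior that temp table caching is tied to the cached plan): creating the procedure WITH RECOMPILE keeps its plan out of the cache, so the temp table is never cached either, at the cost of a compile on every call.
CREATE PROCEDURE dbo.noCacheDemo
WITH RECOMPILE
AS
BEGIN
CREATE TABLE #t (id INT)
EXEC (N'ALTER TABLE #t ADD extra_col float')  -- DDL via dynamic SQL, as in the question
SELECT * FROM #t
DROP TABLE #t
END
GO
EXEC dbo.noCacheDemo  -- compiled fresh on every call, so #t is never reused from the cache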
Related
I have a stored procedure with dynamic SQL embedded as below:
delete from #temp_table
begin tran
set @sql = 'select * into #temp_table from sometable'
exec (@sql)
commit tran
begin
set @sql = 'alter table #temp_table add column1 float'
exec(@sql)
end
update #temp_table
set column1 = column1*100
select *
into Primary_Table
from #temp_table
However, I noticed that all the statements work except the alter. When I run the procedure, I get the error message: "Invalid column name column1"
What am I doing wrong here?
EDIT: I realized I didn't mention that the first insert is dynamic SQL as well. I've updated it.
Alternate approach tried but throws same error:
delete from #temp_table
begin tran
set @sql = 'select * into #temp_table from sometable'
exec (@sql)
commit tran
alter table #temp_table add column1 float
update #temp_table set column1 = column1*100
Local temporary tables exhibit something like dynamic scope. When you create a local temporary table inside a call to exec, it goes out of scope (and out of existence) on the return from exec.
EXEC (N'create table #x (c int)')
GO
SELECT * FROM #x
Msg 208, Level 16, State 0, Line 4
Invalid object name '#x'.
The select is parsed after the dynamic SQL that creates #x has run, but #x is no longer there because it was dropped on exit from exec.
Update
Depending on the situation there are different ways to work around the issue.
Put everything into the same string:
DECLARE @Sql NVARCHAR(MAX) = N'SELECT 1 AS source INTO #table_name;
ALTER TABLE #table_name ADD target FLOAT;
UPDATE #table_name SET target = 100 * source;';
EXEC (@Sql);
Create the table ahead of the dynamic sql that populates it.
CREATE TABLE #table_name (source INT);
EXEC (N'insert into #table_name (source) select 1;');
ALTER TABLE #table_name ADD target FLOAT;
UPDATE #table_name SET target = 100 * source;
In this option, the alter table statement can be eliminated by adding the additional column to the create table statement. Note also that the alter table and update statements could be in separate invocations of dynamic SQL, if that were beneficial in your context (as sketched below).
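For illustration, a minimal sketch of that last remark, reusing the assumed names above: because #table_name lives in the outer scope, each dynamic batch is compiled against the table's current schema.
CREATE TABLE #table_name (source INT);
EXEC (N'insert into #table_name (source) select 1;');
-- each EXEC is a separate batch, so the update below is compiled after the
-- alter has already run and therefore sees the new column
EXEC (N'alter table #table_name add target float;');
EXEC (N'update #table_name set target = 100 * source;');
SELECT * FROM #table_name;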
1) It should be ALTER TABLE #temp... not ALTER #temp.
2) Even if #1 weren't an issue, you're adding column1 as a NULLable column with no default value and, in the next statement, setting its value to itself * 100...
NULL * 100 = NULL
3) Why are you using dynamic SQL to alter the #temp table? It can just as easily be done with a regular ALTER TABLE statement... or, better yet, the column can be included in the original table definition (see the sketch below).
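A minimal sketch of that last suggestion (sometable and its columns are assumed for illustration): define column1 in the original table definition and populate it from an existing column, since NULL * 100 would stay NULL.
CREATE TABLE #temp_table (id INT, amount FLOAT, column1 FLOAT)  -- column1 defined up front
EXEC (N'insert into #temp_table (id, amount) select id, amount from sometable')
UPDATE #temp_table SET column1 = amount * 100  -- no ALTER, and no NULL result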
This is because the #temp_table reference in the outer batch is a different temp table than the one created in dynamic SQL. Consider:
use tempdb
drop table if exists sometable
drop table if exists #temp_table
go
create table sometable(id int, a int)
create table #temp_table(id int, b int)
exec( 'select * into #temp_table from sometable; select * from #temp_table;' )
select * from #temp_table
Outputs
id a
----------- -----------
(0 rows affected)
id b
----------- -----------
(0 rows affected)
A temp table created in a nested batch is scoped to the nested batch and automatically dropped when it ends. A "nested batch" is either a dynamic SQL query or a stored procedure. This behavior is explained in the CREATE TABLE documentation, which only mentions stored procedures, but dynamic SQL behaves the same way.
If you create the temp table in the top-level batch, you can access it in dynamic SQL; you just can't create a new temp table in dynamic SQL and see it in the outer batch or in subsequent same-level dynamic SQL. So use INSERT INTO instead of SELECT INTO.
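A minimal sketch of that fix, reusing the sometable created in the example above:
create table #temp_table2(id int, a int)   -- created in the outer batch
exec( 'insert into #temp_table2 (id, a) select id, a from sometable;' )  -- fills the outer table
select * from #temp_table2                 -- the inserted rows are visible here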
The following SQL script produces an Invalid object name '#temp' exception in SQL Server Profiler, but neither SQL Server Management Studio nor sqlcmd raises the exception:
create table #temp (id int)
insert into #temp (id) values (1)
I only caught it by running SQL Server Profiler with the event "Exception" turned on, which can be set on the "Events Selection" tab when configuring the trace properties.
Since exceptions tend to slow down the server, I tried a similar code using a table variable:
declare @temp table (id int)
insert into @temp (id) values (1)
The code above not only avoids the exception, but is also faster when called repeatedly, which demonstrates the performance penalty of using a temporary table:
if (db_id('performance_test') is null)
create database performance_test
go
use performance_test
go
/* --------------------------- */
/* stress test with temp table */
/* --------------------------- */
declare
@i int,
@sql varchar(max),
@start_time datetime,
@end_time datetime
set @i = 0
set @sql = 'create table #temp (id int)' + char(13) + char(10) + 'insert into #temp (id) values (1)'
set @start_time = getdate()
while (@i < 10000)
begin
exec (@sql)
set @i = @i + 1
end
set @end_time = getdate()
select [Elapsed milliseconds] = datediff(millisecond, @start_time, @end_time) -- outputs 17090 milliseconds
go
/* ------------------------------- */
/* stress test with table variable */
/* ------------------------------- */
declare
@i int,
@sql varchar(max),
@start_time datetime,
@end_time datetime
set @i = 0
set @sql = 'declare @temp table (id int)' + char(13) + char(10) + 'insert into @temp (id) values (1)'
set @start_time = getdate()
while (@i < 10000)
begin
exec (@sql)
set @i = @i + 1
end
set @end_time = getdate()
select [Elapsed milliseconds] = datediff(millisecond, @start_time, @end_time) -- outputs 10010 milliseconds
I often read that a local temporary table and a table variable can be used interchangeably (within a single batch, of course); however, I think the behavior demonstrated above proves otherwise.
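One concrete difference, as a minimal sketch: a temporary table honors transaction rollback, while table variable data does not.
begin tran
create table #t (id int)
insert into #t values (1)
declare @t table (id int)
insert into @t values (1)
rollback
select count(*) from @t  -- returns 1: table variable data ignores the rollback
-- selecting from #t would now fail: its creation was rolled back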
Although it's somewhat obvious, it's worth noting that the exception is not raised if we separate the create table from the insert into in different batches:
create table #temp (id int)
go
insert into #temp (id) values (1)
Is this silent exception a SQL Server bug, or is it something that could be called "a feature by design"? Maybe it's simply better to always use table variables instead of temporary tables, given the silent exception above.
P.S.: I've tested on both SQL Server 2014 and SQL Server 2016 Developer editions, getting the same results.
As pointed out by @JeroenMostert, the "Invalid object name" exception is probably raised during batch recompilation (which I didn't know about). It makes perfect sense considering the "Deferred Name Resolution" process, which is a well-known subject in the SQL Server community.
The first link below is a question I posted on MSDN that was answered by Mohsin_A_Khan, discussing the "Deferred Name Resolution" process in SQL Server. The other two links contribute to understanding how it actually works:
Getting "Invalid object name" by creating a temp table and inserting rows right away
How to find what caused errors reported in a SQL Server profiler trace?
Deferred Name Resolution and Compilation
Since the "Invalid object name" is expected due to the recompilation process and shouldn't be avoided by simply replacing the temporary table with a table variable (again, as pointed out by #JeroenMostert), I consider this question answered.
SSMS: 2008 R2
Our software system is being updated, and the update may contain an unknown number of undocumented changes to the way data is entered and stored in our database. We have asked for documentation, but only have schema compares for "physical" changes to the database, not for the way the data is treated. That may change in the future, but for now we have to assume it won't.
In order to check that our stored procedures work as expected after the update, we would like to run a sample of procedures using a sample of parameters before and after the update to compare the actual data results. The stored procedures here all take a single Id as the parameter (they are used to make SSRS reports within the software system)
I have set some things up, but I am having problems with my approach and would welcome any suggestions about either a better way to do things, or how to fix my approach. The problem is that an error is returned whenever a called stored procedure uses a temporary table. Here is what I have done:
Made a script to get a random sample of Ids for parameters (only one table is used at the moment - that's fine).
ALTER PROC [dbo].[UpdateValidation_GET_RandomIdSample](@TestSizePercent DECIMAL(6,3))
AS
-- This table is already created and will persist on both sides of the update
--CREATE TABLE Live_Companion.dbo.UpdateValidationIds
--( Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED
-- ,MyTableId INT NULL)
IF @TestSizePercent > 100 RAISERROR('Do you even percent, bro?',16,1)
DECLARE @SQL VARCHAR(255)
TRUNCATE TABLE UpdateValidationIds
SET @SQL =
'INSERT dbo.UpdateValidationIds(MyTableId)
SELECT TOP ' + CONVERT(VARCHAR(10),@TestSizePercent) + ' PERCENT ID FROM Live.dbo.MyTable ORDER BY NEWID()'
EXEC (@SQL)
Made a second script to run a stored procedure for each Id in the table:
ALTER PROC [dbo].[UpdateValidation_GET_ProcedureResultsManyTimes](@Procedure_Name VARCHAR(255))
AS
--DECLARE @Procedure_Name VARCHAR(255) = 'Live_Companion.dbo.MyProc'
DECLARE @ID INT
DECLARE @GET_ID CURSOR
DECLARE @SQL VARCHAR(MAX) = ''
DECLARE @MyTableId INT
DECLARE @FirstRun BIT = 1
SET @GET_ID = CURSOR FOR
SELECT Id FROM Live_Companion.dbo.UpdateValidationIds
WHERE MyTableId IS NOT NULL
OPEN @GET_ID
FETCH NEXT FROM @GET_ID INTO @ID
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @MyTableId = MyTableId FROM Live_Companion.dbo.UpdateValidationIds
WHERE Id = @ID
IF @FirstRun = 1
BEGIN
SET @SQL = 'SELECT * INTO #ProcedureOutput FROM OPENROWSET(''SQLNCLI'',''Server=SQL1;Trusted_Connection=yes;'',''EXEC ' + @Procedure_Name + ' ' + CONVERT(VARCHAR(50),@MyTableId) + ''');'
SET @FirstRun = 0
END
ELSE
BEGIN
SET @SQL = @SQL + '
INSERT #ProcedureOutput SELECT * FROM OPENROWSET(''SQLNCLI'',''Server=SQL1;Trusted_Connection=yes;'',''EXEC ' + @Procedure_Name + ' ' + CONVERT(VARCHAR(50),@MyTableId) + ''');'
END
FETCH NEXT FROM @GET_ID INTO @ID
END
SET @SQL = @SQL + '
SELECT * FROM #ProcedureOutput
DROP TABLE #ProcedureOutput'
EXEC (@SQL)
CLOSE @GET_ID
DEALLOCATE @GET_ID
So now I should be able to execute the second procedure for various stored procedures and output the results to file over a range of Ids, then repeat using the saved (initially random) Ids again after the update and compare the results.
The trouble is, it fails whenever any of the called procedures uses a temporary table:
EDIT:
Error Message returned:
Cannot process the object "EXEC Live_Companion.dbo.MyProc 12345". The
OLE DB provider "SQLNCLI10" for linked server "(null)" indicates that
either the object has no columns or the current user does not have
permissions on that object.
Any suggestions or ideas for how to proceed?
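One hedged possibility (an assumption, not verified against this setup): the error can come from the provider's metadata-only pre-pass, which fails when the called procedure builds temp tables. With the SQL Server 2008-era SQLNCLI10 provider, prefixing the pass-through query with SET FMTONLY OFF makes the provider actually run the procedure to discover its columns:
-- sketch: note the procedure then genuinely executes during column discovery
SET @SQL = 'SELECT * INTO #ProcedureOutput FROM OPENROWSET(''SQLNCLI'','
+ '''Server=SQL1;Trusted_Connection=yes;'','
+ '''SET FMTONLY OFF; EXEC ' + @Procedure_Name + ' '
+ CONVERT(VARCHAR(50),@MyTableId) + ''');'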
I have to create a stored procedure to which I will pass tableName, columnName, and id as parameters. The task is to select records from the passed table where columnName has the passed id. If a record is found, update it with some fixed data. Also implement a transaction so that we can roll back in case of any error.
There are hundreds of tables in the database and each table has a different schema; that is why I have to pass columnName.
I don't know what the best approach for this is. I am trying to select records into a temp table so that I can manipulate them as per the requirement, but it's not working.
I am using this code:
ALTER PROCEDURE [dbo].[GetRecordsFromTable]
@tblName nvarchar(128),
@keyCol varchar(100),
@key int = 0
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRY
--DROP TABLE #TempTable;
DECLARE @sqlQuery nvarchar(4000);
SET @sqlQuery = 'SELECT * FROM ' + @tblName + ' WHERE ' + @keyCol + ' = 2';
PRINT @sqlQuery;
INSERT INTO #TempTable
EXEC sp_executesql @sqlQuery,
N'@keyCol varchar(100), @key int', @keyCol, @key;
SELECT * FROM #TempTable;
END TRY
BEGIN CATCH
EXECUTE [dbo].[uspPrintError];
END CATCH;
END
I get an error
Invalid object name '#TempTable'
Also, I'm not sure whether this is the best approach to get the data and then update it.
If you absolutely must make that work, then I think you'll have to use a global temp table. You'll need to check whether it exists before running your dynamic SQL, and clean up afterwards. With a fixed table name you'll run into problems with concurrent connections. Inside the dynamic SQL you'd add select * into ##temptable from .... Actually, I'm not even sure why you want the temp table in the first place. Can't the dynamic SQL just return the results?
On the surface it seems like a solid idea to have one generic procedure for returning data with a couple of parameters to drive it but, without a lot of explanation, that's just not the way databases are designed to work.
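A minimal sketch of the global temp table variant, reusing the parameter names from the question (## tables are visible across scopes, so the outer batch can read what the dynamic SQL created):
IF OBJECT_ID('tempdb..##TempTable') IS NOT NULL
DROP TABLE ##TempTable;
SET @sqlQuery = N'SELECT * INTO ##TempTable FROM ' + QUOTENAME(@tblName)
+ N' WHERE ' + QUOTENAME(@keyCol) + N' = @key';
EXEC sp_executesql @sqlQuery, N'@key int', @key = @key;
SELECT * FROM ##TempTable;  -- survives the nested scope, unlike #TempTable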
You should create the temp table first:
IF OBJECT_ID('tempdb..##TempTable') IS NOT NULL
DROP TABLE ##TempTable
CREATE TABLE ##TempTable (/* column definitions matching the query's result set */)
The following code generates the primary key for the new record and inserts the record into a table, whose name and the values to be inserted are given as parameters to the stored procedure. I am getting a runtime error. I am using Visual Studio 2005 with SQL Server 2005 Express Edition.
ALTER PROCEDURE spGenericInsert
(
@insValueStr nvarchar(300),
@tblName nvarchar(10)
)
AS
DECLARE @sql nvarchar(400)
DECLARE @params nvarchar(200)
DECLARE @insPrimaryKey nvarchar(10)
DECLARE @rowCountVal integer
DECLARE @prefix nvarchar(5)
--following gets the rowcount of the table--
SELECT @rowCountVal = ISNULL(SUM(spart.rows), 0)
FROM sys.partitions spart
WHERE spart.object_id = object_id(@tblName) AND spart.index_id < 2
SET @rowCountVal = @rowCountVal + 1
--Following Creates the Primary Key--
IF @tblName = 'DEFECT_LOG'
SET @prefix='DEF_'
ELSE IF @tblName='INV_Allocation_DB'
SET @prefix='INV_'
ELSE IF @tblName='REQ_Master_DB'
SET @prefix='REQ_'
ELSE IF @tblName='SW_Master_DB'
SET @prefix='SWI_'
ELSE IF @tblName='HW_Master_DB'
SET @prefix='HWI_'
SET @insPrimaryKey = @prefix + RIGHT(replicate('0',5) + convert(varchar(5),@rowCountVal),5) -- returns something like 'DEF_00005'
-- Following is for inserting into the table --
SELECT @sql = N' INSERT INTO @tableName VALUES ' +
N' ( @PrimaryKey , @ValueStr )'
SELECT @params = N'@tableName nvarchar(10), ' +
N'@PrimaryKey nvarchar(10), ' +
N'@ValueStr nvarchar(300)'
EXEC sp_executesql @sql, @params, @tableName=@tblName, @PrimaryKey=@insPrimaryKey, @ValueStr=@insValueStr
Output Message:
Running [dbo].[spGenericInsert] ( @insValueStr = 2,"Hi",1/1/1987, @tblName = DEFECT_LOG ).
Must declare the table variable "@tableName".
No rows affected.
(0 row(s) returned)
@RETURN_VALUE = 0
Finished running [dbo].[spGenericInsert].
You are going to have to concatenate the table name directly into the string, as this cannot be parameterized:
SELECT @sql = N' INSERT INTO [' + @tblName + '] VALUES ' +
N' ( @PrimaryKey , @ValueStr )'
SELECT @params = N'@PrimaryKey nvarchar(10), ' +
N'@ValueStr nvarchar(300)'
To prevent injection attacks, you should white-list this table name. This approach also isn't robust if the table has other non-nullable columns, etc.
Note: personally, though, I don't think this is a good use of TSQL; it might be more appropriate to construct the command in the client (C# or whatever) and execute it as a parameterized command. There are use-cases for dynamic SQL, but I'm not sure this is a good example of one.
Better yet, use your preferred ORM tool (LINQ-to-SQL, NHibernate, LLBLGen, Entity Framework, etc) to do all this for you, and concentrate on your actual problem domain.
White-listing essentially means making sure that the table name being passed in is a valid table that you want users to be able to insert into. Let's just say, for argument's sake, that the table name is user-provided; the user could then start inserting records into system tables.
You can do a white-list check by bouncing the table name off the sysobjects table:
select * from sysobjects where name=@tblname and xType='U'
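Putting both points together, a minimal sketch (parameter names assumed from the question) that validates the name before concatenating and brackets it with QUOTENAME:
IF NOT EXISTS (SELECT 1 FROM sysobjects WHERE name = @tblName AND xType = 'U')
BEGIN
RAISERROR('Table name is not on the white list.', 16, 1)
RETURN
END
-- QUOTENAME safely brackets the identifier once it has passed the check
SELECT @sql = N' INSERT INTO ' + QUOTENAME(@tblName) +
N' VALUES ( @PrimaryKey , @ValueStr )'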
However, as Marc suggested, this is not a good use of TSQL, and you're better off handling this in the app tier as a parameterized query.
Agree with Marc - overall this is an extremely poor idea. Generic inserts/updates/deletes eventually cause problems for the database.
Another point is that this process will have problems when two users run simultaneously against the same table, as they will both compute the same row count and try to insert the same primary key.
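A hedged sketch of one way around that race (dbo.TableCounters is a hypothetical table seeded with one row per target table): T-SQL's compound assignment inside UPDATE reads and increments the counter atomically under the update lock.
DECLARE @next int
UPDATE dbo.TableCounters
SET @next = NextValue = NextValue + 1  -- atomic read-and-increment
WHERE TableName = @tblName
SET @insPrimaryKey = @prefix + RIGHT(REPLICATE('0',5) + CONVERT(varchar(5), @next), 5)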