I have set up a stored procedure tracker table on our databases with the hope of using it to flush out procedures that we no longer use. I set this up a few months ago and am now ready to start the cleansing. The table utilises the sys.procedures catalog view and the sys.dm_exec_procedure_stats DMV in SQL Server 2008 R2, and a job updates the static table every 10 minutes, 24 hours a day.
I have been checking through my list of procedures and have come across a couple that I know for a fact have run very recently. The particular one I have found runs as step 2 of a job, yet sys.dm_exec_procedure_stats doesn't seem to contain any record of it having run, whereas the procedure in step 1 appeared at the correct time. I have checked the job history, and both steps 1 and 2 ran without any problems.
The only difference I can see is that the procedure in step 2 comes up with a "Warning: Null value is eliminated by an aggregate or other SET operation" whereas step 1 doesn't. Does this make a difference as to whether or not the procedure will appear in sys.dm_exec_procedure_stats?
Hope someone can help!
While the reason that it doesn't show up in the DMV is likely the one specified in the linked/related answer mentioned by @bastos.sergio in a comment on the question, that still leaves the issue of "what can be done to find procs that are not being used?".
The accepted answer in that linked question (this is the question referenced by @bastos.sergio: Last Run Date on a Stored Procedure in SQL Server) is missing something, so I will add to it here:
The ONLY way to know what is calling it is:
scan all code (app code, other Stored Procs, Job Steps [in msdb.dbo.sysjobsteps], SSRS report definition files, etc.) for references
IF you allow ad hoc access (e.g. someone referenced a Stored Proc in an Access app [or any Microsoft Office "app"]) then you need to do some of the additional steps mentioned in the accepted answer of that linked question, namely:
Add a RAISERROR(N'Deprecated! Please contact YourName.', 16, 1); RETURN; at the top of the proc and leave it there for a month or two.
Add a table to log proc calls and an INSERT into that log table at the top of any of the supposedly obsolete code, and check once a week to see if anything shows up. If also doing the RAISERROR, put the INSERT prior to the RAISERROR(...); RETURN; (a sketch combining both follows this list).
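As a rough illustration of combining both suggestions, something like the following can work. The log table and the procedure name here are hypothetical; adapt them to your environment:

-- Hypothetical log table for capturing calls to supposedly obsolete procs
CREATE TABLE dbo.ObsoleteProcCallLog
(
    ProcName  sysname       NOT NULL,
    CalledOn  datetime      NOT NULL DEFAULT (GETDATE()),
    CalledBy  nvarchar(128) NOT NULL DEFAULT (ORIGINAL_LOGIN())
);
GO

ALTER PROCEDURE dbo.SomePossiblyObsoleteProc
AS
BEGIN
    -- Log the call first, so it is captured even though we then error out
    INSERT INTO dbo.ObsoleteProcCallLog (ProcName)
    VALUES (N'dbo.SomePossiblyObsoleteProc');

    RAISERROR(N'Deprecated! Please contact YourName.', 16, 1);
    RETURN;

    -- ...original proc body left below, now unreachable...
END;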
With regards to ad hoc access (i.e. access outside of the code that you control), be careful to always keep in mind that infrequent access can be just that: infrequent. If there is a code path that is executed monthly, quarterly, bi-annually, annually, or whenever some manager remembers to ask for such and such a report, then you could potentially remove valid code if you do not allow a long enough time frame to capture "highly" infrequent usage (and this is why, even if the DMV data were more reliable, you would still need to be just as cautious).
Again, if all access is within code that you control, just scan your code (most likely using Regular Expressions).
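For the database-side portion of that scan, something like the following can serve as a starting point (the proc name being searched for is hypothetical):

-- Find references to a (hypothetical) proc in other modules of the current DB
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
       OBJECT_NAME(m.object_id) AS ModuleName
FROM sys.sql_modules m
WHERE m.definition LIKE N'%SomePossiblyObsoleteProc%';

-- And in SQL Agent job steps
SELECT j.name AS JobName, s.step_name
FROM msdb.dbo.sysjobsteps s
JOIN msdb.dbo.sysjobs j ON j.job_id = s.job_id
WHERE s.command LIKE N'%SomePossiblyObsoleteProc%';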
EDIT:
To answer the specific question of:
Does the "Warning: Null value is eliminated by an aggregate or other SET operation" warning that the query, running in the Stored Proc that does not show up in the DMV, gets have something to do with why it is not showing up in the DMV?
do the following test:
CREATE PROCEDURE #NoWarning
AS
SELECT AVG(tmp.col)
FROM (
        SELECT 1.0
        UNION ALL
        SELECT 2
     ) tmp(col);
GO
EXEC #NoWarning;
GO

CREATE PROCEDURE #Warning
AS
SELECT AVG(tmp.col)
FROM (
        SELECT 1.0
        UNION ALL
        SELECT NULL
     ) tmp(col);
GO
EXEC #Warning;
And then run the following query and you should see both proc names appearing in "tempdb":
SELECT DB_NAME(ps.database_id) AS [DatabaseName],
       OBJECT_NAME(ps.[object_id], ps.database_id) AS [ProcName],
       *
FROM sys.dm_exec_procedure_stats ps
ORDER BY [DatabaseName], [ProcName];
Related
The SET STATISTICS TIME statement is only useful while developing, as with it one can performance-tune an additional statement being added to the query or the UDF/SP being worked on. However, when one has to performance-tune existing code, e.g. an SP with hundreds or thousands of lines of code, the output of this statement is pretty much useless, as it is not clear to which SQL statement the recorded times belong.
Aren't there any alternatives to SET STATISTICS TIME that also show the statements to which the recorded times belong?
I would recommend using an advanced tool. Here is an example of one call of an SP with each and every internal detail. On the right you have a history of different runs, which can be commented on and analyzed later. All you need for stats/index usage/IO/waits is available on different tabs. Util: SentryOne Plan Explorer (free).
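If a third-party tool is not an option, a rough approximation is possible with the plan-cache DMVs, which record per-statement timings keyed by statement offsets. A minimal sketch, assuming a procedure named dbo.usp_MyProc and run in that procedure's database:

-- Per-statement timings from the plan cache, as an alternative to SET STATISTICS TIME
SELECT
    SUBSTRING(st.text,
              (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.total_worker_time  / 1000 AS total_cpu_ms
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.objectid = OBJECT_ID(N'dbo.usp_MyProc')
ORDER BY qs.statement_start_offset;

This only shows statements whose plans are still cached, so it is an approximation rather than a trace.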
If your Stored Procedures are granular then you could use this DMV to get an idea of times.
SELECT
DB_NAME(qs.database_id) AS DBName
,qs.database_id
,qs.object_id
,OBJECT_NAME(qs.object_id,qs.database_id) AS ObjectName
,qs.cached_time
,qs.last_execution_time
,qs.plan_handle
,qs.execution_count
,total_worker_time
,last_worker_time
,min_worker_time
,max_worker_time
,total_physical_reads
,last_physical_reads
,min_physical_reads
,max_physical_reads
,total_logical_writes
,last_logical_writes
,min_logical_writes
,max_logical_writes
,total_logical_reads
,last_logical_reads
,min_logical_reads
,max_logical_reads
,total_elapsed_time
,last_elapsed_time
,min_elapsed_time
,max_elapsed_time
FROM
sys.dm_exec_procedure_stats qs
I'd create an extended events session similar to the one below:
CREATE EVENT SESSION [proc_statments] ON SERVER
ADD EVENT sqlserver.module_end(
    WHERE ([object_name] = N'usp_foobar')),
ADD EVENT sqlserver.sp_statement_completed(
    SET collect_object_name = (1), collect_statement = (1)
    WHERE ([object_name] = N'usp_foobar'))
ADD TARGET package0.event_file(SET filename = N'proc_statments')
WITH (TRACK_CAUSALITY = ON);
GO
This tracks both stored procedure and stored procedure statement completion for a procedure called usp_foobar. Within the event itself, there's an identifier that helps you tie together which statements were executed as a result of having executed a specific procedure (that's what the TRACK_CAUSALITY is for).
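To use the session, start it, execute the procedure, and then read the event file back. A minimal sketch; note that the .xel files land in the instance's default LOG directory unless a full path is given, so the path pattern below may need adjusting:

ALTER EVENT SESSION [proc_statments] ON SERVER STATE = START;

EXEC dbo.usp_foobar;  -- the procedure being traced

-- Read the captured events back as XML for inspection
SELECT CAST(event_data AS XML) AS event_xml
FROM sys.fn_xe_file_target_read_file(N'proc_statments*.xel', NULL, NULL, NULL);

ALTER EVENT SESSION [proc_statments] ON SERVER STATE = STOP;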
While debugging, I am unable to watch a temp table's values in SQL Server 2012. I can see all of my variables' values and can even print them, but I am struggling with the temp tables. Is there any way to watch a temp table's values?
SQL Server provides the concept of the temporary table, which helps developers in a great way. These tables can be created at runtime and can do all kinds of operations that a normal table can do. But, based on the table type, the scope is limited. These tables are created inside the tempdb database.
While debugging, you can pause the SP at some point. If you write a SELECT statement in your SP before the DROP TABLE statement, the # table is available for querying.
select * from #temp
I placed this code inside my stored procedure and I am able to see the temp table contents inside the "Locals" window.
INSERT INTO #temptable (columns) SELECT columns FROM sometable; -- populate your temp table
-- for debugging; comment out in production
DECLARE @temptable XML = (SELECT * FROM #temptable FOR XML AUTO); -- now view @temptable in the Locals window
This works on older SQL Server 2008, but newer versions would probably support the friendlier FOR JSON instead. Credit: https://stackoverflow.com/a/6748570/1129926
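A variant of the same idea, assuming SQL Server 2016 or later (the variable name is illustrative):

-- for debugging only: JSON can be easier to eyeball than XML (SQL Server 2016+)
DECLARE @temptableJson nvarchar(MAX) = (SELECT * FROM #temptable FOR JSON AUTO);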
I know this is old. I've been trying to make this work too, so that I can view temp table data as I debug my stored procedure. So far nothing works.
I've seen many links to methods on how to do this, but ultimately they don't work the way a developer would want them to work. For example: suppose one has several processes in the Stored Procedure that update and modify data in the same temp table; there is no way to see the updates on the fly for each process in the SP.
This is a VERY common request, yet no one seems to have a solution other than: don't use Stored Procedures for complex processing, due to how difficult they are to debug. If you're a .NET Core/EF 6 developer and have the correct PK/FK relationships set up in the database, you shouldn't really need to use Stored Procedures at all, as it can all be handled by EF6 and you can debug code to view the data results in your entities/models directly (usually in a web API using models/entities).
Trying to retrieve the data from tempdb is not possible, even with the same connection (as has been suggested).
What is sometimes used is:
PRINT '#temptablename'
SELECT * FROM #temptablename
Dotted throughout the code; you can add a debug flag to the SP and selectively debug the output. NOT ideal at all, but it works for many situations (a sketch follows below).
But this MUST already be in the Stored Procedure before execution (not added during). And you must remember to remove the code prior to deployment to a production environment.
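A sketch of that debug-flag pattern (the proc and table names are made up for illustration):

CREATE PROCEDURE dbo.usp_DoWork
    @Debug bit = 0
AS
BEGIN
    -- populate the temp table from a hypothetical source table
    SELECT c.CustomerID, c.Total
    INTO #work
    FROM dbo.Customers c;

    IF @Debug = 1
    BEGIN
        PRINT '#work';
        SELECT * FROM #work;
    END;

    -- ...further processing of #work...
END;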
I'm surprised that in 2022 we still have no solution to this other than don't use complex stored procedures, or use .NET Core/EF 6, which in my humble opinion is the best approach for 2022, since SSMS and other tools like dbForge and RedGate can't accomplish this either.
I am trying to hunt down a certain stored procedure which writes to a certain table (it needs to be changed); however, going through every single stored procedure is not a route I really want to take. So I was hoping there might be a way to find out which stored procedures INSERT or UPDATE a certain table.
I have tried using this method (pinal_daves_blog), but it is not giving me any results.
NOTICE: The stored procedure might not be in the same DB!
Is there another way, or can I somehow check which procedure/function made the last insert or update to the table?
One brute-force method would be to download an add-in from RedGate called SQL Search (free), then do a stored procedure search for the table name. I'm not affiliated at all with RedGate or anything, this is just a method that I have used to find similar things and has served me well.
http://www.red-gate.com/products/sql-development/sql-search/
If you go this route, you just type in the table name, change the 'object types' ddl selection to 'Procedures' and select 'All databases' in the DB ddl.
Hope this helps! I know it isn't the most technical solution, but it should work.
There is no built-in way to tell what function, procedure, or executed batch has made the last change to a table. There just isn't. Some databases have this as part of their transaction logging but SQL Server isn't one of them.
I have wondered in the past whether transactional replication might provide that information, if you already have that set up, but I don't know whether that's true.
If you know the change has to be taking place in a stored procedure (as opposed to someone using SSMS or executing lines of SQL via ADO.NET), then @koppinjo's suggestion is a good one, as is this one from Pinal Dave's blog:
USE AdventureWorks
GO
-- Searching for the Employee table
SELECT Name
FROM sys.procedures
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%Employee%'
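Because the stored procedure might not be in the same DB (as the question notes), the same search can be repeated across all online databases with a bit of dynamic SQL. A sketch, still searching for 'Employee':

-- Search every online database's module definitions for the table name
DECLARE @sql nvarchar(MAX) = N'';

SELECT @sql += N'
SELECT ' + QUOTENAME(d.name, '''') + N' AS DatabaseName, s.name AS SchemaName, p.name AS ProcName
FROM ' + QUOTENAME(d.name) + N'.sys.procedures p
JOIN ' + QUOTENAME(d.name) + N'.sys.schemas s ON s.schema_id = p.schema_id
JOIN ' + QUOTENAME(d.name) + N'.sys.sql_modules m ON m.object_id = p.object_id
WHERE m.definition LIKE N''%Employee%'';'
FROM sys.databases d
WHERE d.state_desc = N'ONLINE';

EXEC sys.sp_executesql @sql;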
There are also dependency functions, though they can be outdated or incomplete:
select * from sys.dm_sql_referencing_entities( 'dbo.Employee', 'object' )
You could run a trace in Profiler. The procedure would have to write to the table while the trace is running for you to catch it.
About 5 times a year, one of our most critical tables has a specific column where all the values are replaced with NULL. We have run log explorers against this and we cannot see any login/hostname populated with the update; we can just see that the records were changed. We have searched all of our sprocs, functions, etc. for any update statement that touches this table on all databases on our server.

The table does have a foreign key constraint on this column. It is an integer value that is established during an update, but the update is identity-key specific. There is also an index on this field. Any suggestions on what could be causing this outside of a T-SQL update statement?
I would start by denying any client-side dynamic SQL if at all possible. It is much easier to audit stored procedures to make sure they execute the correct SQL, including a proper WHERE clause. Unless your SQL Server is terribly broken, the only way data is updated is because of the SQL you are running against it.
All stored procs, scripts, etc. should be audited before being allowed to run.
If you don't have the mojo to enforce no dynamic client SQL, add application logging that captures each client SQL statement before it is executed. Personally, I would have the logging routine throw an exception (after logging it) when a WHERE clause is missing, but at a minimum, you should be able to figure out where data gets blown out next time by reviewing the log. Make sure your log captures enough information that you can trace it back to the exact source. Assign a unique "name" to each possible dynamic SQL statement executed, e.g. assign a 3-char code to each program and then number each possible call 1..nn in your program, so you can tell which call blew up your data at "abc123" as well as the exact SQL that was defective.
ADDED COMMENT
Thought of this later. You might be able to add or modify the UPDATE trigger on the table to look at the number of rows updated, and prevent the update if the number of rows exceeds a threshold that makes sense for you. I did a little searching and found that someone has already written an article on this, as in this snippet:
CREATE TRIGGER [Purchasing].[uPreventWholeUpdate]
ON [Purchasing].[VendorContact]
FOR UPDATE AS
BEGIN
    DECLARE @Count int
    SET @Count = @@ROWCOUNT;

    IF @Count >= (SELECT SUM(row_count)
                  FROM sys.dm_db_partition_stats
                  WHERE OBJECT_ID = OBJECT_ID('Purchasing.VendorContact')
                  AND index_id = 1)
    BEGIN
        RAISERROR('Cannot update all rows', 16, 1)
        ROLLBACK TRANSACTION
        RETURN;
    END
END
Though this is not really the right fix, if you log this appropriately, I bet you can figure out what tried to screw up your data and fix it.
Best of luck
A transaction log explorer should be able to show who executed the command, when, and what specifically the command looked like.
Which log explorer do you use? If you are using ApexSQL Log, you need to enable the connection monitor feature in order to capture additional login details.
This might be like using a sledgehammer to drive in a thumb tack, but have you considered using SQL Server Auditing (provided you are using SQL Server Enterprise 2008 or greater)?
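For reference, the setup would look roughly like this (the audit names, file path, and table name are placeholders for your environment):

USE master;
GO
-- The target folder must already exist
CREATE SERVER AUDIT NullColumnAudit
    TO FILE (FILEPATH = N'C:\SqlAudits\');
ALTER SERVER AUDIT NullColumnAudit WITH (STATE = ON);
GO

USE YourDatabase;
GO
-- Capture every UPDATE against the affected table, by anyone
CREATE DATABASE AUDIT SPECIFICATION NullColumnAuditSpec
    FOR SERVER AUDIT NullColumnAudit
    ADD (UPDATE ON OBJECT::dbo.YourCriticalTable BY public)
    WITH (STATE = ON);
GO

-- Later: read back the captured UPDATE events
SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file(N'C:\SqlAudits\*.sqlaudit', DEFAULT, DEFAULT);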
I have a report that renders data returned from a stored procedure. Using profiler I can catch the call to the stored procedure from the reporting services.
The report fails, stating that it timed out, yet I can execute the stored procedure from SSMS and it returns the data in five to six seconds.
Note: in the example test run, only two rows are returned to the report for rendering, though within the stored procedure it may have been working over thousands or even millions of records in order to collate the result passed back to Reporting Services.
I know the stored procedure could be optimised more but I do not understand why SSRS would be timing out when the execution only seems to take a few seconds to execute from SSMS.
Also another issue has surfaced. If I recreate the stored procedure, the report starts to render perfectly fine again. That is fine except after a short period of time, the report starts timing out again.
The return of the time out seems to be related to new data being added into the main table the report is running against. In the example I was testing, just one hundred new records being inserted was enough to screw up the report.
More correctly, I imagine it's not the report that is the root cause; it is the stored procedure that is causing the time-out when executed from SSRS.
Once it is timing out again, the best fix I have so far is to recreate the stored procedure. This doesn't seem to be an ideal solution.
The problem also only seems to be occurring in our production environment. Our test and development platforms do not seem to be exhibiting the same problem, though dev and test do not have the same volume of records as production.
The problem, as you described it, seems to come from variations in the execution plan of some parts of your stored procedure. Look at what statistics are kept on the tables used and how adding new rows affects them.
If you're adding a lot of rows at the end of the range of a column (think about adding autonumbers, or timestamps), the histogram for that column will become outdated rapidly. You can force an immediate update from T-SQL by executing the UPDATE STATISTICS statement.
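For example (the table name is a placeholder), a statistics refresh could be run after the data load or as the first step of the report job:

-- Rebuild statistics on the main report table with a full scan
UPDATE STATISTICS dbo.MainReportTable WITH FULLSCAN;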
I have also had this issue where the SPROC takes seconds to run yet SSRS simply times out.
I have found from my own experience that there are a couple of different methods to overcome this issue.
The first is parameter sniffing! When your stored procedure is executed from SSRS, it will "sniff" out your parameters to see how your SPROC is using them. SQL Server will then produce an execution plan based on its findings. This is good the first time you execute your SPROC, but you don't want it to be doing this every time you run your report. So I declare a new set of variables at the top of my SPROCs which simply store the parameters passed into the query, and use these new variables throughout the query.
Example:
CREATE PROCEDURE [dbo].[usp_REPORT_ITD001]
    @StartDate DATETIME,
    @EndDate DATETIME,
    @ReportTab INT
AS
-- Deter parameter sniffing
DECLARE @snf_StartDate DATETIME = @StartDate
DECLARE @snf_EndDate DATETIME = @EndDate
DECLARE @snf_ReportTab INT = @ReportTab
...this means that when your SPROC is executed by SSRS, it is only looking at the first few rows of your query for the passed parameters rather than the whole of your query, which cuts down execution time considerably in SSRS.
If your SPROC has a lot of temp tables that are declared as variables (DECLARE @MyTable AS TABLE), these are really intensive on the server (in terms of memory) when generating reports. By using hash temp tables (SELECT MyCol1, MyCol2 INTO #MyTable) instead, SQL Server will store your temp tables in tempdb on the server rather than in system memory, making the report generation less intensive.
Sometimes adding the WITH RECOMPILE option to the CREATE statement of a stored procedure helps.
This is effective in situations where the number of records explored by the procedure changes in such a way that the original execution plan is no longer optimal.
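Applied to the procedure signature shown in the earlier answer, that would look like:

CREATE PROCEDURE [dbo].[usp_REPORT_ITD001]
    @StartDate DATETIME,
    @EndDate DATETIME,
    @ReportTab INT
WITH RECOMPILE  -- a fresh plan is compiled on every execution
AS
BEGIN
    -- ...report query as before; placeholder body shown here...
    SELECT @StartDate AS StartDate, @EndDate AS EndDate, @ReportTab AS ReportTab;
END;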
Basically, all I've done so far is optimise the sproc a bit more, and it seems to at least temporarily solve the problem.
I would still like to know what the difference is between calling the sproc from SSMS and SSRS.