Stored procedures eating CPU on SQL Server 2005

I am trying to resolve 100% CPU usage by the SQL Server process on the database server. While investigating, I found that stored procedures account for the most worker time.
With the following DMV query to find the queries with the highest worker time,
SELECT TOP 20 st.text
,st.dbid
,st.objectid
,qs.total_worker_time
,qs.last_worker_time
,qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC
most of the results are stored procedures. The weird thing is that these stored procedures all query different tables, and yet they sit at the top by worker time, even though when I look at Profiler for the queries with the top CPU, Reads, and Duration, the stored procedures don't appear at the top there.
Why could this be happening?
==Edit==
The application actually uses ad hoc queries more than stored procedures, and some of these procedures are due to be migrated to ad hoc queries. The thing is that these procedures are not called as often as some of the other queries, which are CPU intensive and called very frequently.
It also strikes me as odd that a stored procedure that does a simple select a,b,c from tbl where id=@id would have a higher total worker time than a query with multiple joins, user-defined functions in the WHERE clause, a sort, and a ROW_NUMBER() OVER, when the simple one queries a table with 20,000 records and the complex one runs against a table with over 200,000 records.
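To compare cost per call rather than cumulative cost, I am also looking at something like this (same DMVs as above, just normalized by execution_count; note that both counters reset whenever a plan leaves the cache):
SELECT TOP 20 st.text
,qs.execution_count
,qs.total_worker_time
,qs.total_worker_time / qs.execution_count AS avg_worker_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY avg_worker_time DESC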

It depends on what you are doing in that stored procedure and how your tables, indexes, etc. are tuned.
For example, if a stored procedure loops with a cursor, you can max out your CPU. Likewise, if your indexes are not set up properly and you run selects with several joins, you will overload your CPU.
It all comes down to how you use your database.
Some tips:
I recommend using stored procedures most of the time.
When you create your stored procedures, run them with the execution plan to get suggestions.
If a table has a lot of records (more than a million or two), think about creating indexed views, as sketched below.
Only update or delete when necessary; sometimes it is better to only insert records and run a daily or weekly job to update or remove the ones you don't need (it depends on the case).
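A rough indexed-view sketch (the table and column names here are made up; SCHEMABINDING, two-part names, and COUNT_BIG(*) are required for indexed views):
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID
,COUNT_BIG(*) AS OrderCount
,SUM(Amount) AS TotalAmount -- Amount must be a NOT NULL column for an indexed view
FROM dbo.Orders
GROUP BY CustomerID
GO
-- the unique clustered index is what actually materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (CustomerID)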

Related

How to track which query in the corresponding stored procedure is executing

Our team has assigned me a new task: to look into the performance of a huge stored procedure. When I observe it in the development environment with proper data, the procedure takes a considerable time to execute; in my scenario it took around 45 minutes. There are multiple INSERT/DELETE/UPDATE queries used in the stored procedure, but I am unable to tell which query is causing the issue. The data volume used in the queries is also pretty small.
How can I pinpoint the exact query in the stored procedure
which is getting executed?
My Server version is SQL Server 2008 R2.
There are a couple of ways to find the query that is executing as part of a stored procedure. Learn about DMVs and SQL Profiler; both give you enough insight to pinpoint the queries being run inside a stored procedure.
In SQL Profiler, use the SP:StmtCompleted or SP:StmtStarting events to capture the statements inside the procedure. But I would advise against the Profiler, as it adds noticeable overhead on the server and may also give you a lot of unwanted extra information.
The better way is to use DMVs (Dynamic Management Views). If you know the process id (SPID), use the queries below.
The first query gives you details about the request the stored procedure is running, and the second gives you the exact statement that is currently executing. Replace SPID in both queries with the SPID of your process.
SELECT requests.session_id,
requests.status,
requests.command,
requests.statement_start_offset,
requests.statement_end_offset,
requests.total_elapsed_time,
details.text
FROM sys.dm_exec_requests requests
CROSS APPLY sys.dm_exec_sql_text (requests.sql_handle) details
WHERE requests.session_id = SPID
ORDER BY total_elapsed_time DESC
SELECT SUBSTRING(detail.text,
requests.statement_start_offset / 2 + 1,
-- statement_end_offset = -1 means the statement runs to the end of the batch
(CASE WHEN requests.statement_end_offset = -1
THEN DATALENGTH(detail.text)
ELSE requests.statement_end_offset
END - requests.statement_start_offset) / 2 + 1)
FROM sys.dm_exec_requests requests
CROSS APPLY sys.dm_exec_sql_text (requests.sql_handle) detail
WHERE requests.session_id = SPID
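If you don't already know the SPID, you can list the active requests first (session ids up to 50 are generally reserved for system sessions):
SELECT session_id, status, command, start_time
FROM sys.dm_exec_requests
WHERE session_id > 50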
Once you have identified the queries causing the slowness, you can use the actual execution plan to find the issues in them.
Try this and please comment on whether it works for you.

Is recompiling a long-running query a good habit?

I have some long-running (a few hours) stored procedures that contain queries against tables with millions of records in a distributed environment. These stored procedures take a date parameter and filter the tables according to it.
I've been thinking that, because of SQL Server's parameter-sniffing behavior, the first time my stored procedure gets called the query execution plan will be cached for that specific date, and any future calls will reuse that exact plan. And since creating an execution plan takes only a few seconds, why would I not use the RECOMPILE option in my long-running queries? Does it have any cons that I have missed?
If the query otherwise runs within your acceptable performance limits and you suspect parameter sniffing is the cause, I suggest you add a RECOMPILE hint to the query.
Also, if the query is part of a stored proc, instead of recompiling the entire proc you can do a statement-level recompilation, like:
create proc procname
(
@a int
)
as
begin
select * from tbl where a=@a
option (recompile)
--no recompile here
select * from tbl t1
join t2 on t1.id=t2.id
end
Also, keep in mind that recompiling a query has a cost. But to quote Paul White:
There is a price to pay for the plan compilation on every execution, but the improved plan quality often repays this cost many times over.
Query Store in SQL Server 2016 helps you track these issues and also stores plans for your queries over time, so you can see which ones are performing worse.
If you are not on 2016, William Durkin has developed Open Query Store for earlier versions (2008-2014), which works more or less the same way and helps you troubleshoot these issues.
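As a rough sketch (the database name is made up), enabling Query Store and pulling the top CPU consumers it has captured looks like this:
ALTER DATABASE MyDb SET QUERY_STORE = ON

SELECT TOP 10 qt.query_sql_text
,rs.count_executions
,rs.avg_cpu_time
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_cpu_time DESC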
Further reading:
Parameter Sniffing, Embedding, and the RECOMPILE Options

Select statement for over 500k records

I'm using this SELECT statement:
SELECT ID, Code, ParentID,...
FROM myTable WITH (NOLOCK)
WHERE ParentID = 0x0
This statement is repeated every 15 minutes (through a Windows service).
The problem is that the database becomes slow for other users while this query is running.
What is the best way to avoid slow performance while the query runs?
Generate an execution plan for your query and inspect it.
Is the ParentId field indexed?
Are there other ways you might optimize the query?
Is it possible to increase the performance of the server that is hosting SQL Server?
Does it need more disk or RAM?
Do you have separate drives (spindles) for operating system, data, transaction logs, temporary databases?
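If ParentId turns out not to be indexed, a covering index along these lines might help (the index name is hypothetical; extend INCLUDE with the other columns your SELECT actually reads):
CREATE NONCLUSTERED INDEX IX_myTable_ParentID
ON myTable (ParentID)
INCLUDE (ID, Code)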
Something else to consider - must you always retrieve the very latest values from this table for your application, or might it be possible to cache prior results and use those for some length of time?
It seems your table has a huge number of records. You could implement page-wise retrieval of the data: first request, say, the TOP 100 rows, then make further calls to fetch the rest, as in the sketch below.
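A keyset-paging sketch (this assumes ID is an indexed, increasing key; @LastSeenID is a stand-in variable carried over from the previous batch):
DECLARE @LastSeenID int
SET @LastSeenID = 0 -- highest ID returned by the previous batch

SELECT TOP 100 ID, Code, ParentID
FROM myTable
WHERE ParentID = 0x0
AND ID > @LastSeenID
ORDER BY ID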
I still don't understand the need to run such a query every 15 minutes. You might instead implement a stored procedure that performs the bulk of the processing and returns a small subset of the data. That would be a good improvement if it suits your requirements.

Why do we see large swings in Stored Procedure Performance

I'm working on a Stored Procedure where we're seeing high variance in performance. All the procedure does is read from a set of tables.
The issue is that it often starts off with <1s performance. Then, later in the week, the same queries run in >7s. We'd like to keep this <1s. The issue may be related to Parameter Sniffing but recompile statements do not positively OR negatively influence performance in this case. We've also reproduced this behavior on 2 independent systems of identical hardware.
Here's what we know:
3 of the 8 tables used have a high update/insert rate, but with small amounts of data. Covering indexes are in place everywhere a table is touched. Most tables are very small and fast to access.
The queries require similar plan behavior: look up 1 project and read its 12-month summary at 1 row/month, then pivot all 12 months of cost, revenue, and budget (each measure gets 12 columns, so 36 columns are output).
Here's what we've tried to no avail:
Query hints (FORCESEEK)
Ran queries with OPTION (RECOMPILE), and also forced every call to the stored procedure to recompile:
CREATE PROCEDURE usp_ProjectSummary @ProjectID INT
WITH RECOMPILE
Specified unknown variables using hints, for example:
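(The table and column names here are simplified stand-ins for the real schema; OPTIMIZE FOR ... UNKNOWN requires SQL Server 2008 or later.)
CREATE PROCEDURE usp_ProjectSummary @ProjectID INT
AS
BEGIN
SELECT ProjectID, MonthNo, Cost, Rev, Budget
FROM dbo.ProjectMonthlySummary
WHERE ProjectID = @ProjectID
OPTION (OPTIMIZE FOR (@ProjectID UNKNOWN))
END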
Here's what's helped and where we have landed so far:
Temp tables
Force Fixed Plan with
OPTION (KEEPFIXED PLAN)
Routinely run the following 2 statements to force a good query plan:
EXEC sp_recompile usp_ProjectSummary
EXEC usp_ProjectSummary @ProjectID

How to know the most used procedures in my database?

I have completed my project; many procedures are used in it, and now my job is to find the most used procedures and their average execution times, so that I know which procedures to tune first.
Is there any way to get procedure execution history for a particular database?
I believe you can use the sys.dm_exec_query_stats dynamic management view. There are two columns in this view, execution_count and total_worker_time, that will help you.
execution_count gives the total number of times the plan in question was executed since it was last compiled.
total_worker_time gives the total CPU time, in microseconds, spent executing that plan since it was last compiled.
Here is an MSDN link:
http://msdn.microsoft.com/en-us/library/ms189741.aspx
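On SQL Server 2008 and later there is also sys.dm_exec_procedure_stats, which aggregates at the procedure level directly, so you can get the call count and the average CPU per call in one query:
SELECT TOP 20 OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name
,ps.execution_count
,ps.total_worker_time
,ps.total_worker_time / ps.execution_count AS avg_worker_time
FROM sys.dm_exec_procedure_stats ps
WHERE ps.database_id = DB_ID()
ORDER BY ps.execution_count DESC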
You can use dm_exec_cached_plans to look for the stored procedures that have been compiled into query plans. The function dm_exec_query_plan can be used to retrieve the object id for a plan, which in turn can be translated into the procedure's name:
select object_name(qp.objectid)
, cp.usecounts
from sys.dm_exec_cached_plans cp
cross apply
sys.dm_exec_query_plan(cp.plan_handle) qp
where cp.objtype = 'Proc'
order by
cp.usecounts desc
I think you want to check SQL Server Profiler for this.
You can find the details on MSDN and in other places as well.
But before using it on a production server, keep in mind that Profiler adds a lot of overhead. So run it first during a period when your site gets fewer hits, and go from there.
This is what the SQL Server Profiler is for. With it you can keep track of query run count, execution time, etc.
