SQL Server: a lot of single-use objects - sql-server

We have a database with about 50-60% compilations. That value comes from [SQL Compilations/sec] compared with [Batch Requests/sec].
We think that value is a bit high.
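For reference, those counters can also be sampled from T-SQL. A minimal sketch (note the counters are cumulative since instance start, so take two samples a few seconds apart and divide the deltas to get the live ratio):
-- Raw cumulative values of the two counters used for the ratio above.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%SQL Statistics%'
  AND counter_name IN (N'SQL Compilations/sec', N'Batch Requests/sec');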
If we look at this query:
SELECT TOP 150
qs.plan_generation_num,
qs.execution_count,
qs.statement_start_offset,
qs.statement_end_offset,
st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.plan_generation_num > 1
ORDER BY plan_generation_num DESC
We don't see high plan_generation_num values compared to the execution counts.
What we do have is a lot of single-use objects, and I am trying to figure out why.
Our application is built in ASP.NET and we always use parameterized queries. We use both stored procedures and SQL statements in the application, but always parameterized.
The website that runs against this database is pretty big, with about 500,000 page views each day and about 10,000 requests per minute, if this information helps.
We have no long-running queries, and indexes and statistics are in order. This is one of the last things to optimize.
CPU averages 15%.
RAM is about 100 GB and is, of course, used up by SQL Server.
We use SQL Server 2014 Enterprise.
One thing I started wondering about: if I have a SQL statement like this
SELECT doors, windows, seats FROM cars WHERE Wheels = #Wheels AND Active = 1
will this plan not be reused because we don't set a parameter on this part: AND Active = 1?
Any ideas on how to find out why we have so many single-use plans?
The count of cached plans is about 20,000. In comparison, we have about 700 stored procedures and a lot more queries in the app.
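One way to get a handle on this is to break the plan cache down by use count. A minimal sketch using the standard plan-cache DMVs (not specific to this application):
-- How many cached plans are single-use, and how much cache they occupy,
-- broken down by object type (Adhoc, Prepared, Proc, ...).
SELECT cp.objtype,
       COUNT(*) AS single_use_plans,
       SUM(CAST(cp.size_in_bytes AS bigint)) / 1048576 AS size_mb
FROM sys.dm_exec_cached_plans AS cp
WHERE cp.usecounts = 1
GROUP BY cp.objtype
ORDER BY size_mb DESC;
If most of the single-use plans land in the Adhoc bucket, the instance-level option "optimize for ad hoc workloads" is usually worth evaluating, since it caches only a small stub until a plan's second use.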

Related

SQL Server execution plan for different TOP count

I'm working on an API that exposes data from a legacy view in SQL Server, and I ran into a performance issue: the response is extremely slow. The view has 10+ joined tables and a fairly complex WHERE condition. The total number of rows in the view is approximately 7,000.
After testing, I found that the execution time is related to the TOP count:
If TOP is 15, it takes ~10s
If TOP is 50, it takes ~500ms
With binary search, I found the boundary is 30.
TOP 15 and TOP 50 get different execution plans. In the end I added OPTION (RECOMPILE), which brings the average response to about 800ms, which is still slow but acceptable.
My question is: why does this happen? Is there any way to let SQL Server choose the faster execution plan for TOP 15 without OPTION (RECOMPILE)?
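For what it's worth, this kind of plan flip is typically driven by the optimizer's row goal: a small TOP makes the optimizer cost a plan that assumes matching rows will be found quickly. If you are on SQL Server 2016 SP1 or later, one alternative to OPTION (RECOMPILE) worth trying is disabling the row goal for the statement. A sketch with hypothetical view and column names:
-- DISABLE_OPTIMIZER_ROWGOAL makes the optimizer cost the query as if there
-- were no TOP, so TOP 15 and TOP 50 tend to compile to the same plan.
SELECT TOP (15) *
FROM dbo.LegacyView      -- hypothetical name for the legacy view
ORDER BY SomeColumn      -- hypothetical ordering column
OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL'));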

SQL Server report "Performance - Top Queries by Total CPU Time" - what's included

I'm trying to assess the load that insert queries place on SQL Server 2008/2016.
There are articles I found which discuss that, like:
http://use-the-index-luke.com/sql/dml/insert
which talks about execution time.
I'm not very proficient in SQL Server; e.g., I don't know how to evaluate execution plans.
I know there are handy performance reports, like "Performance - Top Queries by Total CPU Time".
I've searched and have not found definitions of those reports.
So the question is: which server tasks does this report include in the CPU time calculations of queries, i.e.
index recalculation?
maybe even execution of triggers?
something else?
Thank you!
These are MDW, or Management Data Warehouse, reports, in particular the Query Statistics History introduced in SQL Server 2008. If you are interested in collecting this data, then enable and configure the Management Data Warehouse.
What are these reports, anyway?
By default, only the top 10 queries will be included in the Top 10 Queries by CPU report; however, you can emulate the query behind the report and tweak the desired outcome using a query similar to the one below, as discussed in this article.
SELECT TOP X
qs.total_worker_time/(qs.execution_count*60000000) as [Minutes Avg CPU Time],
qs.execution_count as [Times Run],
qs.min_worker_time/60000000 as [CPU Time in Mins],
SUBSTRING(qt.text,qs.statement_start_offset/2,
(case when qs.statement_end_offset = -1 then len(convert(nvarchar(max), qt.text)) * 2
else qs.statement_end_offset end -qs.statement_start_offset)/2) as [Query Text],
db_name(qt.dbid) as [Database],
object_name(qt.objectid) as [Object Name]
FROM sys.dm_exec_query_stats qs cross apply
sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Minutes Avg CPU Time] DESC
Index recalculations and trigger executions are not counted as part of a query in this report. Index updates are part of maintenance activity, and trigger executions are part of insert/update/delete activity.
Generally speaking, there are no "server tasks" included in the calculations for the Top Queries report. A query execution plan is based on the query and the data statistics available at the start of query compilation. The plan generated is independent of maintenance or IUD activity taking place on the server.
It is possible that other activity may cause the actual duration to increase, but that additional time is not directly attributable to the query. The query is just forced to wait while the other activity completes.
Does that help?
Here is a modified query which shows the top CPU time consumers.
It is not an average; it is a total.
It is also grouped by query_plan_hash, so the same query with different parameters ends up in one group.
Note 1: if a query runs frequently (about once a second), its statistics will be flushed roughly every hour.
Note 2: the user name will only be present if the query is running at that moment.
Note 3: if you need to keep stats for a long time, you will need to store them somewhere separately. Adding a grouping by date will also help with reporting.
SELECT TOP (10)
    SUM(qs.total_worker_time) / 1000000 AS [CPU Time Seconds],
    SUM(qs.execution_count) AS [Times Run],
    qs.query_plan_hash AS [Hash],
    MIN(qs.creation_time) AS [Creation Time],
    MIN(qt.text) AS [Query],
    MIN(USER_NAME(r.user_id)) AS [UserName]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
LEFT JOIN sys.dm_exec_requests AS r
    ON qs.query_plan_hash = r.query_plan_hash
GROUP BY qs.query_plan_hash
ORDER BY [CPU Time Seconds] DESC

How do I figure out what is causing Data IO spikes on my Azure SQL database?

I have an Azure SQL production database that runs at around 10-20% DTU usage on average; however, I get DTU spikes that take it upwards of 100% at times. Here is a sample from the past hour:
I realize this could be a rogue query, so I switched over to the Query Performance Insight tab, and I found the following for the past 24 hours:
This chart makes sense with regard to the CPU usage line. Query 3780 takes the majority of the CPU, as expected with my application. The overall DTU (red) line seems to follow this correctly (minus the spikes).
However, in the DTU Components chart I can see large Data IO spikes occurring that coincide with the overall DTU spikes. Switching over to the top 5 queries by Data IO, I see the following:
This seems to indicate that there are no queries using high amounts of Data IO.
How do I find out where this Data IO usage is coming from?
Finally, I see that there is one "odd ball" query (7966) listed under the top 5 queries by Data IO, with only 5 executions. Selecting it shows the following:
SELECT StatMan([SC0], [SC1], [SC2], [SB0000])
FROM (SELECT TOP 100 PERCENT [SC0], [SC1], [SC2], step_direction([SC0]) over (order by NULL) AS [SB0000]
FROM (SELECT [UserId] AS [SC0], [Type] AS [SC1], [Id] AS [SC2] FROM [dbo].[Cipher] TABLESAMPLE SYSTEM (1.828756e+000 PERCENT)
WITH (READUNCOMMITTED) ) AS _MS_UPDSTATS_TBL_HELPER
ORDER BY [SC0], [SC1], [SC2], [SB0000] ) AS _MS_UPDSTATS_TBL
OPTION (MAXDOP 16)
What is this query?
It does not look like any query that my application has created or uses. The timestamps on the details chart seem to line up with the approximate times of the overall Data IO spikes (just prior to 6am), which leads me to think this query has something to do with all of this.
Are there any other tools I can use to help isolate this issue?
The query is updating statistics. This occurs when the AUTO_UPDATE_STATISTICS setting is on. This should be kept on, and you shouldn't turn it off; this is a best practice.
You should update stats manually only when you see a query not performing well and the stats for that query are off.
Below are the rules for when SQL Server will update stats automatically for you:
When a table with no rows gets a row
When 500 rows are changed in a table that has fewer than 500 rows
When 20% + 500 rows are changed in a table with more than 500 rows
By 'change' we mean a row being inserted, updated, or deleted. So, yes, even the automatically-created statistics get updated and maintained as the data changes. There were some changes to these rules in recent versions, and SQL Server can update stats more often.
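To see how close a table's statistics are to those thresholds, something like the following can help (a sketch using sys.dm_db_stats_properties, available from SQL Server 2008 R2 SP2 onward; dbo.Cipher is the table sampled by the StatMan query above):
-- Rows and accumulated modifications per statistic on the table.
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.Cipher')
ORDER BY sp.modification_counter DESC;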
References:
https://www.sqlskills.com/blogs/erin/understanding-when-statistics-will-automatically-update/
It seems that the query is part of the automatic statistics update process. To mitigate the impact of this process on production, you can regularly update statistics and indexes using runbooks, as explained here. Run sp_updatestats to immediately try to mitigate the impact of this process.
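As a concrete sketch, the immediate mitigation plus a more targeted variant (dbo.Cipher being the table the StatMan query was sampling) would look like this:
-- Database-wide: refresh any statistics with pending modifications.
EXEC sp_updatestats;
-- Targeted: rebuild statistics on the sampled table with a full scan
-- instead of the default sample, at the cost of more IO while it runs.
UPDATE STATISTICS dbo.Cipher WITH FULLSCAN;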

Performance problems temporarily fixed by sp_updatestats, despite daily sp_updatestats execution

I see a similar question here from 2013, but it was not answered, so I am posting my version.
We are using SQL Server 2008 (SP4) - 10.0.6000.29 (X64) and have a database that is about 70 GB in size with about 350 tables. On a daily basis, there are only a small number of updates occurring, though a couple of times a year we dump a fair amount of data into it. There are several Windows services that constantly query the database but rarely update it. There are also several websites and desktop applications that use it (again, with minimal daily updates).
The problem we have is that every once in a while, a query that hits certain records will take much longer than normal. The following is a bogus example:
This query against two tables with fewer than 600 total records might take 30+ seconds:
select *
from our_program_access bpa
join our_user u on u.user_id = bpa.user_id
where u.user_id = 50 and program_name = 'SomeApp'
But when you change the user_id value to another user record, it takes less than one second:
select *
from our_program_access bpa
join our_user u on u.user_id = bpa.user_id
where u.user_id = 51 and program_name = 'SomeApp'
The real queries being used are a little more complex, but the idea is the same: searching for ID 50 takes 30+ seconds, searching for ID 51 takes less than 1 second, but both return only 1 record out of about 600 total.
We have found that the issue seems related to statistics. When this problem occurs, we run sp_updatestats, and afterwards all the queries run equally fast. So we started to run sp_updatestats in a maintenance plan every night, but the problem still pops up. We also tried setting AUTO_UPDATE_STATISTICS_ASYNC on, but the problem eventually popped up again.
While the database is large, it doesn't really undergo tremendous changes, though it does face constant queries from different services.
There are several other databases on the same server, such as a mail log, SharePoint, and web filtering. Overall, performance is very good until we run into this problem.
Does it make sense that a database that undergoes relatively small daily changes would need sp_updatestats run so frequently? What else can we do to resolve this issue?
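One thing worth trying (a sketch using the table names from the bogus example above) is to update statistics on the affected tables directly rather than relying on sp_updatestats, which uses the default sampling rate and only touches statistics with recorded modifications:
-- Rebuild statistics on the two example tables, reading every row
-- instead of the default sample, so the histograms cover values like
-- user_id 50 accurately.
UPDATE STATISTICS dbo.our_program_access WITH FULLSCAN;
UPDATE STATISTICS dbo.our_user WITH FULLSCAN;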

Determining which SQL Server database is spiking the CPU

We are running SQL Server 2008 with currently around 50 databases of varying size and workload. Occasionally SQL Server spikes the CPU completely for about a minute, after which it drops back to the normal baseline load.
My problem is that I can't determine which database or connection is causing it (I'm fairly sure it is one specific query that is missing an index, or something like that).
I have found T-SQL queries that give you a frozen image of current processes. There is also the "recent expensive queries" view and, of course, the Profiler, but it is hard to map those to a "this is the database that is causing it" answer.
What makes it even harder is that the problem disappears before I have even fired up the Profiler or Activity Monitor, and it only happens about once or twice a day.
Ideally, I would like to use a performance counter so I could simply run it for a day or two and then take a look at what caused the spikes. However, I cannot find any relevant counter.
Any suggestions?
This will help, courtesy of Glenn Berry, adapted from Robert Pearl:
WITH DB_CPU_Stats AS
(
    SELECT DatabaseID,
           DB_NAME(DatabaseID) AS [DatabaseName],
           SUM(total_worker_time) AS [CPU_Time_Ms]
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY (SELECT CONVERT(int, value) AS [DatabaseID]
                 FROM sys.dm_exec_plan_attributes(qs.plan_handle)
                 WHERE attribute = N'dbid') AS F_DB
    GROUP BY DatabaseID
)
SELECT ROW_NUMBER() OVER (ORDER BY [CPU_Time_Ms] DESC) AS [row_num],
       DatabaseName,
       [CPU_Time_Ms],
       CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms]) OVER () * 100.0 AS decimal(5, 2)) AS [CPUPercent]
FROM DB_CPU_Stats
WHERE DatabaseID > 4        -- exclude system databases
  AND DatabaseID <> 32767   -- exclude ResourceDB
ORDER BY row_num
OPTION (RECOMPILE);
Run a Profiler trace logging the database name and CPU during a spike, load the data into a table, then count and group on database:
SELECT DatabaseName, SUM(CPU) AS TotalCPU
FROM Trace
GROUP BY DatabaseName
Have a look at sys.dm_exec_query_stats. The total_worker_time column is a measure of CPU. You may be able to accomplish what you're trying to do in one look at the view. You may, however, need to come up with a process to take "snapshots" of the view and compare successive ones: look at the data in the view, compare it to the data five minutes later, and examine the differences. The differences will be the amount of resources consumed between the two snapshots. Good luck!
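A minimal sketch of that snapshot approach (the table name dbo.CpuSnapshots is hypothetical):
-- One-time setup: a table to hold periodic captures.
CREATE TABLE dbo.CpuSnapshots (
    capture_time      datetime2     NOT NULL,
    sql_handle        varbinary(64) NOT NULL,
    total_worker_time bigint        NOT NULL
);
-- Run every few minutes (e.g. from a SQL Agent job). Grouping by
-- sql_handle because a batch can have several statements in the DMV.
INSERT INTO dbo.CpuSnapshots (capture_time, sql_handle, total_worker_time)
SELECT SYSDATETIME(), qs.sql_handle, SUM(qs.total_worker_time)
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.sql_handle;
-- CPU (microseconds) consumed per query between the two latest snapshots.
WITH ranked AS (
    SELECT *, DENSE_RANK() OVER (ORDER BY capture_time DESC) AS rn
    FROM dbo.CpuSnapshots
)
SELECT cur.sql_handle,
       cur.total_worker_time - prev.total_worker_time AS cpu_delta_us
FROM ranked AS cur
JOIN ranked AS prev
  ON prev.sql_handle = cur.sql_handle
 AND prev.rn = 2
WHERE cur.rn = 1
ORDER BY cpu_delta_us DESC;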
Have you tried correlating SQL Server Profiler with Performance Monitor?
When you correlate the data, you can see spikes in performance related to DB activity.
http://www.sqlservernation.com/home/relating-sql-server-profiler-with-performance-monitor.html
