Map SQL Server Commands - sql-server

I'm new to SQL Server (though I've used MySQL on Linux). I have a database that constantly uses all the memory it is allowed on my Windows Server. I've already capped the maximum server memory, but my application is slow.
How do I find out which SQL command/string is using my memory?

Real-time memory usage tracking can be done by setting up counters in Perfmon. Those are a bit cumbersome to set up, so if you're OK with running a query to see a fixed point in time, Glenn Berry has a very comprehensive set of memory diagnostic queries that I use on our SQL Server 2012 database.
https://sqlserverperformance.wordpress.com/2010/10/08/sql-server-memory-related-queries/
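If you just want a quick spot check without building a Perfmon session, the same counters are exposed through the sys.dm_os_performance_counters DMV. A minimal sketch (the counter names below are the standard Buffer Manager / Memory Manager ones; the object_name prefix differs on named instances):
-- Spot-check memory counters without setting up Perfmon.
-- object_name is 'SQLServer:...' on a default instance, 'MSSQL$<instance>:...' on named instances.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
                       N'Total Server Memory (KB)',
                       N'Target Server Memory (KB)')
  AND [object_name] LIKE N'%Manager%';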
As for Glenn Berry's set, the specific query to use in your situation depends on whether you're looking for a query or a stored procedure.
Looking for a query:
-- Find single-use, ad-hoc queries that are bloating the plan cache
SELECT TOP(100) [text], cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE cp.cacheobjtype = N'Compiled Plan'
AND cp.objtype = N'Adhoc'
AND cp.usecounts = 1
ORDER BY cp.size_in_bytes DESC;
Looking for a stored procedure:
-- Top Cached SPs By Total Logical Reads (SQL 2008 and 2008 R2). Logical reads relate to memory pressure
SELECT TOP(25) p.name AS [SP Name], qs.total_logical_reads AS [TotalLogicalReads],
qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads],qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GETDATE()), 0) AS [Calls/Second],
qs.total_elapsed_time, qs.total_elapsed_time/qs.execution_count
AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_logical_reads DESC;

Related

How to check which query or user caused SQL Server 100% CPU Utilization

My SQL Server 2016 database had a 100% CPU utilization issue at 06:30. Now I want to find out which query or user caused it. I searched some articles online, but they all show how to check while the query is running. What I want is to check the history at 06:30. The database is fine now, so if I look at performance now, I can't find any query or job running. How do I check the history to identify the cause?
From SSMS, run Activity Monitor for the server and look at Recent Expensive Queries.
You could try to investigate the top expensive parallel execution queries in the sys.dm_exec_query_stats DMV, where max_worker_time is the maximum CPU time.
SELECT TOP 100 O.type_desc, O.name, S.creation_time, S.last_execution_time, S.min_elapsed_time, S.max_elapsed_time,
S.last_elapsed_time, S.min_worker_time, S.max_worker_time, S.last_worker_time, S.execution_count,
SUBSTRING(T.text, (S.statement_start_offset/2)+1, ((CASE WHEN S.statement_end_offset = -1 THEN DATALENGTH(T.text) ELSE S.statement_end_offset END - S.statement_start_offset)/2)+1) AS statement_text
FROM sys.dm_exec_query_stats AS S
CROSS APPLY sys.dm_exec_sql_text(S.sql_handle) AS T
CROSS APPLY (SELECT type_desc, name FROM sys.objects WHERE object_id = T.objectid) AS O
WHERE T.dbid = DB_ID()
AND S.max_worker_time > S.max_elapsed_time
ORDER BY S.max_worker_time DESC;
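Since you know the incident time, you can also filter the same DMV on last_execution_time. A hedged sketch (the window below is illustrative; note that last_execution_time only reflects each plan's most recent run, so plans executed again after 06:30 won't show up):
-- Narrow the search to statements whose last recorded execution falls in the incident window.
SELECT TOP 50 S.last_execution_time, S.total_worker_time, S.execution_count, T.text
FROM sys.dm_exec_query_stats AS S
CROSS APPLY sys.dm_exec_sql_text(S.sql_handle) AS T
WHERE S.last_execution_time >= '2019-01-01 06:00' -- replace with the incident date/time
  AND S.last_execution_time <  '2019-01-01 07:00'
ORDER BY S.total_worker_time DESC;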

Get the execution count of all stored procedures in a database over the last 24 hours

I want to know how many times stored procedures were executed, and how much time they spent fetching data, over the last 24 hours in SQL Server.
How to Get the Execution Count of a Stored Procedure?
To determine how many times a stored procedure in the cache has been executed, you need a couple of DMVs and a dynamic management function (DMF). The plan_handle of the cached plans is used to join the DMVs together and to retrieve records from the DMF. To get the execution count for each cached SP, you can run the following code:
SELECT DB_NAME(st.dbid) DBName
,OBJECT_SCHEMA_NAME(st.objectid,dbid) SchemaName
,OBJECT_NAME(st.objectid,dbid) StoredProcedure
,max(cp.usecounts) Execution_count
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
where DB_NAME(st.dbid) is not null and cp.objtype = 'proc'
group by cp.plan_handle, DB_NAME(st.dbid),
OBJECT_SCHEMA_NAME(objectid,st.dbid),
OBJECT_NAME(objectid,st.dbid)
order by max(cp.usecounts) desc
Here I used the plan_handle of the sys.dm_exec_cached_plans DMV to obtain the object type, and used the object type to identify stored procedure cached plans. The usecounts column of sys.dm_exec_cached_plans gives the number of times each statement with a cached plan (or SP) has been executed since it was last compiled; I called this Execution_count. I used the plan_handle with the CROSS APPLY operator and the table-valued DMF sys.dm_exec_sql_text to return the object information (DBName, SchemaName, and ObjectName). The output of this SELECT statement is ordered by Execution_count, so the SP with the most executions is displayed first.
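One caveat for the "last 24 hours" requirement: usecounts covers everything since each plan was cached, not a fixed window. On SQL Server 2008 and later, a hedged alternative is sys.dm_exec_procedure_stats, which exposes cached_time and last_execution_time so you can at least restrict the list to procedures that ran within the window:
-- Cached procedures that have executed within the last 24 hours.
-- execution_count still counts from cached_time, not from the start of the window.
SELECT DB_NAME(ps.database_id) AS DBName
,OBJECT_SCHEMA_NAME(ps.object_id, ps.database_id) AS SchemaName
,OBJECT_NAME(ps.object_id, ps.database_id) AS StoredProcedure
,ps.execution_count, ps.cached_time, ps.last_execution_time
FROM sys.dm_exec_procedure_stats ps
WHERE ps.last_execution_time >= DATEADD(HOUR, -24, GETDATE())
ORDER BY ps.execution_count DESC;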
Determining Which SP Uses the Most CPU or I/O, or Has the Longest Duration.
Knowing which SPs are frequently executed is useful, but from a performance standpoint you might rather know which SP consumes the most CPU, which takes the longest to run, or which performs the most physical I/O operations. By modifying the command above, we can easily answer each of these questions.
If you want to show the SP that consumes the most CPU resources, you can run the following T-SQL command:
SELECT DB_NAME(st.dbid) DBName
,OBJECT_SCHEMA_NAME(st.objectid,dbid) SchemaName
,OBJECT_NAME(st.objectid,dbid) StoredProcedure
,max(cp.usecounts) Execution_count
,sum(qs.total_worker_time) total_cpu_time
,sum(qs.total_worker_time) / (max(cp.usecounts) * 1.0) avg_cpu_time
FROM sys.dm_exec_cached_plans cp join sys.dm_exec_query_stats qs on cp.plan_handle = qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
where DB_NAME(st.dbid) is not null and cp.objtype = 'proc'
group by DB_NAME(st.dbid),OBJECT_SCHEMA_NAME(objectid,st.dbid), OBJECT_NAME(objectid,st.dbid)
order by sum(qs.total_worker_time) desc
The sys.dm_exec_query_stats view contains the column total_worker_time, the total number of microseconds for which a given cached query plan has executed. Keep in mind that cached plans are sometimes removed from memory and replaced with newer ones, so the statistics on which SP has consumed the most CPU only reflect the plans that are in the cache when this T-SQL is run.
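To judge how far back those cached statistics actually reach, it can help to look at each plan's creation_time alongside the instance start time; a minimal sketch:
-- When did the instance start, and how old are the oldest surviving plans?
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

SELECT TOP 10 qs.creation_time, qs.last_execution_time,
qs.execution_count, qs.total_worker_time
FROM sys.dm_exec_query_stats qs
ORDER BY qs.creation_time ASC; -- oldest surviving plans first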
To determine which SP has executed the most I/O requests, you can run the following T-SQL code:
SELECT DB_NAME(st.dbid) DBName
,OBJECT_SCHEMA_NAME(objectid,st.dbid) SchemaName
,OBJECT_NAME(objectid,st.dbid) StoredProcedure
,max(cp.usecounts) execution_count
,sum(qs.total_physical_reads + qs.total_logical_reads + qs.total_logical_writes) total_IO
,sum(qs.total_physical_reads + qs.total_logical_reads + qs.total_logical_writes) / (max(cp.usecounts) * 1.0) avg_total_IO
,sum(qs.total_physical_reads) total_physical_reads
,sum(qs.total_physical_reads) / (max(cp.usecounts) * 1.0) avg_physical_read
,sum(qs.total_logical_reads) total_logical_reads
,sum(qs.total_logical_reads) / (max(cp.usecounts) * 1.0) avg_logical_read
,sum(qs.total_logical_writes) total_logical_writes
,sum(qs.total_logical_writes) / (max(cp.usecounts) * 1.0) avg_logical_writes
FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) st
join sys.dm_exec_cached_plans cp on qs.plan_handle = cp.plan_handle
where DB_NAME(st.dbid) is not null and cp.objtype = 'proc'
group by DB_NAME(st.dbid),OBJECT_SCHEMA_NAME(objectid,st.dbid), OBJECT_NAME(objectid,st.dbid)
order by sum(qs.total_physical_reads + qs.total_logical_reads + qs.total_logical_writes) desc
Here I have displayed the total physical and logical read I/Os, plus the logical write I/Os. In addition, I have calculated the average number of I/Os per execution of each SP. Physical reads are reads actually made against the physical disk drives, whereas logical reads and writes are I/Os against the cached data pages in the buffer cache. Adding the physical and logical I/Os together gives the total I/Os for each SP.
To determine which SPs take the longest time to execute, I can use the following T-SQL code:
SELECT DB_NAME(st.dbid) DBName
,OBJECT_SCHEMA_NAME(objectid,st.dbid) SchemaName
,OBJECT_NAME(objectid,st.dbid) StoredProcedure
,max(cp.usecounts) execution_count
,sum(qs.total_elapsed_time) total_elapsed_time
,sum(qs.total_elapsed_time) / (max(cp.usecounts) * 1.0) avg_elapsed_time
FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) st
join sys.dm_exec_cached_plans cp on qs.plan_handle = cp.plan_handle
where DB_NAME(st.dbid) is not null and cp.objtype = 'proc'
group by DB_NAME(st.dbid),OBJECT_SCHEMA_NAME(objectid,st.dbid), OBJECT_NAME(objectid,st.dbid)
order by sum(qs.total_elapsed_time) desc
In this T-SQL, I calculate the duration of each SP by summing total_elapsed_time from the sys.dm_exec_query_stats DMV by database, schema, and object name. I also calculate the average elapsed time per execution of each SP. The output is ordered so the SP with the longest total duration is displayed first. If you were interested in the SP with the longest average duration, all you would need to change is the ORDER BY clause, to sort by avg_elapsed_time.

SQL Server 2008 High CPU usage

We have a Windows 2008 R2 server running SQL Server 2008. That server hosts multiple .NET sites that have SQL Server databases on it.
We are currently experiencing an average CPU usage of 95%, and SQL Server is responsible for most of that usage.
I would like to identify which site is responsible so we can either optimize it or move it to another server, but I haven't found any direct way of doing this.
I've been trying to find:
The database that is getting the most CPU intensive queries
The process that is responsible for CPU intensive queries
One thing that further complicates matters is that we have multiple sites and "crons" using the same database. So once I identify the database, I would also need some hints about which site/cron is responsible for the load.
I would really appreciate any help on this as this issue is making our sites really slow...
Thanks
You can identify costly queries (and the databases they are associated with) using the DMVs, e.g. from this TechNet article:
SELECT TOP 50
[Average CPU used] = total_worker_time / qs.execution_count,
[Total CPU used] = total_worker_time,
[Execution count] = qs.execution_count,
[Individual Query] = SUBSTRING (qt.text, (qs.statement_start_offset/2)+1,
((CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END -
qs.statement_start_offset)/2) + 1)
,[Parent Query] = qt.text
,DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Average CPU used] DESC;
These will tell you about the heavy-hitter queries, but unfortunately they won't pinpoint a database that has a very high volume of small queries, each using a little CPU individually but a lot in aggregate. You can find those with this query from Glenn Berry's DMV queries:
WITH DB_CPU_Stats
AS
(SELECT DatabaseID, DB_Name(DatabaseID) AS [DatabaseName],
SUM(total_worker_time) AS [CPU_Time_Ms]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY (SELECT CONVERT(int, value) AS [DatabaseID]
FROM sys.dm_exec_plan_attributes(qs.plan_handle)
WHERE attribute = N'dbid') AS F_DB
GROUP BY DatabaseID)
SELECT ROW_NUMBER() OVER(ORDER BY [CPU_Time_Ms] DESC) AS [row_num],
DatabaseName, [CPU_Time_Ms],
CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms])
OVER() * 100.0 AS DECIMAL(5, 2)) AS [CPUPercent]
FROM DB_CPU_Stats
WHERE DatabaseID > 4 -- system databases
AND DatabaseID <> 32767 -- ResourceDB
ORDER BY row_num OPTION (RECOMPILE);
Neither of these queries identifies the application that ran them, and the DMVs used don't store that information (you would have to catch the queries in the act and note the application name in sys.dm_exec_sessions, or review a trace).
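For completeness, "catching them in the act" looks roughly like this; a hedged sketch that joins the requests and sessions DMVs so you can see the application name and database of whatever is running right now:
-- Currently executing requests, with the application that issued them.
SELECT r.session_id, DB_NAME(r.database_id) AS DatabaseName,
s.program_name, s.login_name, s.host_name,
r.cpu_time, r.total_elapsed_time, t.text AS SQLText
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.is_user_process = 1;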
Of course, you can automate this work with a variety of third-party performance tools on the market (disclaimer: I work for one of them, SQL Sentry, which produces Performance Advisor; it does all of the above, including keeping track of high-cost queries, which database they ran in, and which application called them).
Alternatively, you can use Activity Monitor to view the health of your servers.
From there you can single out long-running queries and any locks (row locks, table locks, and so on).
http://www.mssqltips.com/sqlservertip/1917/performance-analysis-using-sql-server-2008-activity-monitor-tool/

Database running at 100%

My database server is suddenly using 100% of its CPU, and it doesn't seem like I have any more traffic on my sites. What can I do from inside SQL Server Management Studio to find where the increased load is coming from?
A typical cause of high processor usage is unoptimized queries, from stored procedures or otherwise. I would look at which stored procedures run frequently and make sure that they execute efficiently and are properly indexed.
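For the indexing side, the missing-index DMVs are one place to start; a hedged sketch (treat the output as candidates to evaluate, not index definitions to apply blindly):
-- Index suggestions SQL Server has recorded since the last restart.
SELECT TOP 25 DB_NAME(mid.database_id) AS DatabaseName,
mid.statement AS TableName,
mid.equality_columns, mid.inequality_columns, mid.included_columns,
migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats migs ON mig.index_group_handle = migs.group_handle
ORDER BY migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact DESC;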
Some examples of this and resources:
http://msmvps.com/blogs/omar/archive/2006/08/15/100-cpu-100-io-a-near-death-experience-for-sql-server-2005-and-us.aspx
http://www.aneef.net/2009/04/11/100-cpu-sql-server-2005-servers-crashes/
An article on Performance tuning:
http://207.46.16.252/en-us/magazine/2007.10.sqlcpu.aspx
Although I mainly use this query to quickly check for blocking (with low overhead), you can also use it to see how many processes you have running, along with each process's CPU time, reads, writes, and logical reads:
;with Blockers AS
( SELECT
r.session_id AS spid
,r.blocking_session_id AS BlockingSPID
,LEFT(OBJECT_NAME(st.objectid, st.dbid),50) AS ShortObjectName
,LEFT(DB_NAME(r.database_id),50) AS DatabaseName
,s.program_name
,s.login_name
,r.cpu_time,r.reads,r.writes,r.logical_reads
,OBJECT_NAME(st.objectid, st.dbid) AS ObjectName
,SUBSTRING(st.text, (r.statement_start_offset/2)+1,( (CASE r.statement_end_offset
WHEN -1 THEN DATALENGTH(st.text)
ELSE r.statement_end_offset
END - r.statement_start_offset
)/2
) + 1
) AS SQLText
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text (sql_handle) st
--WHERE r.session_id > 50
)
SELECT Blockers.* FROM Blockers WHERE spid != @@SPID
Try checking the amount of free space you have in your files. It may well be that SQL Server is busy increasing the size of your mdf/ldf files.
If you haven't got much free space, set the files to grow by a fixed size rather than a percentage.
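A quick way to check is FILEPROPERTY; a minimal sketch, run in the database in question:
-- Size and free space of each file in the current database (MB).
-- size is in 8 KB pages, so divide by 128 to get MB.
SELECT name,
size / 128 AS SizeMB,
size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS FreeMB
FROM sys.database_files;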
Take a look at SQL Server Activity Monitor, specifically the CPU column. Maybe you have one or more queries running wild.

CPU utilization by database?

Is it possible to get a breakdown of CPU utilization by database?
I'm ideally looking for a Task Manager type interface for SQL server, but instead of looking at the CPU utilization of each PID (like taskmgr) or each SPID (like spwho2k5), I want to view the total CPU utilization of each database. Assume a single SQL instance.
I realize that tools could be written to collect this data and report on it, but I'm wondering if there is any tool that lets me see a live view of which databases are contributing most to the sqlservr.exe CPU load.
Sort of. Check this query out:
SELECT total_worker_time/execution_count AS AvgCPU
, total_worker_time AS TotalCPU
, total_elapsed_time/execution_count AS AvgDuration
, total_elapsed_time AS TotalDuration
, (total_logical_reads+total_physical_reads)/execution_count AS AvgReads
, (total_logical_reads+total_physical_reads) AS TotalReads
, execution_count
, SUBSTRING(st.TEXT, (qs.statement_start_offset/2)+1
, ((CASE qs.statement_end_offset WHEN -1 THEN datalength(st.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS txt
, query_plan
FROM sys.dm_exec_query_stats AS qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) AS st
cross apply sys.dm_exec_query_plan (qs.plan_handle) AS qp
ORDER BY 1 DESC
This will get you the queries in the plan cache in order of how much CPU they've used up. You can run this periodically, like in a SQL Agent job, and insert the results into a table to make sure the data persists beyond reboots.
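Persisting the snapshots could look roughly like this; a hedged sketch, where dbo.QueryStatsHistory is a hypothetical table name:
-- Hypothetical history table; create once.
CREATE TABLE dbo.QueryStatsHistory (
    capture_time DATETIME NOT NULL DEFAULT GETDATE(),
    total_worker_time BIGINT,
    execution_count BIGINT,
    query_text NVARCHAR(MAX)
);

-- Scheduled (e.g. SQL Agent) snapshot of the heaviest CPU consumers.
INSERT INTO dbo.QueryStatsHistory (total_worker_time, execution_count, query_text)
SELECT TOP 50 qs.total_worker_time, qs.execution_count, st.[text]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;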
When you read the results, you'll probably realize why we can't correlate that data directly back to an individual database. First, a single query can also hide its true database parent by doing tricks like this:
USE msdb
DECLARE @StringToExecute VARCHAR(1000)
SET @StringToExecute = 'SELECT * FROM AdventureWorks.dbo.ErrorLog'
EXEC(@StringToExecute)
The query would be executed in msdb, but it would pull results from AdventureWorks. Where should we assign the CPU consumption?
It gets worse when you:
Join between multiple databases
Run a transaction in multiple databases, and the locking effort spans multiple databases
Run SQL Agent jobs in MSDB that "work" in MSDB, but back up individual databases
It goes on and on. That's why it makes sense to performance tune at the query level instead of the database level.
In SQL Server 2008 R2, Microsoft introduced performance management and application management features that let us package a single database into a distributable, deployable DAC pack, and they're promising features to make it easier to manage the performance of individual databases and their applications. It still doesn't do what you're looking for, though.
For more of those, check out the T-SQL repository at Toad World's SQL Server wiki (formerly at SQLServerPedia).
Updated on 1/29 to include total numbers instead of just averages.
SQL Server (starting with 2000) will install performance counters (viewable from Performance Monitor or Perfmon).
One of the counter categories (from a SQL Server 2005 install) is:
- SQLServer:Databases
with one instance for each database. The counters available, however, do not include a CPU % utilization counter or anything similar, although there are some rate counters you could use for a rough estimate. For example, if you have two databases and the measured rates are 20 transactions/sec on database A and 80 transactions/sec on database B, then you would know that A contributes roughly 20% of the total CPU and B contributes the other 80%.
There are some flaws here, as this assumes all the work being done is CPU-bound, which of course it isn't with databases. But I believe it's a start.
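Those per-database counters are also queryable from T-SQL through sys.dm_os_performance_counters. Keep in mind that Transactions/sec is a cumulative counter: take two samples N seconds apart and divide the difference by N to get a rate. A hedged sketch of a single sample:
-- One sample of the cumulative per-database transaction counter.
SELECT instance_name AS DatabaseName, cntr_value AS TransactionsSinceStart
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE N'%:Databases%'
  AND counter_name = N'Transactions/sec'
  AND instance_name <> N'_Total'
ORDER BY cntr_value DESC;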
Here's a query that will show the actual database causing high load. It relies on the query cache, which might get flushed frequently in low-memory scenarios (making the query less useful).
select dbs.name, cacheobjtype, total_cpu_time, total_execution_count from
(select top 10
sum(qs.total_worker_time) as total_cpu_time,
sum(qs.execution_count) as total_execution_count,
count(*) as number_of_statements,
qs.plan_handle
from
sys.dm_exec_query_stats qs
group by qs.plan_handle
order by sum(qs.total_worker_time) desc
) a
inner join
(SELECT plan_handle, pvt.dbid, cacheobjtype
FROM (
SELECT plan_handle, epa.attribute, epa.value, cacheobjtype
FROM sys.dm_exec_cached_plans
OUTER APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
/* WHERE cacheobjtype = 'Compiled Plan' AND objtype = 'adhoc' */) AS ecpa
PIVOT (MAX(ecpa.value) FOR ecpa.attribute IN ([dbid], [sql_handle])) AS pvt
) b on a.plan_handle = b.plan_handle
inner join sys.databases dbs on dbid = dbs.database_id
I think the answer to your question is no.
The issue is that one activity on a machine can cause load on multiple databases. If I have a process that is reading from a config DB, logging to a logging DB, and moving transactions in and out of various DBs based on type, how do I partition the CPU usage?
You could divide CPU utilization by the transaction load, but that is again a rough metric that may mislead you. How would you divide transaction log shipping from one DB to another, for instance? Is the CPU load in the reading or the writing?
You're better off looking at the transaction rate for a machine and the CPU load it causes. You could also profile stored procedures and see if any of them are taking an inordinate amount of time; however, this won't get you the answer you want.
With all of the above in mind:
Starting with SQL Server 2012 (maybe 2008?), there is a database_id column in sys.dm_exec_sessions.
That makes it easy to calculate CPU per database for the currently connected sessions. Once a session disconnects, its results are gone.
select session_id, cpu_time, program_name, login_name, database_id
from sys.dm_exec_sessions
where session_id > 50;
select sum(cpu_time)/1000 as cpu_seconds, database_id
from sys.dm_exec_sessions
group by database_id
order by cpu_seconds desc;
Take a look at SQL Sentry. It does all you need and more.
Have you looked at SQL Profiler?
Take the standard "T-SQL" or "Stored Procedure" template and tweak the fields to group by the database ID (I think you have to use the number; you don't get the database name, but it's easy to look up with exec sp_databases).
Run this for a while and you'll get total CPU counts / disk I/O / waits, etc. That can give you the proportion of CPU used by each database.
If you log the PerfMon counters to a SQL database at the same time, and do the same with the SQL Profiler trace (log to database), you may be able to correlate the two.
Even so, it should give you enough of a clue as to which DB is worth looking at in more detail. Then do the same again with just that database ID and look for the most expensive SQL / stored procedures.
Please check this query:
SELECT
DB_NAME(st.dbid) AS DatabaseName
,OBJECT_SCHEMA_NAME(st.objectid,dbid) AS SchemaName
,cp.objtype AS ObjectType
,OBJECT_NAME(st.objectid,dbid) AS Objects
,MAX(cp.usecounts) AS Total_Execution_count
,SUM(qs.total_worker_time) AS Total_CPU_Time
,SUM(qs.total_worker_time) / (max(cp.usecounts) * 1.0) AS Avg_CPU_Time
FROM sys.dm_exec_cached_plans cp
INNER JOIN sys.dm_exec_query_stats qs
ON cp.plan_handle = qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE DB_NAME(st.dbid) IS NOT NULL
GROUP BY DB_NAME(st.dbid),OBJECT_SCHEMA_NAME(objectid,st.dbid),cp.objtype,OBJECT_NAME(objectid,st.dbid)
ORDER BY sum(qs.total_worker_time) desc
